Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k) |
---|---|---|
12,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
Step1: Total Least Squares
GOAL
Step2: Classical Least Squares
Step3: Total Least Squares
Step4: Now plot and compare the two solutions | Python Code:
%matplotlib inline
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
End of explanation
# npoints uniformly randomly distributed points in the interval [0,3]
npnts =100
x = np.random.uniform(0.,3.,npnts)
# set y = mx + b plus random noise of size err
slope = 2.
intercept = 1.
err = .5
y = slope*x + intercept
y = np.random.normal(loc=y,scale=err)
# add some random noise to x variable as well
x = np.random.normal(loc=x,scale=err)
# And plot out the data
plt.figure()
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Data')
plt.grid()
plt.show()
Explanation: Total Least Squares
GOAL: Demonstrate the use of the SVD to calculate total least squares regression and compare it to the classical least squares problem that assumes only errors in y.
Random data:
We start by constructing a random data set that approximates a straight line but has random errors in both x and y coordinates
End of explanation
# Vandermonde matrix
A = np.array([ np.ones(x.shape), x]).T
# solve Ac = y using the QR decomposition via scipy
c_ls,res,rank,s = la.lstsq(A,y)
print 'Best fit Linear Least Squares:'
print ' slope={}'.format(c_ls[1])
print ' intercept={}'.format(c_ls[0])
Explanation: Classical Least Squares:
We first calculate the best-fit straight line assuming all the error is in the y variable, using a QR decomposition of the Vandermonde matrix [ 1 x ]
End of explanation
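# Quick cross-check (an added aside, not in the original notebook):
# np.polyfit with degree 1 fits the same line and returns its
# coefficients highest power first, i.e. [slope, intercept].
coeffs = np.polyfit(x, y, 1)
coeffs
Explanation: As an aside, we can sanity-check the least squares fit with numpy's polyfit: coeffs[0] should match the slope and coeffs[1] the intercept found above (note that c_ls is ordered [intercept, slope]). This small cell is an added illustration rather than part of the original notebook.
End of explanation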
# Data matrix
X = np.array([ x , y]).T
X_mean = np.mean(X,0)
print 'Mean of data matrix=',X_mean
# de-mean the data matrix
X -= X_mean
# now calculate the SVD of the de-meaned data matrix
U,S,VT = la.svd(X,full_matrices=False)
V = VT.T
print 'Singular values=', S
print 'First Right singular vector V=', V[:,0]
Explanation: Total Least Squares:
We now use the SVD to decompose the demeaned data matrix into its principal components
End of explanation
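# Added aside (not in the original notebook): the total least squares
# line passes through the data mean with direction V[:, 0], so its
# slope and intercept follow directly from that singular vector.
slope_tls = V[1, 0] / V[0, 0]
intercept_tls = X_mean[1] - slope_tls * X_mean[0]
print('Total least squares: slope={}, intercept={}'.format(slope_tls, intercept_tls))
Explanation: As an added aside, the slope and intercept of the total least squares line can be read off the first right singular vector V[:, 0] and the data mean X_mean computed above. This is a small illustration that assumes those variables from the previous cell; it is not part of the original notebook.
End of explanation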
# dummy variables
t_ls = np.linspace(0,x.max())
t_svd = 2*(t_ls - np.mean(t_ls))
# make figure
plt.figure()
# plot data
plt.scatter(x,y)
# plot the least squares solution
plt.plot(t_ls,c_ls[0]+t_ls*c_ls[1],'r-',label='Least Squares')
# plot the total least Squares solution
# plot the mean
plt.plot(X_mean[0],X_mean[1],'go')
# calculate a line through the mean with the first principal component as a basis
L_tls = X_mean + np.outer(t_svd,V[:,0])
plt.plot(L_tls[:,0],L_tls[:,1],'c-',label='Total Least Squares')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Comparison Least Squares vs Total Least Squares')
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: Now plot and compare the two solutions
End of explanation |
12,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rossmann
Data preparation / Feature engineering
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them here. Then you should untar them in the directory to which PATH is pointing below.
For completeness, the implementation used to put them together is included below.
Step1: We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
Step2: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
Step3: Join weather/state names.
Step4: In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data
Step5: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add these fields to every table that has a date field.
Step6: The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
Step7: Now we can outer join all of our data into a single dataframe. Recall that in outer joins every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside
Step8: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary signal value that doesn't otherwise appear in the data.
Step9: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across dataframe values.
Step10: We'll replace some erroneous / outlying data.
Step11: We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
Step12: Same process for Promo dates. You may need to install the isoweek package first.
Step13: Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.
Step14: We'll be applying this to a subset of columns
Step15: Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call get_elapsed('SchoolHoliday', 'After')
Step16: We'll do this for two more fields.
Step17: We're going to set the active index to Date.
Step18: Then set null values from elapsed field calculations to 0.
Step19: Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (sort_index()) and counting the number of events of interest (sum()) defined in columns in the following week (rolling()), grouped by Store (groupby()). We do the same in the opposite direction.
Step20: Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
Step21: Now we'll merge these values onto the df.
Step22: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
Step23: The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
Step24: We'll back this up as well. | Python Code:
PATH=Config().data_path()/Path('rossmann/')
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(PATH/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
Explanation: Rossmann
Data preparation / Feature engineering
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them here. Then you should untar them in the directory to which PATH is pointing below.
For completeness, the implementation used to put them together is included below.
End of explanation
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
Explanation: We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
End of explanation
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
Explanation: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
End of explanation
weather = join_df(weather, state_names, "file", "StateName")
Explanation: Join weather/state names.
End of explanation
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
Explanation: In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use .loc[rows, cols] to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list googletrend.State=='NI' and selecting "State".
End of explanation
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
if drop: df.drop(fldname, axis=1, inplace=True)
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
Explanation: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add these fields to every table that has a date field.
End of explanation
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
Explanation: The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
End of explanation
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
Explanation: Now we can outer join all of our data into a single dataframe. Recall that in outer joins every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside: Why not just do an inner join?
If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
End of explanation
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
Explanation: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary signal value that doesn't otherwise appear in the data.
End of explanation
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
Explanation: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across dataframe values.
End of explanation
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
Explanation: We'll replace some erroneous / outlying data.
End of explanation
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
Explanation: We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
End of explanation
# If needed, uncomment:
# ! pip install isoweek
from isoweek import Week
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_pickle(PATH/'joined')
joined_test.to_pickle(PATH/'joined_test')
Explanation: Same process for Promo dates. You may need to install the isoweek package first.
End of explanation
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
Explanation: Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.
We'll define a function get_elapsed for cumulative counting across a sorted dataframe. Given a particular field fld to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
End of explanation
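# Added illustration (not in the original notebook): a toy run of
# get_elapsed on a single store with one school holiday on day 3.
# get_elapsed reads the module-level df; the real df used below is
# rebuilt in the next cell, so this toy frame is discarded afterwards.
df = pd.DataFrame({'Store': [1, 1, 1, 1, 1],
                   'Date': pd.date_range('2015-01-01', periods=5),
                   'SchoolHoliday': [0, 0, 1, 0, 0]})
df = df.sort_values(['Store', 'Date'])
get_elapsed('SchoolHoliday', 'After')
# AfterSchoolHoliday is NaN before the first holiday is seen,
# then 0, 1, 2 days since it.
df
Explanation: A small toy example, added for clarity, showing what get_elapsed produces: the number of days elapsed since the monitored field was last seen, with missing values before its first occurrence. The real dataframe is constructed in the next cell.
End of explanation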
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
#df = train[columns]
df = train[columns].append(test[columns])
Explanation: We'll be applying this to a subset of columns:
End of explanation
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
Explanation: Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call get_elapsed('SchoolHoliday', 'After'):
Walking the dataframe in order of store and date, this will, for each row:
* add to the dataframe the number of days since a School Holiday was last seen
* reset the counter whenever a new store is encountered
* if we sort in the other direction, count the days until the next holiday instead.
End of explanation
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
Explanation: We'll do this for two more fields.
End of explanation
df = df.set_index("Date")
Explanation: We're going to set the active index to Date.
End of explanation
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0).astype(int)
Explanation: Then set null values from elapsed field calculations to 0.
End of explanation
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
Explanation: Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (sort_index()) and counting the number of events of interest (sum()) defined in columns in the following week (rolling()), grouped by Store (groupby()). We do the same in the opposite direction.
End of explanation
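# Added illustration (not in the original notebook): the same
# groupby + rolling pattern on a tiny frame, here with a window of
# 3 rows (3 days for daily data) instead of 7.
toy = pd.DataFrame({'Store': [1, 1, 1, 2, 2],
                    'Promo': [1, 0, 1, 1, 1]},
                   index=pd.date_range('2015-01-01', periods=5, name='Date'))
toy.groupby('Store').rolling(3, min_periods=1).sum()
Explanation: A small added example of the window-function pattern above: for each store, the rolling sum counts how many Promo days fell in a trailing 3-row window. The real bwd and fwd tables are built the same way with a 7-row window.
End of explanation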
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
Explanation: Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
End of explanation
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
Explanation: Now we'll merge these values onto the df.
End of explanation
df.to_pickle(PATH/'df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = pd.read_pickle(PATH/'joined')
joined_test = pd.read_pickle(PATH/f'joined_test')
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
Explanation: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
End of explanation
joined = joined[joined.Sales!=0]
Explanation: The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
End of explanation
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_pickle(PATH/'train_clean')
joined_test.to_pickle(PATH/'test_clean')
Explanation: We'll back this up as well.
End of explanation |
12,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing Maximum Likelihood Estimates (MLEs) in IPython
By Delaney Granizo-Mackenzie and Andrei Kirilenko.
This notebook was developed in collaboration with Prof. Andrei Kirilenko as part of the Masters of Finance curriculum at MIT Sloan.
Part of the Quantopian Lecture Series
Step1: Normal Distribution
We'll start by sampling some data from a normal distribution.
Step2: Now we'll define functions that given our data, will compute the MLE for the $\mu$ and $\sigma$ parameters of the normal distribution.
Recall that
$$\hat\mu = \frac{1}{T}\sum_{t=1}^{T} x_t$$
$$\hat\sigma = \sqrt{\frac{1}{T}\sum_{t=1}^{T}{(x_t - \hat\mu)^2}}$$
Step3: Now let's try our functions out on our sample data and see how they compare to the built-in np.mean and np.std
Step4: Now let's estimate both parameters at once with scipy's built in fit() function.
Step5: Now let's plot the distribution PDF along with the data to see how well it fits. We can do that by accessing the pdf provided in scipy.stats.norm.pdf.
Step6: Exponential Distribution
Let's do the same thing, but for the exponential distribution. We'll start by sampling some data.
Step7: numpy defines the exponential distribution as
$$\frac{1}{\lambda}e^{-\frac{x}{\lambda}}$$
So we need to invert the MLE from the lecture notes. There it is
$$\hat\lambda = \frac{T}{\sum_{t=1}^{T} x_t}$$
Here it's just the reciprocal, so
$$\hat\lambda = \frac{\sum_{t=1}^{T} x_t}{T}$$
Step8: MLE for Asset Returns
Now we'll fetch some real returns and try to fit a normal distribution to them using MLE.
Step9: Let's use scipy's fit function to get the $\mu$ and $\sigma$ MLEs.
Step10: Of course, this fit is meaningless unless we've tested that they obey a normal distribution first. We can test this using the Jarque-Bera normality test. The Jarque-Bera test will reject the hypothesis of a normal distribution if the p-value is under a chosen cutoff. | Python Code:
import math
import matplotlib.pyplot as plt
import numpy as np
import scipy
import scipy.stats
Explanation: Performing Maximum Likelihood Estimates (MLEs) in IPython
By Delaney Granizo-Mackenzie and Andrei Kirilenko.
This notebook was developed in collaboration with Prof. Andrei Kirilenko as part of the Masters of Finance curriculum at MIT Sloan.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
In this tutorial notebook we'll do the following things:
1. Compute the MLE for a normal distribution.
2. Compute the MLE for an exponential distribution.
3. Fit a normal distribution to asset returns using MLE.
First we need to import some libraries
End of explanation
TRUE_MEAN = 40
TRUE_STD = 10
X = np.random.normal(TRUE_MEAN, TRUE_STD, 1000)
Explanation: Normal Distribution
We'll start by sampling some data from a normal distribution.
End of explanation
def normal_mu_MLE(X):
# Get the number of observations
T = len(X)
# Sum the observations
s = sum(X)
return 1.0/T * s
def normal_sigma_MLE(X):
T = len(X)
# Get the mu MLE
mu = normal_mu_MLE(X)
# Sum the square of the differences
s = sum( np.power((X - mu), 2) )
# Compute sigma^2
sigma_squared = 1.0/T * s
return math.sqrt(sigma_squared)
Explanation: Now we'll define functions that given our data, will compute the MLE for the $\mu$ and $\sigma$ parameters of the normal distribution.
Recall that
$$\hat\mu = \frac{1}{T}\sum_{t=1}^{T} x_t$$
$$\hat\sigma = \sqrt{\frac{1}{T}\sum_{t=1}^{T}{(x_t - \hat\mu)^2}}$$
End of explanation
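# Added sanity check (not part of the original lecture): maximize the
# likelihood numerically by minimizing the negative log-likelihood and
# compare the result with the closed-form estimates above.
from scipy.optimize import minimize

def normal_neg_log_likelihood(params, data):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return 0.5 * len(data) * np.log(2 * np.pi * sigma ** 2) \
        + np.sum((data - mu) ** 2) / (2 * sigma ** 2)

# rough starting guess; the optimum should land near (40, 10)
result = minimize(normal_neg_log_likelihood, x0=[np.median(X), 1.0],
                  args=(X,), method='Nelder-Mead')
print(result.x)
Explanation: As an added sanity check, the same estimates can be recovered by minimizing the negative log-likelihood numerically with scipy.optimize; the closed-form formulas above are the analytical solution of this same optimization. This cell is an illustration added here, not part of the original lecture.
End of explanation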
print "Mean Estimation"
print normal_mu_MLE(X)
print np.mean(X)
print "Standard Deviation Estimation"
print normal_sigma_MLE(X)
print np.std(X)
Explanation: Now let's try our functions out on our sample data and see how they compare to the built-in np.mean and np.std
End of explanation
mu, std = scipy.stats.norm.fit(X)
print "mu estimate: " + str(mu)
print "std estimate: " + str(std)
Explanation: Now let's estimate both parameters at once with scipy's built in fit() function.
End of explanation
pdf = scipy.stats.norm.pdf
# We would like to plot our data along an x-axis ranging from 0-80 with 80 intervals
# (increments of 1)
x = np.linspace(0, 80, 80)
plt.hist(X, bins=x, normed='true')
plt.plot(pdf(x, loc=mu, scale=std))
plt.xlabel('Value')
plt.ylabel('Observed Frequency')
plt.legend(['Fitted Distribution PDF', 'Observed Data', ]);
Explanation: Now let's plot the distribution PDF along with the data to see how well it fits. We can do that by accessing the pdf provided in scipy.stats.norm.pdf.
End of explanation
TRUE_LAMBDA = 5
X = np.random.exponential(TRUE_LAMBDA, 1000)
Explanation: Exponential Distribution
Let's do the same thing, but for the exponential distribution. We'll start by sampling some data.
End of explanation
def exp_lamda_MLE(X):
T = len(X)
s = sum(X)
return s/T
print "lambda estimate: " + str(exp_lamda_MLE(X))
# The scipy version of the exponential distribution has a location parameter
# that can skew the distribution. We ignore this by fixing the location
# parameter to 0 with floc=0
_, l = scipy.stats.expon.fit(X, floc=0)
pdf = scipy.stats.expon.pdf
x = range(0, 80)
plt.hist(X, bins=x, normed='true')
plt.plot(pdf(x, scale=l))
plt.xlabel('Value')
plt.ylabel('Observed Frequency')
plt.legend(['Fitted Distribution PDF', 'Observed Data', ]);
Explanation: numpy defines the exponential distribution as
$$\frac{1}{\lambda}e^{-\frac{x}{\lambda}}$$
So we need to invert the MLE from the lecture notes. There it is
$$\hat\lambda = \frac{T}{\sum_{t=1}^{T} x_t}$$
Here it's just the reciprocal, so
$$\hat\lambda = \frac{\sum_{t=1}^{T} x_t}{T}$$
End of explanation
prices = get_pricing('TSLA', fields='price', start_date='2014-01-01', end_date='2015-01-01')
# This will give us the number of dollars returned each day
absolute_returns = np.diff(prices)
# This will give us the percentage return over the last day's value
# the [:-1] notation gives us all but the last item in the array
# We do this because there are no returns on the final price in the array.
returns = absolute_returns/prices[:-1]
Explanation: MLE for Asset Returns
Now we'll fetch some real returns and try to fit a normal distribution to them using MLE.
End of explanation
mu, std = scipy.stats.norm.fit(returns)
pdf = scipy.stats.norm.pdf
x = np.linspace(-1,1, num=100)
h = plt.hist(returns, bins=x, normed='true')
l = plt.plot(x, pdf(x, loc=mu, scale=std))
Explanation: Let's use scipy's fit function to get the $\mu$ and $\sigma$ MLEs.
End of explanation
from statsmodels.stats.stattools import jarque_bera
jarque_bera(returns)
jarque_bera(np.random.normal(0, 1, 100))
Explanation: Of course, this fit is meaningless unless we've tested that they obey a normal distribution first. We can test this using the Jarque-Bera normality test. The Jarque-Bera test will reject the hypothesis of a normal distribution if the p-value is under a chosen cutoff.
End of explanation |
12,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Concatenation
Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use pd.concat and pass in a list of DataFrames to concatenate together
Step2: Example DataFrames
Step3: Merging
The merge function allows you to merge DataFrames together using a similar logic as merging SQL Tables together. For example
Step4: Or to show a more complicated example
Step5: Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame. | Python Code:
import pandas as pd
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index = [0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index = [4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index = [8, 9, 10, 11])
df1
df2
df3
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Merging, Joining, and Concatenating
There are 3 main ways of combining DataFrames together: Merging, Joining and Concatenating. In this lecture we will discuss these 3 methods with examples.
Example DataFrames
End of explanation
pd.concat([df1, df2, df3])
pd.concat([df1, df2, df3],
axis = 1)
Explanation: Concatenation
Concatenation basically glues together DataFrames. Keep in mind that dimensions should match along the axis you are concatenating on. You can use pd.concat and pass in a list of DataFrames to concatenate together:
End of explanation
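# pd.concat can also label where each block of rows came from by passing
# keys, which builds a hierarchical index (a small added example).
pd.concat([df1, df2, df3],
          keys = ['df1', 'df2', 'df3'])
Explanation: A small added example: passing keys to pd.concat tags each concatenated piece in a hierarchical index, which makes it easy to trace rows back to their source DataFrame.
End of explanation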
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
Explanation: Example DataFrames
End of explanation
pd.merge(left, right,
how = 'inner',
on = 'key')
Explanation: Merging
The merge function allows you to merge DataFrames together using a similar logic as merging SQL Tables together. For example:
End of explanation
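# merge is also available as a DataFrame method; this added example is
# equivalent to the pd.merge call above.
left.merge(right,
           how = 'inner',
           on = 'key')
Explanation: An added note: the same inner merge can be written as a DataFrame method call, left.merge(right, ...), which behaves identically to pd.merge(left, right, ...).
End of explanation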
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right,
on=['key1', 'key2'])
pd.merge(left, right,
how = 'outer',
on = ['key1', 'key2'])
pd.merge(left, right,
how = 'right',
on = ['key1', 'key2'])
pd.merge(left, right,
how = 'left',
on = ['key1', 'key2'])
Explanation: Or to show a more complicated example:
End of explanation
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index = ['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index = ['K0', 'K2', 'K3'])
left.join(right)
left.join(right,
how = 'outer')
Explanation: Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame.
End of explanation |
12,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with BigQuery tables and the Genomics API
Case Study
Step1: Next we're going to need to authenticate using the service account on the Datalab host.
Step2: Now we can create a client for the Genomics API. NOTE that in order to use the Genomics API, you need to have enabled it for your GCP project.
Step3: We're also going to want to work with BigQuery, so we'll need the bigquery module. We will also be using the pandas and time modules.
Step4: The ISB-CGC group has assembled metadata as well as molecular data from the CCLE project into an open-access BigQuery dataset called isb-cgc
Step5: OK, so let's get the complete list of cell-lines with this particular mutation
Step6: Now we want to know, from the DataFile_info table, which cell lines have both DNA-seq and RNA-seq data imported into Google Genomics. (To find these samples, we will look for samples that have non-null readgroupset IDs from "DNA" and "RNA" pipelines.)
Step7: Now let's find out which samples are in both of these lists
Step8: Now we're going to take a closer look at the reads from each of these samples. First, we'll need to be able to get the readgroupset IDs for each sample from the BigQuery table. To do this, we'll define a parameterized function
Step9: Let's take a look at how this will work
Step10: Ok, so we see that for this sample, we have two readgroupset IDs, one based on DNA-seq and one based on RNA-seq. This is what we expect, based on how we chose this list of samples.
Now we'll define a function we can re-use that calls the GA4GH API reads.search method to find all reads that overlap the V600 mutation position. Note that we will query all of the readgroupsets that we get for each sample at the same time by passing in a list of readGroupSetIds. Once we have the reads, we'll organize them into a dictionary based on the local context centered on the mutation hotspot.
Step11: Here we define the position (0-based) of the BRAF V600 mutation
Step12: OK, now we can loop over all of the samples we found earlier | Python Code:
!pip install --upgrade google-api-python-client==1.4.2
Explanation: Working with BigQuery tables and the Genomics API
Case Study: BRAF V600 mutations in CCLE cell-lines
In this notebook we'll show you how you might combine information available in BigQuery tables with sequence-reads that have been imported into Google Genomics. We'll be using the open-access CCLE data for this example.
You'll need to make sure that your project has the necessary APIs enabled, so take a look at the Getting started with Google Genomics page, and be sure to also have a look at this Getting started with the Genomics API tutorial notebook available on github.
We'll be using the Google Python API client so we'll need to install that first using the pip package manager.
NOTE that Datalab is currently using an older version of the oauth2client (1.4.12) and as a result we need to install an older version of the google-api-python-client that supports it.
End of explanation
from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http)
Explanation: Next we're going to need to authenticate using the service account on the Datalab host.
End of explanation
from apiclient import discovery
ggSvc = discovery.build ( 'genomics', 'v1', http=http )
Explanation: Now we can create a client for the Genomics API. NOTE that in order to use the Genomics API, you need to have enabled it for your GCP project.
End of explanation
import gcp.bigquery as bq
import pandas as pd
import time
Explanation: We're also going to want to work with BigQuery, so we'll need the bigquery module. We will also be using the pandas and time modules.
End of explanation
%%sql
SELECT CCLE_name, Hugo_Symbol, Protein_Change, Genome_Change
FROM [isb-cgc:ccle_201602_alpha.Mutation_calls]
WHERE ( Hugo_Symbol="BRAF" AND Protein_Change CONTAINS "p.V600" )
ORDER BY Cell_line_primary_name
LIMIT 5
Explanation: The ISB-CGC group has assembled metadata as well as molecular data from the CCLE project into an open-access BigQuery dataset called isb-cgc:ccle_201602_alpha. In this notebook we will make use of two tables in this dataset: Mutation_calls and DataFile_info. You can explore the entire dataset using the BigQuery web UI.
Let's say that we're interested in cell-lines with BRAF V600 mutations, and in particular we want to see if there is evidence in both the DNA-seq and the RNA-seq data for these mutations. Let's start by making sure that there are some cell-lines with these mutations in our dataset:
End of explanation
%%sql --module get_mutated_samples
SELECT CCLE_name
FROM [isb-cgc:ccle_201602_alpha.Mutation_calls]
WHERE ( Hugo_Symbol="BRAF" AND Protein_Change CONTAINS "p.V600" )
ORDER BY Cell_line_primary_name
r = bq.Query(get_mutated_samples).results()
list1 = r.to_dataframe()
print " Found %d samples with a BRAF V600 mutation. " % len(list1)
Explanation: OK, so let's get the complete list of cell-lines with this particular mutation:
End of explanation
%%sql --module get_samples_with_data
SELECT
a.CCLE_name AS CCLE_name
FROM (
SELECT
CCLE_name
FROM
[isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE
( Pipeline CONTAINS "DNA"
AND GG_readgroupset_id<>"NULL" ) ) a
JOIN (
SELECT
CCLE_name
FROM
[isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE
( Pipeline CONTAINS "RNA"
AND GG_readgroupset_id<>"NULL" ) ) b
ON
a.CCLE_name = b.CCLE_name
r = bq.Query(get_samples_with_data).results()
list2 = r.to_dataframe()
print " Found %d samples with both DNA-seq and RNA-seq reads. " % len(list2)
Explanation: Now we want to know, from the DataFile_info table, which cell lines have both DNA-seq and RNA-seq data imported into Google Genomics. (To find these samples, we will look for samples that have non-null readgroupset IDs from "DNA" and "RNA" pipelines.)
End of explanation
list3 = pd.merge ( list1, list2, how='inner', on=['CCLE_name'] )
print " Found %d mutated samples with DNA-seq and RNA-seq data. " % len(list3)
Explanation: Now let's find out which samples are in both of these lists:
End of explanation
%%sql --module get_readgroupsetid
SELECT Pipeline, GG_readgroupset_id
FROM [isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE CCLE_name=$c AND GG_readgroupset_id<>"NULL"
Explanation: Now we're going to take a closer look at the reads from each of these samples. First, we'll need to be able to get the readgroupset IDs for each sample from the BigQuery table. To do this, we'll define a parameterized function:
End of explanation
aName = list3['CCLE_name'][0]
print aName
ids = bq.Query(get_readgroupsetid,c=aName).to_dataframe()
print ids
Explanation: Let's take a look at how this will work:
End of explanation
chr = "7"
pos = 140453135
width = 11
rgsList = ids['GG_readgroupset_id'].tolist()
def getReads ( rgsList, pos, width):
payload = { "readGroupSetIds": rgsList,
"referenceName": chr,
"start": pos-(width/2),
"end": pos+(width/2),
"pageSize": 2048
}
r = ggSvc.reads().search(body=payload).execute()
context = {}
for a in r['alignments']:
rgsid = a['readGroupSetId']
seq = a['alignedSequence']
seqStartPos = int ( a['alignment']['position']['position'] )
relPos = pos - (width/2) - seqStartPos
if ( relPos >=0 and relPos+width<len(seq) ):
# print rgsid, seq[relPos:relPos+width]
c = seq[relPos:relPos+width]
if (c not in context):
context[c] = {}
context[c][rgsid] = 1
else:
if (rgsid not in context[c]):
context[c][rgsid] = 1
else:
context[c][rgsid] += 1
for c in context:
numReads = 0
for a in context[c]:
numReads += context[c][a]
# write it out only if we have information from two or more readgroupsets
if ( numReads>3 or len(context[c])>1 ):
print " --> ", c, context[c]
Explanation: Ok, so we see that for this sample, we have two readgroupset IDs, one based on DNA-seq and one based on RNA-seq. This is what we expect, based on how we chose this list of samples.
Now we'll define a function we can re-use that calls the GA4GH API reads.search method to find all reads that overlap the V600 mutation position. Note that we will query all of the readgroupsets that we get for each sample at the same time by passing in a list of readGroupSetIds. Once we have the reads, we'll organize them into a dictionary based on the local context centered on the mutation hotspot.
End of explanation
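# Added sketch (an aside, not in the original notebook): reads.search
# responses are paginated, so a region with many overlapping reads may
# not fit in a single response. This helper follows the usual Google API
# paging convention (pageToken in the request, nextPageToken in the
# response).
def getAllAlignments ( rgsList, chrom, start, end ):
    payload = { "readGroupSetIds": rgsList,
                "referenceName": chrom,
                "start": start,
                "end": end,
                "pageSize": 2048
              }
    alignments = []
    while True:
        r = ggSvc.reads().search(body=payload).execute()
        alignments.extend(r.get('alignments', []))
        token = r.get('nextPageToken')
        if not token:
            return alignments
        payload['pageToken'] = token
Explanation: An added aside: the getReads function above issues a single reads.search request, which is fine for this narrow window, but the API pages its results. This sketch shows the standard pageToken / nextPageToken loop for collecting all alignments; it is only an illustration, and the cells below continue to use getReads.
End of explanation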
chr = "7"
pos = 140453135
width = 11
Explanation: Here we define the position (0-based) of the BRAF V600 mutation:
End of explanation
for aName in list3['CCLE_name']:
print " "
print " "
print aName
r = bq.Query(get_readgroupsetid,c=aName).to_dataframe()
for i in range(r.shape[0]):
print " ", r['Pipeline'][i], r['GG_readgroupset_id'][i]
rgsList = r['GG_readgroupset_id'].tolist()
getReads ( rgsList, pos, width)
Explanation: OK, now we can loop over all of the samples we found earlier:
End of explanation |
12,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Data
In order to expedite the testing process, I added a debug flag to the pipeline method in our pipeline file which outputs the fixed and moving images prior to registration
Step1: Visualization Function
Step2: Registration Functions | Python Code:
import sys
sys.path.append('../code/functions')
sys.path.append('/home/simpleElastix/build/SimpleITK-build/Wrapping/Python')
import pickle
import cv2
import time
import SimpleITK as sitk
import numpy as np
import matplotlib.pyplot as plt
import nibabel as nib
from cluster import Cluster
from tiffIO import loadTiff, unzipChannels
from connectLib import adaptiveThreshold
from hyperReg import get3DRigid, parseTransformFile, apply3DRigid, apply3DRigidVolume
movingImg = pickle.load(open('../code/functions/movingDebug.io', 'r'))
fixedImg = pickle.load(open('../code/functions/fixedDebug.io', 'r'))
movingLmk = pickle.load(open('../code/functions/movingLDebug.io', 'r'))
fixedLmk = pickle.load(open('../code/functions/fixedLDebug.io', 'r'))
testFixedLmk = fixedLmk[10:15]
testMovingLmk = movingLmk[10:15]
testFixedImg = fixedImg[10:15]
testMovingImg = movingImg[10:15]
plt.figure()
plt.imshow(movingImg[10], cmap='gray')
plt.title('Moving Image Post Connected Components')
plt.show()
plt.figure()
plt.imshow(fixedImg[10], cmap='gray')
plt.title('Fixed Image Post Connected Components')
plt.show()
plt.figure()
plt.imshow(movingLmk[10], cmap='gray')
plt.title('Moving Image Landmarks')
plt.show()
plt.figure()
plt.imshow(fixedLmk[10], cmap='gray')
plt.title('Fixed Image Landmarks')
plt.show()
Explanation: Load Data
In order to expedite the testing process, I added a debug flag to the pipeline method in our pipeline file which outputs the fixed and moving images prior to registration
End of explanation
def toDiff(imgA, imgB):
ret = np.empty((imgA.shape[0], imgA.shape[1], 3), dtype=np.uint8)
for y in range(imgA.shape[0]):
for x in range(imgA.shape[1]):
if imgA[y][x] and not imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 0
ret[y][x][2] = 0
elif not imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 0
ret[y][x][1] = 255
ret[y][x][2] = 0
elif imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 0
ret[y][x][2] = 255
else:
ret[y][x][0] = 255
ret[y][x][1] = 255
ret[y][x][2] = 255
return ret
def visDiff(sliceA, sliceB):
disp = toDiff(sliceA, sliceB)
return disp
def visVolDiff(volumeA, volumeB):
for i in range(volumeA.shape[0]):
plt.figure()
        plt.title('Disparity at z=' + str(i))
plt.imshow(visDiff(volumeA[i], volumeB[i]))
plt.show()
Explanation: Visualization Function
End of explanation
def preproc(img):
binImg = adaptiveThreshold(img, 5, 5)
binImg*=255
    outImg = np.stack([cv2.erode(sub, None, iterations=1) for sub in binImg])
    outImg = np.stack([cv2.dilate(sub, None, iterations=2) for sub in outImg])
return outImg
def register(landmarks1, landmarks2, additionalParams):
SimpleElastix = sitk.SimpleElastix()
SimpleElastix.LogToConsoleOn()
img1 = nib.Nifti1Image(preproc(landmarks1), np.eye(4))
nib.save(img1, 'fixed.nii')
img2 = nib.Nifti1Image(preproc(landmarks2), np.eye(4))
nib.save(img2, 'moving.nii')
SimpleElastix.SetFixedImage(sitk.ReadImage('fixed.nii'))
SimpleElastix.SetMovingImage(sitk.ReadImage('moving.nii'))
pMap = sitk.GetDefaultParameterMap('rigid')
for elem in additionalParams:
pMap[elem[0]]=[elem[1]]
SimpleElastix.SetParameterMap(pMap)
SimpleElastix.Execute()
t = SimpleElastix.GetTransformParameterMap()
sitk.WriteParameterFile(t[0], 'transform.txt')
imgFilter = sitk.SimpleTransformix()
imgFilter.SetTransformParameterMap(t[0])
imgFilter.PrintParameterMap()
return imgFilter
start = time.time()
myTransform = register(fixedLmk, movingLmk,
[['MaximumNumberOfSamplingAttempts','200'],
['MaximumNumberOfIterations', '700'],
['Metric', 'AdvancedMeanSquares']]
)
end = time.time()
print end - start
params = parseTransformFile('transform.txt')
regVolume = apply3DRigidVolume(fixedLmk,
params['matrix'],
params['originZ'],
params['originY'],
params['originX'])
visVolDiff(preproc(fixedLmk[140:145]), preproc(movingLmk[140:145]))
visVolDiff(preproc(regVolume[140:145]), preproc(movingLmk[140:145]))
Explanation: Registration Functions
End of explanation |
12,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excercises Electric Machinery Fundamentals
Chapter 1
Problem 1-16
Step1: Description
The core shown in Figure P1-2
Step2: Sketch the voltage present at the terminals of the coil.
SOLUTION
By Lenz’ Law, an increasing flux in the direction shown on the core will produce a voltage that tends to oppose the increase. This voltage will be the same polarity as the direction shown on the core, so it will be positive.
The induced voltage in the core is given by the equation
Step3: Lets pretty-print the result
Step4: The resulting voltage is plotted below | Python Code:
%pylab notebook
Explanation: Excercises Electric Machinery Fundamentals
Chapter 1
Problem 1-16
End of explanation
N = 500
dphi = array([0.010, -0.020, 0.010, 0.010]) # [Wb]
dt = array([2e-3 , 3e-3, 2e-3, 1e-3]) # [s]
Explanation: Description
The core shown in Figure P1-2:
<img src="figs/FigC_P1-2.jpg" width="60%">
has the flux $\phi$ shown in Figure P1-12:
<img src="figs/FigC_P1-12.jpg" width="70%">
End of explanation
e_ind = N * dphi/dt
e_ind
Explanation: Sketch the voltage present at the terminals of the coil.
SOLUTION
By Lenz’ Law, an increasing flux in the direction shown on the core will produce a voltage that tends to oppose the increase. This voltage will be the same polarity as the direction shown on the core, so it will be positive.
The induced voltage in the core is given by the equation:
$$e_\text{ind} = N \frac{d\phi}{dt}$$
so the voltage in the windings will be:
End of explanation
t = 0 # time [s], whose initial value is set here
# print the table head
print('''
|--------------+----------|
| Time | e_ind |
|--------------+----------|''')
# We use a simple for-loop to print a row per result:
for i in range(4):
print('| {:.0f} < t < {:.0f} ms | {:>5.2f} kV |'.format(
*( t*1000, # start of time inteval [ms]
(t+dt[i])*1000, # end time of the inteval [ms]
e_ind[i]/1000 # value of e_ind [V]
)))
t = t + dt[i]
print('|=========================|')
Explanation: Lets pretty-print the result:
End of explanation
T = [0, dt[0], dt[0], dt[0]+dt[1], dt[0]+dt[1],
dt[0]+dt[1]+dt[2], dt[0]+dt[1]+dt[2], dt[0]+dt[1]+dt[2]+dt[3]]
e = [e_ind[0], e_ind[0], e_ind[1], e_ind[1],
e_ind[2], e_ind[2], e_ind[3], e_ind[3]]
plot(array(T)*1000, array(e)/1000)
title('Plot of Voltage vs Time')
xlabel('Time (ms)')
ylabel('Induced Voltage (kV)')
axis([0,8,-4,6]) # set the axis range
grid()
Explanation: The resulting voltage is plotted below:
End of explanation |
12,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using grlc from python
Being written in python itself, it is easy to use grlc from python. Here we show how to use grlc to run a SPARQL query which is stored on github.
First we start by importing grlc (and a couple other libraries we use for working with the data).
Step1: We can load the grlc specification for a github repository. For example, the grlc test query repository https
Step2: We can have a look at one item of the returned specification.
Step3: This specification item will execute the following SPARQL query
Step4: We can execute this query by calling the API point in our generated API
Step5: We can use dispatch_query functions to load data from a specific query (description in this case). For this example, we are loading data in text/csv format.
NOTE
Step6: Now we just transform these results to a pandas dataframe.
Step7: Grlc via http
Another alternative is to load data via a running grlc server (in this case grlc.io). | Python Code:
import json
import pandas as pd
from io import StringIO
import grlc
import grlc.utils as utils
import grlc.swagger as swagger
Explanation: Using grlc from python
Being written in python itself, it is easy to use grlc from python. Here we show how to use grlc to run a SPARQL query which is stored on github.
First we start by importing grlc (and a couple other libraries we use for working with the data).
End of explanation
user = 'CLARIAH'
repo = 'grlc-queries'
spec, warning = swagger.build_spec(user, repo)
Explanation: We can load the grlc specification for a github repository. For example, the grlc test query repository https://github.com/CLARIAH/grlc-queries.
End of explanation
print(json.dumps(spec[0], indent=2))
Explanation: We can have a look at one item of the returned specification.
End of explanation
print(spec[0]['query'])
Explanation: This specification item will execute the following SPARQL query:
End of explanation
print(spec[0]['call_name'])
Explanation: We can execute this query by calling the API point in our generated API:
End of explanation
query_name = 'description'
acceptHeader = 'text/csv'
data, code, headers = utils.dispatch_query(user, repo, query_name, acceptHeader=acceptHeader)
Explanation: We can use dispatch_query functions to load data from a specific query (description in this case). For this example, we are loading data in text/csv format.
NOTE: description query loads data from dbpedia.org -- the endpoint is specified in the repository endpoint.txt file.
End of explanation
data_grlc = pd.read_csv(StringIO(data))
data_grlc.head(10)
Explanation: Now we just transform these results to a pandas dataframe.
End of explanation
import requests
headers = {'accept': 'text/csv'}
resp = requests.get("http://grlc.io/api-git/CLARIAH/grlc-queries/description", headers=headers)
data_requests = pd.read_csv(StringIO(resp.text))
data_requests.head(10)
Explanation: Grlc via http
Another alternative is to load data via a running grlc server (in this case grlc.io).
End of explanation |
12,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sections
Introduction to Sequential Backward Selection
Further Reading
Iris Example
Wine Data Example
Gridsearch Example 1
Gridsearch Example 2
Introduction to Sequential Backward Selection
In order to avoid the Curse of Dimensionality, pattern classification is often accompanied by Dimensionality Reduction, which also has the nice side-effect of increasing the computational performance. Common techniques are projection-based, such as Principal Component Analysis (PCA) (unsupervised) or Linear Discriminant Analysis (LDA) (supervised). It shall be noted though that regularization in classification models such as Logistic Regression, Support Vector Machines, or Neural Networks is to be preferred over using dimensionality reduction to avoid overfitting. However, dimensionality reduction is still a useful data compression technique to increase computational efficiency and reduce data storage requirements.
An alternative to a projection-based dimensionality reduction approach is the so-called Feature Selection, and here, we will take a look at some of the established algorithms to tackle this combinatorial search problem
Step1: <br>
<br>
Wine Data Example
[back to top]
Step2: <br>
<br>
Gridsearch Example 1
[back to top]
Selecting the number of features in a pipeline.
Step3: <br>
<br>
Gridsearch Example 2
[back to top]
Tuning the estimator used for feature selection. Note that the current implementation requires to search for the weights in both the classifier and the SBS transformer separately.
Step4: The final feature subset can then be obtained as follows | Python Code:
from mlxtend.sklearn import SBS
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
knn = KNeighborsClassifier(n_neighbors=4)
sbs = SBS(knn, k_features=2, scoring='accuracy', cv=5)
sbs.fit(X, y)
print('Indices of selected features:', sbs.indices_)
print('CV score of selected subset:', sbs.k_score_)
print('New feature subset:')
sbs.transform(X)[0:5]
Explanation: Sections
Introduction to Sequential Backward Selection
Further Reading
Iris Example
Wine Data Example
Gridsearch Example 1
Gridsearch Example 2
Introduction to Sequential Backward Selection
In order to avoid the Curse of Dimensionality, pattern classification is often accompanied by Dimensionality Reduction, which also has the nice side-effect of increasing the computational performance. Common techniques are projection-based, such as Principal Component Analysis (PCA) (unsupervised) or Linear Discriminant Analysis (LDA) (supervised). It shall be noted though that regularization in classification models such as Logistic Regression, Support Vector Machines, or Neural Networks is to be preferred over using dimensionality reduction to avoid overfitting. However, dimensionality reduction is still a useful data compression technique to increase computational efficiency and reduce data storage requirements.
An alternative to a projection-based dimensionality reduction approach is the so-called Feature Selection, and here, we will take a look at some of the established algorithms to tackle this combinatorial search problem: Sequential Backward Selection (SBS).
Let's summarize its mechanics in words:
SBS starts with the original $d$-dimensional feature set and sequentially removes features from this set until the subset reaches a desired (user-specified) size $k$ where $k < d$. In every iteration $i$, the subset $d-i$ dimensional subset is evaluated using a criterion function to determine the least informative feature to be removed.
The criterion function is typically the performance of the classifier measured via cross validation.
Let's consider the following example where we have a dataset that consists of 3 features:
Original feature set: ${x_1, x_2, x_3}$
In order to determine the least informative feature, we create 2-dimensional feature subsets and measure the performance (e.g., accuracy) of the classifier on each of those subset:
1: ${x_1, x_2}$ -> 0.96
2: ${x_1, x_3}$ -> 0.87
3: ${x_2, x_3}$ -> 0.77
Based on the accuracy measures, we would then eliminate feature $x_3$ and repeat this procedure until we reached the number of features to select. E.g., if we'd want to select 2 features, we'd stop at this point and select the feature subset ${x_1, x_2}$.
Note that this algorithm is considered "suboptimal" in contrast to an exhaustive search, which, however, is often computationally infeasible.
Further Reading
[back to top]
F. Ferri, P. Pudil, M. Hatef, and J. Kittler investigated the performance of different Sequential Selection Algorithms for Feature Selection on different scales and reported their results in
"Comparative Study of Techniques for Large Scale Feature Selection," Pattern Recognition in Practice IV, E. Gelsema and L. Kanal, eds., pp. 403-413. Elsevier Science B.V., 1994.
Choosing an "appropriate" algorithm really depends on the problem - the dataset size, the desired recognition rate, and the computational performance. Thus, I want to encourage you to take (at least) a brief look at their paper and the results they obtained from experimenting with problems of different feature space dimensions.
<br>
<br>
Iris Example
[back to top]
End of explanation
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
X = df.iloc[:, 1:].values
y = df.iloc[:, 0].values
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
scr = StandardScaler()
X_std = scr.fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=4)
# selecting features
sbs = SBS(knn, k_features=1, scoring='accuracy', cv=5)
sbs.fit(X_std, y)
# plotting performance of feature subsets
k_feat = [len(k) for k in sbs.subsets_]
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.show()
Explanation: <br>
<br>
Wine Data Example
[back to top]
End of explanation
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from mlxtend.sklearn import SBS
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
##########################
### Loading data
##########################
iris = load_iris()
X = iris.data
y = iris.target
##########################
### Setting up pipeline
##########################
knn = KNeighborsClassifier(n_neighbors=4)
sbs = SBS(estimator=knn, k_features=2, scoring='accuracy', cv=5)
pipeline = Pipeline([
('scr', StandardScaler()),
('sel', sbs),
('clf', knn)])
parameters = {'sel__k_features': [1,2,3,4]}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=1, verbose=1)
##########################
### Running GridSearch
##########################
grid_search.fit(X, y)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
Explanation: <br>
<br>
Gridsearch Example 1
[back to top]
Selecting the number of features in a pipeline.
End of explanation
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from mlxtend.sklearn import SBS
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
##########################
### Loading data
##########################
iris = load_iris()
X = iris.data
y = iris.target
##########################
### Setting up pipeline
##########################
knn = KNeighborsClassifier(n_neighbors=4)
sbs = SBS(estimator=knn, k_features=2, scoring='accuracy', cv=5)
pipeline = Pipeline([
('scr', StandardScaler()),
('sel', sbs),
('clf', knn)])
parameters = {'sel__k_features': [1, 2, 3, 4],
'sel__estimator__n_neighbors': [4, 5, 6],
'clf__n_neighbors': [4, 5, 6]}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=1, verbose=1)
##########################
### Running GridSearch
##########################
grid_search.fit(X, y)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
Explanation: <br>
<br>
Gridsearch Example 2
[back to top]
Tuning the estimator used for feature selection. Note that the current implementation requires searching for the weights in both the classifier and the SBS transformer separately.
End of explanation
print('Best feature subset:')
grid_search.best_estimator_.steps[1][1].indices_
Explanation: The final feature subset can then be obtained as follows:
End of explanation |
12,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load IPython support for working with MPI tasks
Step1: Let's also load the plotting and numerical libraries so we have them ready for visualization later on.
Step2: Now, we load the MPI libraries into the engine namespaces, and do a simple printing of their MPI rank information to verify that all nodes are operational and they match our cluster's real capacity.
Here, we are making use of IPython's special %%px cell magic, which marks the entire cell for parallel execution. This means that the code below will not run in this notebook's kernel, but instead will be sent to all engines for execution there. In this way, IPython makes it very natural to control your entire cluster from within the notebook environment
Step4: We write a utility that reorders a list according to the mpi ranks of the engines, since all gather operations will return data in engine id order, not in MPI rank order. We'll need this later on when we want to reassemble in IPython data structures coming from all the engines
Step5: We now define a local (to this notebook) plotting function that fetches data from the engines' global namespace. Once it has retrieved the current state of the relevant variables, it produces and returns a figure | Python Code:
from ipyparallel import Client, error
cluster = Client()
view = cluster[:]
Explanation: Load IPython support for working with MPI tasks
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Let's also load the plotting and numerical libraries so we have them ready for visualization later on.
End of explanation
%%px
# MPI initialization, library imports and sanity checks on all engines
from mpi4py import MPI
# Load data publication API so engines can send data to notebook client
from ipykernel.datapub import publish_data
import numpy as np
import time
mpi = MPI.COMM_WORLD
bcast = mpi.bcast
barrier = mpi.barrier
rank = mpi.rank
size = mpi.size
print("MPI rank: %i/%i" % (mpi.rank,mpi.size))
Explanation: Now, we load the MPI libraries into the engine namespaces, and do a simple printing of their MPI rank information to verify that all nodes are operational and they match our cluster's real capacity.
Here, we are making use of IPython's special %%px cell magic, which marks the entire cell for parallel execution. This means that the code below will not run in this notebook's kernel, but instead will be sent to all engines for execution there. In this way, IPython makes it very natural to control your entire cluster from within the notebook environment:
End of explanation
ranks = view['rank']
engine_mpi = np.argsort(ranks)
def mpi_order(seq):
Return elements of a sequence ordered by MPI rank.
The input sequence is assumed to be ordered by engine ID.
return [seq[x] for x in engine_mpi]
Explanation: We write a utility that reorders a list according to the mpi ranks of the engines, since all gather operations will return data in engine id order, not in MPI rank order. We'll need this later on when we want to reassemble in IPython data structures coming from all the engines: IPython will collect the data ordered by engine ID, but our code creates data structures based on MPI rank, so we need to map from one indexing scheme to the other. This simple function does the job:
End of explanation
mpi_order(view.apply_sync(lambda : mpi.rank))
%%px --noblock
import numpy as np
from ipykernel.datapub import publish_data
N = 1000
stride = N // size
x = np.linspace(0, 10, N)[stride * rank:stride*(rank+1)]
publish_data({'x': x})
for i in range(10):
y = np.sin(x * i)
publish_data({'y': y, 'i': i})
time.sleep(1)
ar = _
def stitch_data(ar, key):
data = mpi_order(ar.data)
parts = [ d[key] for d in data ]
return np.concatenate(parts)
from IPython.display import clear_output  # needed for the live-updating plot below

ar.wait(1)
while not ar.ready():
clear_output(wait=True)
x = stitch_data(ar, 'x')
y = stitch_data(ar, 'y')
plt.plot(x, y)
plt.show()
ar.wait(1)
ar.get()
Explanation: We now define a local (to this notebook) plotting function that fetches data from the engines' global namespace. Once it has retrieved the current state of the relevant variables, it produces and returns a figure:
End of explanation |
12,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates how BioThings Explorer can be used to answer the following query
Step1: Step 2
Step2: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (PRDX1) to an intermediate node (a gene or protein) to an ending node (a chemical compound). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the compound label) is None, and specifically focus on paths with specific mechanistic predicates decreasesActivityOf and targetedBy.
Filter for drugs that targets genes which decrease the activity of PRDX1
Step3: Step 3 | Python Code:
from biothings_explorer.hint import Hint
ht = Hint()
prdx1 = ht.query("PRDX1")['Gene'][0]
prdx1
Explanation: Introduction
This notebook demonstrates how BioThings Explorer can be used to answer the following query:
"Finding Marketed Drugs that Might Treat an Unknown Syndrome by Perturbing the Disease Mechanism Pathway"
This query corresponds to Tidbit 4 which was formulated as a demonstration of the NCATS Translator program.
Background of BTE: BioThings Explorer can answer two classes of queries -- "EXPLAIN" and "PREDICT". EXPLAIN queries are described in EXPLAIN_demo.ipynb, and PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe PREDICT queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in these slides.
Background of TIDBIT 04:
A five-year-old patient was brought to the emergency room with recurrent polymicrobial lung infections, only 29% small airway function and was unresponsive to antibiotics.
The patient’s medical records included a genetics report from age 1, which showed a 1p34.1 chromosomal duplication encompassing 1.9 Mb, including the PRDX1 gene, which encodes Peroxiredoxin 1. The gene has been linked to airway disease in both rats and humans, and is known to act as an agonist of toll-like receptor 4 (TLR4), a pro-inflammatory receptor. In addition, two patients at another clinic were found to have 1p34.1 duplications:
One patient with a duplication including PRDX1 died with similar phenotypes
One patient with a duplication that did NOT include PRDX1 showed no airway disease phenotype
While recurrent lung infections are typically treated with antibiotics, this patient was unresponsive to standard treatments. The patient’s earlier genetics report and data from other patients with similar duplications gave the physician evidence that PRDX1 may play a role in the disease, but no treatments directly related to the gene were known. With this information in mind, the physician asked a researcher familiar with Translator to try to find possible treatments for this patient.
How Might Translator Help?
The patient’s duplication of the 1p34.1 region of chromosome 1 gave Translator researchers a good place to start. Since PRDX1 is an agonist of TLR4, the duplication of the PRDX1 gene likely causes overexpression of PRDX1, which could lead to overactivity of both of the gene products. The researcher decided to try to find drugs that could be used to reduce the activity of those two proteins. An exhaustive search of chemical databases and PubMed to find safe drug options could take days to weeks.
For a known genetic mutation, can Translator be used to quickly find existing modulators to compensate for the dysfunctional gene product?
Step 1: Find representation of "PRDX1" in BTE
In this step, BioThings Explorer translates our query string "PRDX1" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown.
Search terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., "lupus"), ChemicalSubstance (e.g., "acetaminophen"), Gene (e.g., "CDK2"), BiologicalProcess (e.g., "T cell differentiation"), and Pathway (e.g., "Citric acid cycle").
End of explanation
from biothings_explorer.user_query_dispatcher import FindConnection
fc = FindConnection(input_obj=prdx1, output_obj='ChemicalSubstance', intermediate_nodes=['Gene'])
fc.connect(verbose=True)
df = fc.display_table_view()
Explanation: Step 2: Find drugs that are associated with genes which are associated with PRDX1
In this section, we find all paths in the knowledge graph that connect PRDX1 to any entity that is a chemical compound. To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
The parameters for FindConnection are described below:
End of explanation
dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == "decreasesActivityOf" and pred2 == "targetedBy"')
dfFilt
dfFilt.node1_id.unique()
dfFilt.node1_name.unique()
Explanation: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (PRDX1) to an intermediate node (a gene or protein) to an ending node (a chemical compound). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the compound label) is None, and specifically focus on paths with specific mechanistic predicates decreasesActivityOf and targetedBy.
Filter for drugs that target genes which decrease the activity of PRDX1
End of explanation
import requests
# query pfocr to see if PRDX1 and TNF appear in the same pathway figure
doc = requests.get('https://pending.biothings.io/pfocr/query?q=associatedWith.genes:5052 AND associatedWith.genes:7124').json()
doc
Explanation: Step 3: Evaluating Paths based on published pathway figures
Let's see if PRDX1 (entrez:5052) is in the same pathway as TNF (entrez:7124) using our newly created API PFOCR
End of explanation |
12,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.2 - Deviner la langue d'un texte (correction)
Calcul d'un score pour détecter la langue d'un texte. Ce notebook aborde les dictionnaires, les fichiers et les graphiques (correction).
Step1: Q1
Step2: Les problèmes d'encoding sont toujours délicats. Un encoding définit la façon dont une séquence d'octets représente une chaîne de caractères. Il y a 256 valeurs possible d'octets et la langue chinoise contient beaucoup plus de caractères. Il faut donc utiliser plusieurs octets pour représenter un caractère un peu comme le Morse qui n'utilise que deux symboles pour représenter 26 lettres. Quand on ne connaît pas l'encoding, on utilise un module chardet et la fonction detect.
Step3: On teste la fonction avec un petit fichier qu'on crée pour l'occasion.
Step4: Visiblement, ce n'est pas toujours évident mais suffisant pour ce qu'on veut en faire à savoir des statistiques. Les problèmes avec la langue latine devraient être statistiquement négligeables pour ce que nous souhaitons en faire.
Step5: Q2
Step6: Le module collections propose un objet Counter qui implémente ce calcul.
Step7: Comme pour beaucoup de fonctions faisant partie des extensions du langage Python, elle est plus rapide que la version que nous pourrions implémenter.
Step8: Q3
Step9: Q4
Step10: On regarde les valeurs obtenus pour les articles du monde et du new-york time.
Step11: Ca marche plutôt bien.
Q5
Step12: Les textes anglais et français apparaissent bien séparés. On place les autres.
Step13: On ajoute le nom du texte sur le graphique. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: 1A.2 - Guessing the language of a text (solution)
Computing a score to detect the language of a text. This notebook covers dictionaries, files and plots (solution).
End of explanation
def read_file(filename):
with open(filename, "r", encoding="utf-8") as f:
return f.read()
Explanation: Q1: read a file
End of explanation
import chardet
def read_file_enc(filename):
with open(filename, "rb") as f:
b = f.read()
res = chardet.detect(b)
enc = res["encoding"]
return res, b.decode(enc)
Explanation: Encoding issues are always tricky. An encoding defines how a sequence of bytes represents a string of characters. A byte can only take 256 values, and the Chinese language has far more characters than that, so several bytes must be used to represent a single character - a bit like Morse code, which uses only two symbols to represent 26 letters. When the encoding is unknown, we can use the chardet module and its detect function.
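As a small illustration of what chardet.detect returns (a sketch; the exact guess and confidence depend on the chardet version and the length of the input):
```python
import chardet

raw = "déjà vu, été, garçon".encode("latin-1")
guess = chardet.detect(raw)   # a dict with the guessed 'encoding' and a 'confidence' score
print(guess)
if guess["encoding"]:         # the guess can be None on very short inputs
    print(raw.decode(guess["encoding"]))
```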
End of explanation
with open("texte.txt", "w", encoding="utf-8") as f:
f.write("un corbeau sur un arbre perché tenait en son bec un fromage")
lu = read_file("texte.txt")
lu
enc, lu = read_file_enc('texte.txt')
lu
Explanation: We test the function on a small file created for the occasion.
End of explanation
enc
Explanation: Clearly it is not always obvious, but it is good enough for what we want to do with it, namely statistics. The issues with Latin-based languages should be statistically negligible for our purposes.
End of explanation
def histogram(texte):
d = {}
for c in texte:
d[c] = d.get(c, 0) + 1
return d
histogram(lu)
Explanation: Q2: histogram
End of explanation
from collections import Counter
def histogram2(texte):
return Counter(texte)
histogram2(lu)
Explanation: The collections module provides a Counter object that implements this computation.
End of explanation
%timeit histogram(lu)
%timeit histogram2(lu)
Explanation: As with many functions that are part of Python's extension modules, it is faster than the version we could implement ourselves.
End of explanation
def normalize(hist):
s = sum(hist.values())
if s == 0:
return {}
else:
return {k: v/s for k,v in hist.items()}
normalize(histogram2(lu))
Explanation: Q3: normalization
End of explanation
from pyensae.datasource import download_data
texts = download_data("articles.zip")
h = {text: normalize(histogram2(read_file_enc(text)[1])).get('h', 0) for text in texts}
h
Explanation: Q4: computation
We try it with the frequency of the letter H. (data: articles.zip)
End of explanation
import matplotlib.pyplot as plt
lemonde = list(sorted(v for k,v in h.items() if "lemonde" in k))
ny = list(sorted(v for k,v in h.items() if "times" in k))
plt.plot(lemonde, label="lemonde")
plt.plot(ny, label="times")
plt.legend()
Explanation: We look at the values obtained for the articles from Le Monde and the New York Times.
End of explanation
scores = {}
for text in texts:
norm = normalize(histogram2(read_file_enc(text)[1]))
h, w = norm.get('h', 0), norm.get('w', 0)
scores[text] = (h, w)
lem = [v for k,v in scores.items() if "lemonde" in k]
ny = [v for k,v in scores.items() if "times" in k]
plt.scatter(x=[_[0] for _ in lem], y=[_[1] for _ in lem], label="lemonde")
plt.scatter(x=[_[0] for _ in ny], y=[_[1] for _ in ny], label="times")
plt.xlabel("h")
plt.ylabel("w")
plt.legend()
Explanation: That works rather well.
Q5: score
We do it again with two letters and draw a scatter plot for the articles from the same newspapers.
End of explanation
other = [v for k,v in scores.items() if "times" not in k and "monde" not in k]
plt.scatter(x=[_[0] for _ in lem], y=[_[1] for _ in lem], label="lemonde")
plt.scatter(x=[_[0] for _ in ny], y=[_[1] for _ in ny], label="times")
plt.scatter(x=[_[0] for _ in other], y=[_[1] for _ in other], label="autres", s=15)
plt.xlabel("h")
plt.ylabel("w")
plt.legend()
Explanation: The English and French texts appear well separated. Now we place the others.
End of explanation
text_others = [k for k,v in scores.items() if "times" not in k and "monde" not in k]
fig, ax = plt.subplots(1, 1, figsize=(10,10))
ax.scatter(x=[_[0] for _ in lem], y=[_[1] for _ in lem], label="lemonde")
ax.scatter(x=[_[0] for _ in ny], y=[_[1] for _ in ny], label="times")
ax.scatter(x=[_[0] for _ in other], y=[_[1] for _ in other], label="autres", s=15)
for (x,y), t in zip(other, text_others):
ax.text(x, y, t, ha='right', rotation=-30, wrap=True)
ax.set_xlabel("h")
ax.set_ylabel("w")
ax.legend()
Explanation: We add the name of each text on the plot.
End of explanation |
12,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways
Step1: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (or the entities for which we are trying to model their relationships) is extremely important.
Let's try to answer the question
Step2: Then, let's iterate over the stations DataFrame, and add in the node attributes.
Step3: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time
Step4: Exercise
Flex your memory muscles
Step5: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
Step6: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
Step7: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
Step8: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk | Python Code:
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir('datasets/'):
print('Unzip the divvy_2013.zip file in the datasets folder.')
stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], index_col='id', encoding='utf-8')
stations
trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'],
index_col=['trip_id'])
trips = trips.sort_index()  # DataFrame.sort() was removed from pandas; sort_index() keeps the trips ordered by trip_id
trips
Explanation: Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.
Storing the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large, or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes.
The Divvy bike sharing dataset is one such example of a network data set that has been stored as such.
Loading Node Lists and Adjacency Lists
Let's use the Divvy bike sharing data set as a starting point. The Divvy data set is comprised of the following data:
Stations and metadata (like a node list with attributes saved)
Trips and metadata (like an edge list with attributes saved)
The README.txt file in the Divvy directory should help orient you around the data.
End of explanation
G = nx.DiGraph()
Explanation: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (the entities whose relationships we are trying to model) extremely important.
Let's try to answer the question: "What are the most popular trip paths?" In this case, the bike station is a reasonable "unit of consideration", so we will use the bike stations as the nodes.
To start, let's initialize a directed graph G.
End of explanation
for r, d in stations.iterrows(): # call the pandas DataFrame row-by-row iterator
G.add_node(r, attr_dict=d.to_dict())
Explanation: Then, let's iterate over the stations DataFrame, and add in the node attributes.
End of explanation
# # Run the following code at your own risk :)
# for r, d in trips.iterrows():
# start = d['from_station_id']
# end = d['to_station_id']
# if (start, end) not in G.edges():
# G.add_edge(start, end, count=1)
# else:
# G.edge[start][end]['count'] += 1
for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):
G.add_edge(start, stop, count=len(d))
Explanation: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time :-). Alternatively, I would suggest doing a pandas groupby.
End of explanation
from collections import Counter
# Count the number of edges that have x trips recorded on them.
trip_count_distr = Counter([d['count'] for _, _, d in G.edges(data=True)])
# Then plot the distribution of these
plt.scatter(list(trip_count_distr.keys()), list(trip_count_distr.values()), alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
Explanation: Exercise
Flex your memory muscles: can you make a scatter plot of the distribution of the number of edges that have a certain number of trips?
End of explanation
# Filter the edges to just those with more than 100 trips.
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
if d['count'] < 100:
G_filtered.remove_edge(u,v)
len(G_filtered.edges())
Explanation: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
End of explanation
nx.draw(G_filtered)
Explanation: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
End of explanation
nodes = sorted(G_filtered.nodes(), key=lambda x:len(G.neighbors(x)))
edges = G_filtered.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/divvy.png', dpi=300)
Explanation: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
End of explanation
nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
G.nodes(data=True)
Explanation: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk:
nx.write_gpickle(G, handle)
To load from disk:
G = nx.read_gpickle(handle)
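Going the other way - back to tables - the same graph can also be dumped as a node list and an edge list with pandas, which keeps the data readable outside NetworkX. A minimal sketch (the file names are just examples):
```python
import pandas as pd

# node list: one row per station, attributes as columns
nodes_df = pd.DataFrame([dict(id=n, **d) for n, d in G.nodes(data=True)])
nodes_df.to_csv('datasets/divvy_2013/stations_nodes.csv', index=False)

# edge list: one row per (start, stop) pair with the trip count
edges_df = pd.DataFrame([dict(start=u, stop=v, **d) for u, v, d in G.edges(data=True)])
edges_df.to_csv('datasets/divvy_2013/trips_edges.csv', index=False)
```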
End of explanation |
12,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD [1] [2] [3]_.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution | Python Code:
# Authors: Alex Rockhill <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD [1] [2] [3]_.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
End of explanation
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif',
preload=True)
events = mne.find_events(raw)
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=False,
exclude=raw.info['bads'])
raw.set_eeg_reference(projection=True).apply_proj()
Explanation: Load sample subject data
End of explanation
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
Explanation: Plot the raw data and CSD-transformed raw data:
End of explanation
raw.plot_psd()
raw_csd.plot_psd()
Explanation: Also look at the power spectral densities:
End of explanation
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
Explanation: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
End of explanation
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
Explanation: First let's look at how CSD affects scalp topography:
End of explanation
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution:
End of explanation |
12,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Images
Step1: Get the Header-Data-Units (hdu's) from a fits file. This particular one only has 1.
Step2: This 4x3x2 matrix can actually also be generated from scratch using basic numpy
Step3: We now want to take a plane from this cube, and plot this in a heatmap or contour map. We are now faced deciding how columns and rows translate to X and Y on a plot. Math, Astronomy, Geography and Image Processing groups all differ a bit how they prefer to see this, so numpy comes with a number of function to help you with this
Step4: Note that for a small 4x3 matrix this image has been artificially made smooth by interpolating in imshow(); however you can already see that the integer coordinates are at the center of a cell
Step5: if you want to print the array values on the terminal with 0 at the bottom left, use the np.flipup() function
Step6: Arrays in numpy are in C-order (row-major) by default, but you can actually change it to Fortran-order (column-major)
Step8: CASA
CASA is a python package used in radio astronomy (ALMA, VLA etc.), but is peculiar in the sense that it caters to astronomers with a fortran background, or mathematicians with a DATA(x,y) expectation
Step9: Arrray Transposing
Step10: Inner and Outer loop order of execution
Set up a (random) square matrix and vector. Multiply the matrix with a vector and measure the performance difference if you order the loops differently.
Step11: Matrix Inversion | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# import pyfits as fits # deprecated
from astropy.io import fits
Explanation: Images: rows, columns and all that jazzy mess....
Two dimensional data arrays are normally stored in column-major or row-major order. In row-major order adjacent elements in a row are stored next to each other in memory. In column-major order adjacent elements in a column are stored next to each other in memory. See also https://en.wikipedia.org/wiki/Matrix_representation
For the usual mathematical matrix notation $A_{ij}$, where $i$ is the row, and $j$ the column, we have in the case of a $3x4$ matrix:
$$
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\
a_{21} & a_{22} & a_{23} & a_{24}\
a_{31} & a_{32} & a_{33} & a_{34}\
\end{bmatrix}
$$
Classic languages such as Fortran store their arrays in so-called column-major order. FDATA(NR,NC), and indices started at 1 with the first versions.
More modern languages, such as C, store their arrays in row-major order, CDATA[NR][NC], with indices starting at 0.
col major: fdata(1,1), fdata(2,1), ... first index runs fastest
row major: cdata[0][0], cdata[0][1], ... last index runs fastest
Examples of column major are: Fortran, [FITS], MATLAB, IDL, R, Julia
Examples of row major are: C, Python, (java)
Images are often referred to in X and Y coordinates, like a mathematical system. The origin would be at (0,0) in the lower left corner. Image processing software normally puts the (0,0) origin at the top left corner, which corresponds a bit how the matrix above is printed. This, together with row-major and column-major can make it challenging to interchange data and plot them on the screen.
Add to this that for very large data, re-ordering axes can be a very expensive operation.
See also https://en.wikipedia.org/wiki/Iliffe_vector for another view on storing data in multi-dimensional arrays.
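One way to see the row-major vs column-major difference in memory is to look at the strides (bytes to skip per axis); a small sketch:
```python
import numpy as np

c = np.arange(12, dtype=np.int64).reshape(3, 4)   # C order (row-major)
f = np.asfortranarray(c)                          # same values, Fortran order
print(c.strides, f.strides)   # e.g. (32, 8) vs (8, 24): the fast-running axis differs
print(c.flags['C_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])
```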
End of explanation
hdu = fits.open('../data/cube432.fits')
print(len(hdu))
h = hdu[0].header
d = hdu[0].data
print(d.shape)
print(d)
Explanation: Get the Header-Data-Units (hdu's) from a fits file. This particular one only has 1.
End of explanation
d1 = np.zeros(2*3*4).reshape(2,3,4)
for z in range(2):
for y in range(3):
for x in range(4):
d1[z,y,x] = x + 10*y + 100*z
print(d1)
print(d1.flatten())
# are two arrays the same (or close enough?)
np.allclose(d,d1)
Explanation: This 4x3x2 matrix can actually also be generated from scratch using basic numpy:
End of explanation
p0 = d[0,:,:]
p1 = d[1,:,:]
print(np.flipud(p0))
plt.imshow(p0)
plt.colorbar()
plt.matshow(p0,origin='lower')
Explanation: We now want to take a plane from this cube, and plot this in a heatmap or contour map. We are now faced with deciding how columns and rows translate to X and Y on a plot. Math, Astronomy, Geography and Image Processing groups all differ a bit in how they prefer to see this, so numpy comes with a number of functions to help you with this:
np.reshape
np.transpose (or T)
np.flipud
np.fliplr
np.rot90
np.swapaxes
np.moveaxis
the important thing to realize is that they all give a new view of the array, which is often more efficient than moving the actual values.
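A quick way to convince yourself that these helpers do not copy the data (a small check on the plane p0 defined above):
```python
print(np.shares_memory(p0, np.flipud(p0)))   # True: flipud is just a negative-stride view
print(np.shares_memory(p0, p0.T))            # True: transpose only swaps the strides
```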
End of explanation
plt.imshow(p0,interpolation='none')
plt.colorbar()
Explanation: Note that for a small 4x3 matrix this image has been artificially made smooth by interpolating in imshow(); however you can already see that the integer coordinates are at the center of a cell: (0.0) is the center of the lower left cell. This is a little more obvious when you turn off interpolation:
End of explanation
print(np.flipud(p0))
plt.imshow(np.flipud(p0),interpolation='none')
plt.colorbar()
Explanation: if you want to print the array values on the terminal with 0 at the bottom left, use the np.flipud() function:
End of explanation
d2 = np.arange(3*4).reshape(3,4,order='C')
d3 = np.arange(3*4).reshape(3,4,order='F')
print('C\n',d2)
print('F\n',d3)
d3.transpose()
Explanation: Arrays in numpy are in C-order (row-major) by default, but you can actually change it to Fortran-order (column-major):
End of explanation
try:
import casacore.images.image as image
print("we have casacore")
im = image('../data/cube432.fits')
print(im.shape()) # -> [2, 3, 4]
print(im.datatype()) # -> 'float'
d=im.getdata()
m=im.getmask()
print(d.shape) # -> (2,3,4)
print(d[0,0,0],m[0,0,0])
[[[[ 0. 1. 2. 3.]
[ 10. 11. 12. 13.]
[ 20. 21. 22. 23.]]
[[ 100. 101. 102. 103.]
[ 110. 111. 112. 113.]
[ 120. 121. 122. 123.]]
except:
print("no casacore")
import numpy.ma as ma
a = np.arange(4)
am = ma.masked_equal(a,2)
print(a.sum(),am.sum())
print(am.data,am.mask)
Explanation: CASA
CASA is a python package used in radio astronomy (ALMA, VLA etc.), but is peculiar in the sense that it caters to astronomers with a fortran background, or mathematicians with a DATA(x,y) expectation: CASA uses column-major arrays with an index starting at 0. CASA images can also store a mask alongside the data, but the logic is the reverse from the masking used in numpy.ma: in CASA a True means a good data point, in numpy it means a bad point!
Notebooks don't work within casa (yet), but if you install casacore in your local python, the examples below should work. The kernsuite software should give you one easy option to install casacore, another way is to compile the code directly from https://github.com/casacore/casacore
Hence the example here is shown inline, and not in the notebook form yet. (note CASA currently uses python2)
```
casa
ia.open('../data/cube432.fits')
d1 = ia.getchunk()
d1.shape
(4,3,2)
d1[3,2,1]
123.0
print d1
[[[ 0. 100.]
[ 10. 110.]
[ 20. 120.]]
[[ 1. 101.]
[ 11. 111.]
[ 21. 121.]]
[[ 2. 102.]
[ 12. 112.]
[ 22. 122.]]
[[ 3. 103.]
[ 13. 113.]
[ 23. 123.]]]
p0 = d1[:,:,0]
print p0
[[ 0. 10. 20.]
[ 1. 11. 21.]
[ 2. 12. 22.]
[ 3. 13. 23.]]
print np.flipud(np.rot90(p0))
[[ 0. 1. 2. 3.]
[ 10. 11. 12. 13.]
[ 20. 21. 22. 23.]]
print np.flipud(np.rot90(p0)).flatten()
[ 0. 1. 2. 3. 10. 11. 12. 13. 20. 21. 22. 23.]
mask boolean in CASA is the opposite of the one in numpy.ma
d1m = ia.getchunk(getmask=True)
print d1[0,0,0],d1m[0,0,0]
0.0 True
or create the array from scratch
ia.fromshape(shape=[4,3,2])
p2 = ia.getchunk()
p2.shape
(4,3,2)
etc.etc.
```
casacore and casacore-python
Using just casacore, you will find the equivalent getchunk() is now called getdata() and converts to a proper numpy array without the need for np.rot90() and np.flipud(). The casacore-python version is able to work in python3 as well.
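Since the two mask conventions are opposite, a CASA-style good-pixel mask can be turned into a numpy masked array by inverting it. A small sketch, assuming d and m come from getdata()/getmask() as above and follow the convention stated earlier (True = good pixel in CASA):
```python
import numpy.ma as ma

# numpy.ma uses True = masked out (bad), so invert the CASA-style mask
dm = ma.masked_array(d, mask=~m)
print(dm.sum())
```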
End of explanation
%%time
n = 100
n1 = n
n2 = n+1
n3 = n+2
np.random.seed(123)
a = np.random.normal(size=n1*n2*n3).reshape(n1,n2,n3)
print(len(a.flatten()))
print(a[0,0,0])
a.flatten()[0]=-1
print(a[0,0,0]) # how come?
%%time
b = a.transpose()
# note B is another view of A
Explanation: Arrray Transposing
End of explanation
%%time
n = 2
m = n+1
np.random.seed(123)
a = np.random.normal(size=m*n).reshape(m,n)
x = np.random.normal(size=n)
print(x[0])
#
#a = np.arange(n*n).reshape(n,n)
#x = np.arange(n)
%%time
b = np.matmul(a,x)
print(a.shape,x.shape,b.shape)
%%time
b1 = np.zeros(m)
for i in range(m):
for j in range(n):
b1[i] = b1[i] + a[i,j]*x[j]
%%time
b2 = np.zeros(m)
for i in range(m):
ai = a[i,:]
b2[i] = np.inner(ai,x)
%%time
b3 = np.zeros(m)
for j in range(n):
for i in range(m):
b3[i] = b3[i] + a[i,j]*x[j]
if n < 3:
print('a',a,'\nx',x)
print('b',b,'\nb1',b1,'\nb2',b2,'\nb3',b3)
else:
print(n)
Explanation: Inner and Outer loop order of execution
Set up a (random) square matrix and vector. Multiply the matrix with a vector and measure the performance difference if you order the loops differently.
End of explanation
from numpy.linalg import inv
n = 2
a1 = np.random.normal(size=n*n).reshape(n,n)
%%time
ainv = inv(a1)
print(a1)
print(ainv)
i1=np.matmul(a1,ainv)
i0=np.eye(n)
print(np.allclose(i0,i1,atol=1e-10))
print(i1)
Explanation: Matrix Inversion
End of explanation |
12,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Strategies
The strategy object describes the behaviour of an agent, given its vocabulary. The main algorithms that vary among strategies are
Step1: Let's create a strategy. We will also need two vocabularies to work on (speaker and hearer). (more info on other strategy types
Step2: Now that we have a strategy, we can test the different functions. Exec
!! Vocabularies are modified, but this way you can observe progressive growth of the number of links !!
Step3: Here you can modify by hand the 2 vocabularies before re-executing the code
Step4: Approximation of the probability density of the different procedures of the strategy | Python Code:
import naminggamesal.ngstrat as ngstrat
import naminggamesal.ngvoc as ngvoc
Explanation: Strategies
The strategy object describes the behaviour of an agent, given its vocabulary. The main algorithms that vary among strategies are:
* how to choose a link (meaning-word) to enact,
* how to guess a meaning from a word
* how to update the vocabulary
End of explanation
M = 5
W = 10
voc_cfg = {
'voc_type':'matrix',
'M':M,
'W':W
}
nlink = 0
voctest_speaker = ngvoc.Vocabulary(**voc_cfg)
for i in range(0, nlink):
voctest_speaker.add(random.randint(0,M-1), random.randint(0,W-1), 0.2)
voctest_hearer=ngvoc.Vocabulary(**voc_cfg)
for i in range(0, nlink):
voctest_hearer.add(random.randint(0,M-1), random.randint(0,W-1), 0.2)
print("Speaker:")
print(voctest_speaker)
print(" ")
print("Hearer:")
print(voctest_hearer)
strat_cfg={"strat_type":"success_threshold_epirob",'vu_cfg':{'vu_type':'BLIS'},}
teststrat=ngstrat.Strategy(**strat_cfg)
teststrat
Explanation: Let's create a strategy. We will also need two vocabularies to work on (speaker and hearer). (more info on other strategy types: Design_newStrategy.ipynb)
End of explanation
memory_s=teststrat.init_memory(voctest_speaker) #Not important for the naive strategy, here it simply is {}
memory_h=teststrat.init_memory(voctest_hearer)
print("Initial vocabulary of the speaker:")
print(voctest_speaker)
print(" ")
print("Initial vocabulary of the hearer:")
print(voctest_hearer)
print(" ")
ms=teststrat.pick_m(voctest_speaker,memory_s,context=[])
print("Meaning chosen by speaker:")
print(ms)
print (" ")
w=teststrat.pick_w(voc=voctest_speaker,mem=memory_s,m=ms,context=[])
print("Word uttered by speaker:")
print(w)
print (" ")
mh=teststrat.guess_m(w,voctest_hearer,memory_h,context=[])
print("Meaning interpreted by hearer:")
print(mh)
print (" ")
if (ms==mh):
print("Success!")
bool_succ = 1
else:
bool_succ = 0
print("Failure!")
print(" ")
teststrat.update_speaker(ms,w,mh,voctest_speaker,memory_s,bool_succ)
teststrat.update_hearer(ms,w,mh,voctest_hearer,memory_h,bool_succ)
print("Updated vocabulary of the speaker:")
print(voctest_speaker)
print(" ")
print("Updated vocabulary of the hearer:")
print(voctest_hearer)
Explanation: Now that we have a strategy, we can test the different functions. Exec
!! Vocabularies are modified, but this way you can observe progressive growth of the number of links !!
End of explanation
#voctest_speaker.add(0,0,1)
#voctest_speaker.add(0,0,0)
#voctest_hearer.add(0,0,1)
#voctest_hearer.add(0,0,0)
print("Speaker:")
print(voctest_speaker)
print(" ")
print("Hearer:")
print(voctest_hearer)
Explanation: Here you can modify the two vocabularies by hand before re-executing the code:
End of explanation
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_m",iterr=500)
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_mw",iterr=500)
voctest_speaker.visual()
teststrat.visual(voc=voctest_speaker,mem=memory_s,vtype="pick_w",iterr=500)
voctest_hearer.visual()
teststrat.visual(voc=voctest_hearer,mem=memory_h,vtype="guess_m",iterr=500)
dir(teststrat)
teststrat.voc_update
Explanation: Approximation of the probability density of the different procedures of the strategy:
End of explanation |
12,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter02
1 Discrete random variable
1.1 (0-1) distribution
$P(X=k)=p^k(1-p)^{1-k}, k=0,1 \space (0<p<1)$
1.2 binomial distribution
$P(X=k)=C_n^kp^k(1-p)^{n-k}, \space k=0,1,\ldots,n \space (0<p<1)$
Marked as $X \sim B(n,p)$
1.3 Poisson distribution
$P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!} \space \lambda > 0,\space k=0,1,2,\ldots$.
Marked as $X\sim \pi(\lambda)$
If $X \sim B(n,p)$ and $n$ is big enough and $p$ is small enough, then
Step1: 2.2 exponential distribution
$$f(x)=
\begin{cases}
\lambda e^{-\lambda x}, x >0 \
0, x \le 0
\end{cases}$$
Step2: 2.3 normal distribution
$$f(x)=
\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^{2}}{2\sigma^2}}
$$
where | Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.plot([1,2], [1,1], linewidth=2,c='k')
plt.plot([1,1], [0,1],'k--', linewidth=2)
plt.plot([2,2], [0,1],'k--', linewidth=2)
plt.plot([0,1], [1,1],'k--')
plt.xticks([1,2],[r'$a$',r'$b$'])
plt.yticks([1],[r'$\frac{1}{b-a}$'])
plt.xlabel('x')
plt.ylabel(r'$f(x)$')
plt.axis([0, 2.5, 0, 2])
gca = plt.gca()
gca.spines['right'].set_color('none')
gca.spines['top'].set_color('none')
gca.xaxis.set_ticks_position('bottom')
gca.yaxis.set_ticks_position('left')
plt.show()
Explanation: Chapter02
1 Discrete random variable
1.1 (0-1) distribution
$P(X=k)=p^k(1-p)^{1-k}, k=0,1 \space (0<p<1)$
1.2 binomial distribution
$P(X=k)=C_n^kp^k(1-p)^{n-k}, \space k=0,1,\ldots,n \space (0<p<1)$
Marked as $X \sim B(n,p)$
1.3 possion Distribution
$P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!} \space \lambda > 0,\space k=0,1,2,\ldots$.
Marked as $X\sim \pi(\lambda)$
If $X \sim B(n,p)$ and $n$ is big enough and $p$ is small enough, then: $$P(X=k)=C_n^kp^k(1-p)^{n-k} \approx \frac{\lambda^k e^{-\lambda}}{k!} $$ where: $\lambda=np$
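A quick numerical check of this approximation (a sketch using scipy.stats, which is not used elsewhere in this chapter):
```python
from scipy import stats

n, p = 1000, 0.005          # n large, p small
lam = n * p
for k in range(5):
    print(k, stats.binom.pmf(k, n, p), stats.poisson.pmf(k, lam))
```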
1.4 geometric distribution
$P(X=k)=(1-p)^{k-1}p, \space k=1,2,3,\dots$
2 Continuous random variables
2.1 uniform distribution
$$f(x)=\begin{cases}
\frac{1}{b-a}, a<x<b \
0, others
\end{cases}$$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
lam = 5
x = np.linspace(0.01, 1, 1000)
f_x = lam * np.power(np.e, -1.0*lam*x)
plt.plot(x, f_x)
plt.xlabel('x')
plt.ylabel(r'$f(x)$')
plt.axis([0, 1.1, 0, 2])
plt.xticks(())
plt.yticks(())
gca = plt.gca()
gca.spines['right'].set_color('none')
gca.spines['top'].set_color('none')
gca.xaxis.set_ticks_position('bottom')
gca.yaxis.set_ticks_position('left')
plt.show()
Explanation: 2.2 exponential distribution
$$f(x)=
\begin{cases}
\lambda e^{-\lambda x}, x >0 \
0, x \le 0
\end{cases}$$
End of explanation
import math
import numpy as np
import matplotlib.pyplot as plt
def gauss(x, mu, sigma):
    return 1.0 / (np.sqrt(2 * np.pi) * sigma) * np.power(np.e, -1.0 * np.power(x - mu, 2) / (2 * sigma**2))
mu = 1.0
sigma = 1.0
x = np.linspace(-2.0, 4.0, 1000)
y = gauss(x, mu, sigma)
plt.plot(x, y)
plt.xticks([1.0], [r'$\mu$'])
plt.yticks(())
plt.xlabel('x')
plt.plot([1.0, 1.0],[0.0, gauss(1.0, mu, sigma)], 'k--')
plt.plot([0,1.0],[gauss(1.0, mu, sigma),gauss(1.0, mu, sigma)],'k--')
plt.text(0-.2, gauss(1.0, mu, sigma), r'$\frac{1}{\sqrt{2\pi}}$',ha='right', va='center')
plt.axis([-3,5,0,0.5])
#plt.(r'$\frac{1}{\sqrt{2\pi}}e^{-\frac{(x-1)^{2}}{2\sigma^2}}$')
gca = plt.gca()
gca.spines['right'].set_color('none')
gca.spines['top'].set_color('none')
gca.xaxis.set_ticks_position('bottom')
gca.spines['bottom'].set_position(('data',0))
gca.yaxis.set_ticks_position('left')
gca.spines['left'].set_position(('data',0))
plt.show()
Explanation: 2.3 normal distribution
$$f(x)=
\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^{2}}{2\sigma^2}}
$$
where: $-\infty < x < \infty$
End of explanation |
12,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inverse Kinematics (2D)
Step1: Coordinate Transformation
Step2: Parameters of robot arm
Step3: Forward Kinematics
Step4: Inverse Kinematics
Numerical Solution with Jacobian
NOTE | Python Code:
%matplotlib notebook
from matplotlib import pylab as plt
from numpy import sin, cos, pi, matrix, random, linalg, asarray
from scipy.linalg import pinv
from __future__ import division
from math import atan2
from IPython import display
from ipywidgets import interact, fixed
Explanation: Inverse Kinematics (2D)
End of explanation
def trans(x, y, a):
'''create a 2D transformation'''
s = sin(a)
c = cos(a)
return matrix([[c, -s, x],
[s, c, y],
[0, 0, 1]])
def from_trans(m):
'''get x, y, theta from transform matrix'''
return [m[0, -1], m[1, -1], atan2(m[1, 0], m[0, 0])]
trans(0, 0, 0)
Explanation: Coordinate Transformation
End of explanation
l = [0, 3, 2, 1]
#l = [0, 3, 2, 1, 1]
#l = [0, 3, 2, 1, 1, 1]
#l = [1] * 30
N = len(l) - 1 # number of links
max_len = sum(l)
a = random.random_sample(N) # angles of joints
T0 = trans(0, 0, 0) # base
Explanation: Parameters of robot arm
End of explanation
def forward_kinematics(T0, l, a):
T = [T0]
for i in range(len(a)):
Ti = T[-1] * trans(l[i], 0, a[i])
T.append(Ti)
Te = T[-1] * trans(l[-1], 0, 0) # end effector
T.append(Te)
return T
def show_robot_arm(T):
plt.cla()
x = [Ti[0,-1] for Ti in T]
y = [Ti[1,-1] for Ti in T]
plt.plot(x, y, '-or', linewidth=5, markersize=10)
plt.plot(x[-1], y[-1], 'og', linewidth=5, markersize=10)
plt.xlim([-max_len, max_len])
plt.ylim([-max_len, max_len])
ax = plt.axes()
ax.set_aspect('equal')
t = atan2(T[-1][1, 0], T[-1][0,0])
ax.annotate('[%.2f,%.2f,%.2f]' % (x[-1], y[-1], t), xy=(x[-1], y[-1]), xytext=(x[-1], y[-1] + 0.5))
plt.show()
return ax
Explanation: Forward Kinematics
End of explanation
theta = random.random(N) * 1e-5
lambda_ = 1
max_step = 0.1
def inverse_kinematics(x_e, y_e, theta_e, theta):
target = matrix([[x_e, y_e, theta_e]]).T
for i in range(1000):
Ts = forward_kinematics(T0, l, theta)
Te = matrix([from_trans(Ts[-1])]).T
e = target - Te
e[e > max_step] = max_step
e[e < -max_step] = -max_step
T = matrix([from_trans(i) for i in Ts[1:-1]]).T
J = Te - T
dT = Te - T
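        # Build the planar Jacobian: each joint's linear velocity contribution is
        # perpendicular to the vector from that joint to the end effector, hence the
        # swapped and negated rows below; every revolute joint adds 1 to the angular row.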
J[0, :] = -dT[1, :] # x
J[1, :] = dT[0, :] # y
J[-1, :] = 1 # angular
d_theta = lambda_ * pinv(J) * e
theta += asarray(d_theta.T)[0]
if linalg.norm(d_theta) < 1e-4:
break
return theta
T = forward_kinematics(T0, l, theta)
show_robot_arm(T)
Te = matrix([from_trans(T[-1])])
@interact(x_e=(0, max_len, 0.01), y_e=(-max_len, max_len, 0.01), theta_e=(-pi, pi, 0.01), theta=fixed(theta))
def set_end_effector(x_e=Te[0,0], y_e=Te[0,1], theta_e=Te[0,2], theta=theta):
theta = inverse_kinematics(x_e, y_e, theta_e, theta)
T = forward_kinematics(T0, l, theta)
show_robot_arm(T)
return theta
Explanation: Inverse Kinematics
Numerical Solution with Jacobian
NOTE: while numerical inverse kinematics is easy to implement, two issues have to be kept in mind:
stability: the correction step (lambda_) has to be small, but then it will take longer to converge
singularity: there are singular poses (all joint angles 0, for example) where the correction will be 0, so the algorithm won't work. That's why many robots bend their legs when walking
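To see the singularity issue concretely, here is a small check (a sketch for a 2-link planar arm, independent of the code above): at the fully stretched pose the position Jacobian loses rank, so the pseudo-inverse step cannot produce motion along the arm axis.
```python
import numpy as np

l1, l2 = 3.0, 2.0
def position_jacobian(t1, t2):
    # standard 2-link planar arm Jacobian for the end-effector (x, y)
    return np.array([[-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
                     [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)]])

print(np.linalg.matrix_rank(position_jacobian(0.3, 0.7)))  # 2: regular pose
print(np.linalg.matrix_rank(position_jacobian(0.0, 0.0)))  # 1: stretched out, singular
```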
End of explanation |
12,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pivoted document length normalization
It is seen that in many cases normalizing the tfidf weights for each terms tends to favor weight of terms of the documents with shorter length. Pivoted document length normalization scheme brings a pivoting scheme on the table which can be used to counter the effect of this bias for short documents by making tfidf independent of the document length.
This is achieved by tilting the normalization curve along the pivot point defined by user with some slope. Roughly following the equation -
pivoted_norm = (1 - slope) * pivot + slope * old_norm
This scheme is proposed in the paper pivoted document length normalization
Overall this approach can in many cases help increase the accuracy of the model where the document lengths are hugely varying in the enitre corpus.
Step1: Get TFIDF scores for corpus without pivoted document length normalisation
Step2: Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha.
Step3: Visualizing the pivoted normalization
Since cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased. | Python Code:
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from gensim.corpora import Dictionary
from gensim.sklearn_api.tfidf import TfIdfTransformer
from gensim.matutils import corpus2csc
import numpy as np
import matplotlib.pyplot as py
import gensim.downloader as api
# This function returns the model accuracy and individual document probability values using
# gensim's TfIdfTransformer and sklearn's LogisticRegression
def get_tfidf_scores(kwargs):
tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus)
X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T
X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T
clf = LogisticRegression().fit(X_train_tfidf, y_train)
model_accuracy = clf.score(X_test_tfidf, y_test)
doc_scores = clf.decision_function(X_test_tfidf)
return model_accuracy, doc_scores
# Sort the document scores by their scores and return a sorted list
# of document score and corresponding document lengths.
def sort_length_by_score(doc_scores, X_test):
doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1])
doc_leng = np.empty(len(doc_scores))
ds = np.empty(len(doc_scores))
for i, _ in enumerate(doc_scores):
doc_leng[i] = len(X_test[_[0]])
ds[i] = _[1]
return ds, doc_leng
nws = api.load("20-newsgroups")
cat1, cat2 = ('sci.electronics', 'sci.space')
X_train = []
X_test = []
y_train = []
y_test = []
for i in nws:
if i["set"] == "train" and i["topic"] == cat1:
X_train.append(i["data"])
y_train.append(0)
elif i["set"] == "train" and i["topic"] == cat2:
X_train.append(i["data"])
y_train.append(1)
elif i["set"] == "test" and i["topic"] == cat1:
X_test.append(i["data"])
y_test.append(0)
elif i["set"] == "test" and i["topic"] == cat2:
X_test.append(i["data"])
y_test.append(1)
id2word = Dictionary([_.split() for _ in X_train])
train_corpus = [id2word.doc2bow(i.split()) for i in X_train]
test_corpus = [id2word.doc2bow(i.split()) for i in X_test]
print(len(X_train), len(X_test))
# We perform our analysis on top k documents which is almost top 10% most scored documents
k = len(X_test) // 10  # integer division so k can be used as a slice index
Explanation: Pivoted document length normalization
It is seen that in many cases normalizing the tfidf weights for each term tends to favor the weights of terms in shorter documents. The pivoted document length normalization scheme brings a pivoting scheme to the table, which can be used to counter this bias towards short documents by making tfidf independent of the document length.
This is achieved by tilting the normalization curve along the pivot point defined by user with some slope. Roughly following the equation -
pivoted_norm = (1 - slope) * pivot + slope * old_norm
This scheme is proposed in the paper pivoted document length normalization
Overall this approach can in many cases help increase the accuracy of the model where the document lengths are hugely varying in the enitre corpus.
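As a small illustration of that formula (plain arithmetic, not a gensim call; the pivot and slope values below are arbitrary), the sketch shows how lowering the slope pulls every document's norm towards the pivot, which is exactly what removes the advantage of very short documents:
# Minimal numeric sketch of the pivoted normalization formula above.
def pivoted_norm(old_norm, pivot=10, slope=0.5):
    return (1 - slope) * pivot + slope * old_norm

for old_norm in (2.0, 10.0, 40.0):
    print(old_norm, '->', pivoted_norm(old_norm))
# slope=1 reproduces old_norm unchanged, slope=0 replaces every norm with the pivot,
# so intermediate slopes damp the advantage short documents get from small norms.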
End of explanation
params = {}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"Normal cosine normalisation favors short documents as our top {} "
"docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
Explanation: Get TFIDF scores for corpus without pivoted document length normalisation
End of explanation
best_model_accuracy = 0
optimum_slope = 0
for slope in np.arange(0, 1.1, 0.1):
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
print("Score for slope {} is {}".format(slope, model_accuracy))
print("We get best score of {} at slope {}".format(best_model_accuracy, optimum_slope))
params = {"pivot": 10, "slope": optimum_slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"With pivoted normalisation top {} docs have mean length of {:.3f} "
"which is much closer to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
Explanation: Get TFIDF scores for corpus with pivoted document length normalisation, testing various values of the slope parameter.
End of explanation
best_model_accuracy = 0
optimum_slope = 0
w = 2
h = 2
f, axarr = py.subplots(h, w, figsize=(15, 7))
it = 0
for slope in [1, 0.2]:
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test)
y = abs(doc_scores[:k, np.newaxis])
x = doc_leng[:k, np.newaxis]
py.subplot(1, 2, it+1).bar(x, y, linewidth=10.)
py.title("slope = " + str(slope) + " Model accuracy = " + str(model_accuracy))
py.ylim([0, 4.5])
py.xlim([0, 3200])
py.xlabel("document length")
py.ylabel("confidence score")
it += 1
py.tight_layout()
py.show()
Explanation: Visualizing the pivoted normalization
Since cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased.
End of explanation |
12,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Powering a Machine Learning Data Store with Redis Labs Cloud
This notebook demonstrates how to use the Machine Learning API for automatically analyzing each column of the IRIS dataset. In this notebook, the API analyzes the IRIS dataset by building and training an XGB regressor model for each column within the IRIS dataset. Once the XGB models are built and trained, they are cached on a preconfigured Redis Labs Cloud (https://redislabs.com/redis-cloud) endpoint named "CACHE". This notebook uses the Redis Labs Cloud endpoint as a public, remote Machine Learning data store. Once the XGB models are in the data store, they can be exported, imported and used for making predictions by anyone running this notebook or the command line version.
Step1: a) What column (Target Column) do you want to analyze?
Step2: b) What are the possible values in that column? (Target Column Values)
Step3: c) What columns can the algorithms use for training and learning? (Feature Columnns)
Step4: d) Is there a column that's a non-numeric representation of the values in the Target Column? (Label Column)
Step5: e) Are there any columns you want to ignore from the training? (Ignored Features usually any non-numeric columns should be ignored)
Step6: f) Select a supported Machine Learning Algorithm
Step7: g) Name the dataset something descriptive (this will be used as a key for caching later)
Step8: h) Assign the downloaded IRIS dataset csv file
Step9: i) Build the Machine Learning API Request dictionary
Step10: i) Validate the dataset is ready for use
Step11: 2) Build and Train Algorithm Models
The ml_train_models_for_predictions method creates a list of algorithm models based off the API request and trains them.
Step12: 3) Predictions
The ml_compile_predictions_from_models method will iterate over the list of algorithm models and make predictions based off the predict_row dictionary.
Step13: 4) Analysis
The ml_compile_analysis_dataset method will create a dictionary of analysis datasets based off the Target Column they are predicting.
Step14: 5) Caching
The ml_cache_analysis_and_models method will cache each algorithm model with the analysis dataset in the Redis Labs Online Cloud cache endpoint named "Cache" listening on port 16005.
Step15: 6) Visualizations
There is a simple plotting request API for quickly creating visuals based off the the type of analysis and the underlying model.
a) Set common plot settings
Step16: b) Plot Feature Importance
Step17: c) Show Pair Plot
Step18: d) Show Confusion Matrix
Step19: e) Show Scatter Plots
Step20: f) Show Joint Plots | Python Code:
# Setup the Sci-pype environment
import sys, os
# Only Redis Labs is needed for this notebook:
os.environ["ENV_DEPLOYMENT_TYPE"] = "RedisLabs"
# Load the Sci-pype PyCore as a named-object called "core" and environment variables
from src.common.load_ipython_env import *
Explanation: Powering a Machine Learning Data Store with Redis Labs Cloud
This notebook demonstrates how to use the Machine Learning API for automatically analyzing each column of the IRIS dataset. In this notebook, the API analyzes the IRIS dataset by building and training an XGB regressor model for each column within the IRIS dataset. Once the XGB models are built and trained, they are cached on a preconfigured Redis Labs Cloud (https://redislabs.com/redis-cloud) endpoint named "CACHE". This notebook uses the Redis Labs Cloud endpoint as a public, remote Machine Learning data store. Once the XGB models are in the data store, they can be exported, imported and used for making predictions by anyone running this notebook or the command line version.
Redis Labs Cloud Dashboard
Since this demo uses a shared Redis Labs Cloud endpoint, there may be other users changing the data in the CACHE at any point. To change to your own Redis Labs Cloud endpoint update the Address List field for each of the Redis Applications in: https://github.com/jay-johnson/sci-pype/blob/master/configs/cloud-redis.json#L6 and then restart the container with control scripts found in the repo base directory rl-stop.sh then rl-start.sh.
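If you do switch to your own endpoint, a quick way to confirm it is reachable before re-running the notebook is a plain redis-py ping. This is only an illustrative sketch: the host, port and password below are placeholders for whatever you put in cloud-redis.json, not values used by this demo.
# Hypothetical connectivity check for a user-supplied Redis Labs Cloud endpoint.
import redis

cache_conn = redis.StrictRedis(host='your-endpoint.redislabs.com',  # placeholder address
                               port=16005,                          # placeholder port
                               password='your-password')            # placeholder password
try:
    print('Redis endpoint reachable:', cache_conn.ping())
except redis.exceptions.ConnectionError as err:
    print('Could not reach the endpoint:', err)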
This notebook was built from a docker-less command line version:
<repo dir>/bins/ml/demo-rl-regressor-iris.py
For those new to Jupyter, here are a few commands I wish I knew when I started:
Use the h button to open a help menu
Use Shift + Enter to run the current cell and select the one below
Use Shift + Tab when in insert mode to bring up a live variable debugger while the cursor is inside a word
Machine Learning Workflow Notebooks:
This Notebook - Build, Train and Cache the XGB Models for the IRIS Dataset on Redis Labs Cloud
Extract from the Machine Learning data store and archive the artifact on S3
Import the artifacts from S3 and deploy them to the Machine Learning data store
Make new Predictions using the cached XGB Models
1) Creating a Machine Learning API Request
The API request below is setup to analyze each column of the IRIS dataset and build + train an XGB regressor model for predictions.
End of explanation
target_column_name = "ResultLabel"
Explanation: a) What column (Target Column) do you want to analyze?
End of explanation
target_column_values = [ "Iris-setosa", "Iris-versicolor", "Iris-virginica" ]
Explanation: b) What are the possible values in that column? (Target Column Values)
End of explanation
feature_column_names = [ "SepalLength", "SepalWidth", "PetalLength", "PetalWidth", "ResultTargetValue" ]
Explanation: c) What columns can the algorithms use for training and learning? (Feature Columnns)
End of explanation
label_column_name = "ResultLabel"
Explanation: d) Is there a column that's a non-numeric representation of the values in the Target Column? (Label Column)
End of explanation
ignore_features = [ # Prune non-int/float columns as needed:
target_column_name,
label_column_name
]
Explanation: e) Are there any columns you want to ignore from the training? (Ignored Features usually any non-numeric columns should be ignored)
End of explanation
ml_algo_name = "xgb-regressor"
Explanation: f) Select a supported Machine Learning Algorithm
End of explanation
ds_name = "iris_regressor"
Explanation: g) Name the dataset something descriptive (this will be used as a key for caching later)
End of explanation
# This will use <repo>/bins/ml/downloaders/download_iris.py to download this file before running
dataset_filename = "iris.csv"
ml_csv = str(os.getenv("ENV_DATA_SRC_DIR", "/opt/work/data/src")) + "/" + dataset_filename
# Check the file exists and download if not
if os.path.exists(ml_csv) == False:
downloader="/opt/work/bins/ml/downloaders/download_iris.py"
lg("Downloading and preparing(" + str(ml_csv) + ") for analysis")
os.system(downloader)
lg("Done Downloading and preparing(" + str(ml_csv) + ")", 5)
# end of downloading if the csv is missing
if os.path.exists(ml_csv) == False:
downloader="/opt/work/bins/ml/downloaders/download_iris.py"
lg("Please use the downloader: " + str(downloader) + " to download + prepare the IRIS csv file", 0)
else:
lg("Dataset(" + str(ml_csv) + ") is ready", 5)
# end of error checking the csv file was downloaded and built
Explanation: h) Assign the downloaded IRIS dataset csv file
End of explanation
ml_type = "Predict with Filter"
ml_request = {
"MLType" : ml_type,
"MLAlgo" : {
"Name" : ml_algo_name,
"Version" : 1,
"Meta" : {
"UnitsAhead" : 0,
"DatasetName" : ds_name,
"FilterMask" : None,
"Source" : {
"CSVFile" : ml_csv,
"S3File" : "", # <Bucket Name>:<Key>
"RedisKey" : "" # <App Name>:<Key>
},
},
"Steps" : {
"Train" :{ # these are specific for building an XGB Regressor
"LearningRate" : 0.1,
"NumEstimators" : 1000,
"Objective" : "reg:linear",
"MaxDepth" : 6,
"MaxDeltaStep" : 0,
"MinChildWeight" : 1,
"Gamma" : 0,
"SubSample" : 0.8,
"ColSampleByTree" : 0.8,
"ColSampleByLevel" : 1.0,
"RegAlpha" : 0,
"RegLambda" : 1,
"BaseScore" : 0.5,
"NumThreads" : -1, # infinite = -1
"ScaledPositionWeight" : 1,
"Seed" : 27,
"Debug" : True
}
}
},
"FeatureColumnNames": feature_column_names,
"TargetColumnName" : target_column_name,
"TargetColumnValues": target_column_values,
"IgnoreFeatures" : ignore_features,
"UnitsAheadSet" : [],
"UnitsAheadType" : "",
"PredictionType" : "Predict",
"MaxFeatures" : 10,
"Version" : 1,
"TrackingType" : "UseTargetColAndUnits",
"TrackingName" : core.to_upper(ds_name),
"TrackingID" : "ML_" + ds_name + "_" + str(core.build_unique_key()),
"Debug" : False
}
Explanation: i) Build the Machine Learning API Request dictionary
End of explanation
# Load the dataset
csv_res = core.ml_load_csv_dataset(ml_request, core.get_rds(), core.get_dbs(), debug)
if csv_res["Status"] != "SUCCESS":
lg("ERROR: Failed to Load CSV(" + str(ml_request["MLAlgo"]["Meta"]["Source"]["CSVFile"]) + ")", 0)
sys.exit(1)
# Assign a local variable to build a sample record mask:
ds_df = csv_res["Record"]["SourceDF"]
# Build a filter record mask for pruning bad records out before creating the train/test sets
samples_filter_mask = (ds_df["SepalLength"] > 0.0) \
& (ds_df["PetalWidth"] > 0.0)
# For patching on the fly you can use the encoder method to replace labels with target dictionary values:
#ds_df = core.ml_encode_target_column(ds_df, "ResultLabel", "Target")
# Add the filter mask to the request for changing the train/test samples in the dataset:
ml_request["MLAlgo"]["Meta"]["SamplesFilterMask"] = samples_filter_mask
show_pair_plot = True
if show_pair_plot:
lg("Samples(" + str(len(ds_df.index)) + ") in CSV(" + str(ml_request["MLAlgo"]["Meta"]["Source"]["CSVFile"]) + ")", 6)
lg("")
print ds_df.describe()
lg("")
num_per_class = ds_df.groupby("ResultLabel").size()
print num_per_class
lg("")
pair_plot_req = {
"Title" : "Iris Dataset PairPlot",
"SourceDF" : ds_df[samples_filter_mask],
"Style" : "default",
"DiagKind" : "hist", # "kde" or "hist"
"HueColumnName" : ml_request["TargetColumnName"],
"XLabel" : "",
"YLabel" : "",
"CompareColumns": ml_request["FeatureColumnNames"],
"Size" : 3.0,
"ImgFile" : str(os.getenv("ENV_DATA_SRC_DIR", "/opt/work/data/src")) + "/" + "validate_jupyter_iris_regressor_pairplot.png",
"ShowPlot" : True
}
lg("Plotting Validation Pair Plot - Please wait a moment...", 6)
core.sb_pairplot(pair_plot_req)
if os.path.exists(pair_plot_req["ImgFile"]):
lg("Done Plotting Valiation Pair Plot - Saved(" + str(pair_plot_req["ImgFile"]) + ")", 5)
else:
lg("Failed to save Validation Pair Plot(" + str(pair_plot_req["ImgFile"]) + "). Please check the ENV_DATA_SRC_DIR is writeable by this user and exposed to the docker container correctly.", 0)
# end of showing a pairplot for validation
Explanation: i) Validate the dataset is ready for use
End of explanation
ml_images = []
train_results = core.ml_train_models_for_predictions(ml_request, core.get_rds(), core.get_dbs(), debug)
if train_results["Status"] != "SUCCESS":
lg("ERROR: Failed to Train Models for Predictions with Error(" + str(train_results["Error"]) + ") StoppedEarly(" + str(train_results["Record"]["StoppedEarly"]) + ")", 0)
else:
lg("Done Training Algos", 5)
Explanation: 2) Build and Train Algorithm Models
The ml_train_models_for_predictions method creates a list of algorithm models based off the API request and trains them.
End of explanation
algo_nodes = train_results["Record"]["AlgoNodes"]
predict_row = {
"SepalLength" : 5.4,
"SepalWidth" : 3.4,
"PetalLength" : 1.7,
"PetalWidth" : 0.2,
"ResultTargetValue" : 0
}
predict_row_df = pd.DataFrame(predict_row, index=[0])
predict_req = {
"AlgoNodes" : algo_nodes,
"PredictionMask": samples_filter_mask,
"PredictionRow" : predict_row_df
}
predict_results = core.ml_compile_predictions_from_models(predict_req, core.get_rds(), core.get_dbs(), debug)
if predict_results["Status"] != "SUCCESS":
lg("ERROR: Failed to Compile Predictions from Models with Error(" + str(predict_results["Error"]) + ")", 0)
else:
lg("Done with Predictions", 6)
Explanation: 3) Predictions
The ml_compile_predictions_from_models method will iterate over the list of algorithm models and make predictions based off the predict_row dictionary.
End of explanation
al_req = train_results["Record"]
al_req["DSName"] = ml_request["TrackingName"]
al_req["Version"] = 1
al_req["FeatureColumnNames"]= ml_request["FeatureColumnNames"]
al_req["TargetColumnName"] = ml_request["TargetColumnName"]
al_req["TargetColumnValues"]= ml_request["TargetColumnValues"]
al_req["IgnoreFeatures"] = ml_request["IgnoreFeatures"]
al_req["PredictionType"] = ml_request["PredictionType"]
al_req["ConfMatrices"] = predict_results["Record"]["ConfMatrices"]
al_req["PredictionMarkers"] = predict_results["Record"]["PredictionMarkers"]
analysis_dataset = core.ml_compile_analysis_dataset(al_req, core.get_rds(), core.get_dbs(), debug)
lg("Analyzed Models(" + str(len(analysis_dataset["Models"])) + ")", 5)
Explanation: 4) Analysis
The ml_compile_analysis_dataset method will create a dictionary of analysis datasets based off the Target Column they are predicting.
End of explanation
lg("Caching Models", 6)
cache_req = {
"Name" : "CACHE",
"Key" : "_MODELS_" + str(al_req["Tracking"]["TrackingName"]) + "_LATEST",
"TrackingID": "_MD_" + str(al_req["Tracking"]["TrackingName"]),
"Analysis" : analysis_dataset
}
cache_results = core.ml_cache_analysis_and_models(cache_req, core.get_rds(), core.get_dbs(), debug)
lg("Done Caching Models", 5)
Explanation: 5) Caching
The ml_cache_analysis_and_models method will cache each algorithm model with the analysis dataset in the Redis Labs Online Cloud cache endpoint named "Cache" listening on port 16005.
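As a quick sanity check after the caching cell above has run, you could also look the key up yourself with redis-py. The sketch below uses placeholder connection details, and it assumes the payload is stored under the key name built in cache_req above, which may not match the library's internal key layout exactly.
# Hypothetical check that the cached analysis key is present on the CACHE endpoint.
import redis

r = redis.StrictRedis(host='your-endpoint.redislabs.com',  # placeholder address
                      port=16005,                          # port of the CACHE endpoint mentioned above
                      password='your-password')            # placeholder password
print(cache_req["Key"], 'exists:', bool(r.exists(cache_req["Key"])))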
End of explanation
# Set this to False if you do not want the plots displayed inline:
analysis_dataset["ShowPlot"] = True
analysis_dataset["SourceDF"] = al_req["SourceDF"]
Explanation: 6) Visualizations
There is a simple plotting request API for quickly creating visuals based off the type of analysis and the underlying model.
a) Set common plot settings
End of explanation
lg("Plotting Feature Importance", 6)
for midx,model_node in enumerate(analysis_dataset["Models"]):
predict_col = model_node["Target"]
if predict_col == "ResultTargetValue":
plot_req = {
"ImgFile" : analysis_dataset["FeatImpImgFile"],
"Model" : model_node["Model"],
"XLabel" : str(predict_col),
"YLabel" : "Importance Amount",
"Title" : str(predict_col) + " Importance Analysis",
"ShowPlot" : analysis_dataset["ShowPlot"]
}
image_list = core.sb_model_feature_importance(plot_req, debug)
for img in image_list:
ml_images.append(img)
# end of for all models
Explanation: b) Plot Feature Importance
End of explanation
lg("Plotting PairPlots", 6)
plot_req = {
"DSName" : str(analysis_dataset["DSName"]),
"Title" : str(analysis_dataset["DSName"]) + " - Pair Plot",
"ImgFile" : str(analysis_dataset["PairPlotImgFile"]),
"SourceDF" : al_req["SourceDF"],
"HueColumnName" : target_column_name,
"CompareColumns": feature_column_names,
"Markers" : ["o", "s", "D"],
"Width" : 15.0,
"Height" : 15.0,
"ShowPlot" : analysis_dataset["ShowPlot"]
}
image_list = core.sb_pairplot(plot_req, debug)
for img in image_list:
ml_images.append(img)
Explanation: c) Show Pair Plot
End of explanation
lg("Plotting Confusion Matrices", 6)
plot_req = {
"DSName" : str(analysis_dataset["DSName"]),
"Title" : str(analysis_dataset["DSName"]) + " - Confusion Matrix",
"ImgFile" : str(analysis_dataset["CMatrixImgFile"]),
"SourceDF" : al_req["SourceDF"],
"ConfMatrices" : al_req["ConfMatrices"],
"Width" : 15.0,
"Height" : 15.0,
"XLabel" : "Dates",
"YLabel" : "Values",
"ShowPlot" : analysis_dataset["ShowPlot"]
}
image_list = core.sb_confusion_matrix(plot_req, debug)
for img in image_list:
ml_images.append(img)
Explanation: d) Show Confusion Matrix
End of explanation
lg("Plotting Scatters", 6)
plot_req = {
"DSName" : str(analysis_dataset["DSName"]),
"Title" : str(analysis_dataset["DSName"]) + " - Scatter Plot",
"ImgFile" : str(analysis_dataset["ScatterImgFile"]),
"SourceDF" : analysis_dataset["SourceDF"],
"UnitsAheadType" : analysis_dataset["UnitsAheadType"],
"FeatureColumnNames": analysis_dataset["FeatureColumnNames"],
"Hue" : label_column_name,
"Width" : 7.0,
"Height" : 7.0,
"XLabel" : "Dates",
"YLabel" : "Values",
"ShowPlot" : analysis_dataset["ShowPlot"]
}
image_list = core.sb_all_scatterplots(plot_req, debug)
for img in image_list:
ml_images.append(img)
Explanation: e) Show Scatter Plots
End of explanation
lg("Plotting JointPlots", 6)
plot_req = {
"DSName" : str(analysis_dataset["DSName"]),
"Title" : str(analysis_dataset["DSName"]) + " - Joint Plot",
"ImgFile" : str(analysis_dataset["JointPlotImgFile"]),
"SourceDF" : analysis_dataset["SourceDF"],
"UnitsAheadType" : analysis_dataset["UnitsAheadType"],
"FeatureColumnNames": analysis_dataset["FeatureColumnNames"],
"Hue" : label_column_name,
"Width" : 15.0,
"Height" : 15.0,
"XLabel" : "Dates",
"YLabel" : "Values",
"ShowPlot" : analysis_dataset["ShowPlot"]
}
image_list = core.sb_all_jointplots(plot_req, debug)
for img in image_list:
ml_images.append(img)
lg("", 6)
lg("Analysis Complete Saved Images(" + str(len(ml_images)) + ")", 5)
lg("", 6)
Explanation: f) Show Joint Plots
End of explanation |
12,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysing Smartwatch Data
This notebook gives an overview of how to use HeartPy in the analysis of raw PPG data taken from a commercial (Samsung) smartwatch device.
A signal measured this way contains a lot more noise when compared to a typical PPG sensor on the fingertip or earlobe, where perfusion is much easier to measure than on the wrist.
Analysing such a signal requires some additional steps as described in this notebook.
First let's load up the dependencies and the data file
Step1: Exploring data file
Let's explore the data file to get an idea of what we're working with.
Step2: Ok..
There seems to be intermittent sections of PPG dotted between non-signals (periods where the sensor was not recording).
For now let's slice the first signal section and see what's up. Later on I'll show you how to exclude non-signal sections automatically.
Step3: Now we need to know the sampling rate
The sampling rate is the one measure to rule them all. It is used to compute all others.
HeartPy has several ways of getting the sample rate from timer columns. Let's look at the format of the timer column to see what we're working with.
Step4: So, the format seems to be 'hours
Step5: That's pretty low.
The sample rate is quite low but to conserve power this is what many smart watches work with. For determining the BPM this is just fine, but any heart rate variability (HRV) measures are likely not going to be super accurate. Depending on your needs it may still be fine, though.
A second consideration with sampling rate is whether it's stable or not. Many devices including smart watches do many things at once. They run an OS that has other tasks besides measuring heart rate, so when measuring at 10Hz, the OS might not be ready exactly every 100ms to get a measurement. As such, the sampling rate might vary. Let's visualise this.
Step6: That's actually not bad!
The signal mean is close to 10Hz and shows a low variance. Sporadic peaks to 12Hz or dips to 9Hz indicate timer inaccuracies but they are infrequent.
For our current purposes this is just fine.
You could of course interpolate and resample the signal so that it has an exact sampling rate but the effects on computed measures are likely minimal. For now let's just continue on.
Step7: The first thing to note is that amplitude varies dramatically. Let's run it through a bandpass filter and take out all frequencies that definitely are not heart rate.
We'll take out frequencies below 0.7Hz (42 BPM) and above 3.5 Hz (210 BPM).
Step8: Still low quality but at least the heart rate is quite visible now!
Step9: That seems a reasonable result. By far the most peaks are marked correctly, and most peaksin noisy sections (low confidence) are simply rejected.
clean_rr uses by default quotient-filtering, which is a bit aggressive.
You can set 'iqr' or 'z-score' with the clean_rr_method flag.
Finally let's look at a way to extract signal section and exclude non-signal sections automatically.
Step10: Hmmm, not much luck yet, but an idea | Python Code:
import numpy as np
import heartpy as hp
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('raw_ppg.csv')
df.keys()
Explanation: Analysing Smartwatch Data
This notebook gives an overview of how to use HeartPy in the analysis of raw PPG data taken from a commercial (Samsung) smartwatch device.
A signal measured this way contains a lot more noise when compared to a typical PPG sensor on the fingertip or earlobe, where perfusion is much easier to measure than on the wrist.
Analysing such a signal requires some additional steps as described in this notebook.
First let's load up the dependencies and the data file
End of explanation
plt.figure(figsize=(12,6))
plt.plot(df['ppg'].values)
plt.show()
Explanation: Exploring data file
Let's explore the data file to get an idea of what we're working with.
End of explanation
signal = df['ppg'].values[14500:20500]
timer = df['timer'].values[14500:20500]
plt.plot(signal)
plt.show()
Explanation: Ok..
There seem to be intermittent sections of PPG dotted between non-signals (periods where the sensor was not recording).
For now let's slice the first signal section and see what's up. Later on I'll show you how to exclude non-signal sections automatically.
End of explanation
timer[0:20]
Explanation: Now we need to know the sampling rate
The sampling rate is the one measure to rule them all. It is used to compute all others.
HeartPy has several ways of getting the sample rate from timer columns. Let's look at the format of the timer column to see what we're working with.
End of explanation
help(hp.get_samplerate_datetime)
#Seems easy enough, right? Now let's determine the sample rate
sample_rate = hp.get_samplerate_datetime(timer, timeformat = '%H:%M:%S.%f')
print('sampling rate is: %.3f Hz' %sample_rate)
Explanation: So, the format seems to be 'hours:minutes:seconds.milliseconds'
HeartPy comes with a datetime function that can work with date- and time-strings called get_samplerate_datetime. Check the help to see how it works:
End of explanation
from datetime import datetime
#let's create a list 'newtimer' to house our datetime objects
newtimer = [datetime.strptime(x, '%H:%M:%S.%f') for x in timer]
#let's compute the real distances from entry to entry
elapsed = []
for i in range(len(newtimer) - 1):
elapsed.append(1 / ((newtimer[i+1] - newtimer[i]).microseconds / 1000000))
#and plot the results
plt.figure(figsize=(12,4))
plt.plot(elapsed)
plt.xlabel('Sample number')
plt.ylabel('Actual sampling rate in Hz')
plt.show()
print('mean sampling rate: %.3f' %np.mean(elapsed))
print('median sampling rate: %.3f'%np.median(elapsed))
print('standard deviation: %.3f'%np.std(elapsed))
Explanation: That's pretty low.
The sample rate is quite low but to conserve power this is what many smart watches work with. For determining the BPM this is just fine, but any heart rate variability (HRV) measures are likely not going to be super accurate. Depending on your needs it may still be fine, though.
A second consideration with sampling rate is whether it's stable or not. Many devices including smart watches do many things at once. They run an OS that has other tasks besides measuring heart rate, so when measuring at 10Hz, the OS might not be ready exactly every 100ms to get a measurement. As such, the sampling rate might vary. Let's visualise this.
End of explanation
#Let's plot 4 minutes of the segment we selected to get a view
#of what we're working with
plt.figure(figsize=(12,6))
plt.plot(signal[0:int(240 * sample_rate)])
plt.title('original signal')
plt.show()
Explanation: That's actually not bad!
The signal mean is close to 10Hz and shows a low variance. Sporadic peaks to 12Hz or dips to 9Hz indicate timer inaccuracies but they are infrequent.
For our current purposes this is just fine.
You could of course interpolate and resample the signal so that it has an exact sampling rate but the effects on computed measures are likely minimal. For now let's just continue on.
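If you did want an exactly uniform grid, a minimal sketch with plain numpy interpolation (not a HeartPy function; the 10 Hz target below is an assumption) could look like this:
# Sketch: interpolate the signal onto an exactly uniform 10 Hz time grid.
import numpy as np

target_rate = 10.0  # Hz, assumed target rate
# seconds elapsed since the first sample, taken from the parsed datetime objects above
t = np.array([(ts - newtimer[0]).total_seconds() for ts in newtimer])
uniform_t = np.arange(0.0, t[-1], 1.0 / target_rate)
uniform_signal = np.interp(uniform_t, t, signal)  # evenly spaced, linearly interpolated samples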
End of explanation
#Let's run it through a standard butterworth bandpass implementation to remove everything < 0.8 and > 3.5 Hz.
filtered = hp.filter_signal(signal, [0.7, 3.5], sample_rate=sample_rate,
order=3, filtertype='bandpass')
#let's plot first 240 seconds and work with that!
plt.figure(figsize=(12,12))
plt.subplot(211)
plt.plot(signal[0:int(240 * sample_rate)])
plt.title('original signal')
plt.subplot(212)
plt.plot(filtered[0:int(240 * sample_rate)])
plt.title('filtered signal')
plt.show()
plt.figure(figsize=(12,6))
plt.plot(filtered[0:int(sample_rate * 60)])
plt.title('60 second segment of filtered signal')
plt.show()
Explanation: The first thing to note is that amplitude varies dramatically. Let's run it through a bandpass filter and take out all frequencies that definitely are not heart rate.
We'll take out frequencies below 0.7Hz (42 BPM) and above 3.5 Hz (210 BPM).
End of explanation
#let's resample to ~100Hz as well
#10Hz is low for the adaptive threshold analysis HeartPy uses
from scipy.signal import resample
resampled = resample(filtered, len(filtered) * 10)
#don't forget to compute the new sampling rate
new_sample_rate = sample_rate * 10
#run HeartPy over a few segments, fingers crossed, and plot results of each
for s in [[0, 10000], [10000, 20000], [20000, 30000], [30000, 40000], [40000, 50000]]:
wd, m = hp.process(resampled[s[0]:s[1]], sample_rate = new_sample_rate,
high_precision=True, clean_rr=True)
hp.plotter(wd, m, title = 'zoomed in section', figsize=(12,6))
hp.plot_poincare(wd, m)
plt.show()
for measure in m.keys():
print('%s: %f' %(measure, m[measure]))
Explanation: Still low quality but at least the heart rate is quite visible now!
End of explanation
raw = df['ppg'].values
plt.plot(raw)
plt.show()
import sys
from scipy.signal import resample
windowsize = 100
std = []
for i in range(len(raw) // windowsize):
start = i * windowsize
end = (i + 1) * windowsize
sliced = raw[start:end]
try:
std.append(np.std(sliced))
except:
print(i)
plt.plot(std)
plt.show()
plt.plot(raw)
plt.show()
plt.plot(raw[0:(len(raw) // windowsize) * windowsize] - resample(std, len(std)*windowsize))
plt.show()
Explanation: That seems a reasonable result. By far most peaks are marked correctly, and most peaks in noisy sections (low confidence) are simply rejected.
clean_rr uses by default quotient-filtering, which is a bit aggressive.
You can set 'iqr' or 'z-score' with the clean_rr_method flag.
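For example, re-running one of the segments above with IQR-based cleaning is just a matter of passing that flag; the sketch below reuses the resampled signal and sample rate from earlier and is meant as an illustration rather than a tuned setting.
# Sketch: same call as above, but with IQR-based RR cleaning instead of the default quotient filter.
wd_iqr, m_iqr = hp.process(resampled[0:10000], sample_rate=new_sample_rate,
                           high_precision=True, clean_rr=True, clean_rr_method='iqr')
print('BPM with IQR cleaning: %.2f' % m_iqr['bpm'])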
Finally let's look at a way to extract signal section and exclude non-signal sections automatically.
End of explanation
(len(raw) // windowsize) * windowsize
mx = np.max(raw)
mn = np.min(raw)
global_range = mx - mn
windowsize = 100
filtered = []
for i in range(len(raw) // windowsize):
start = i * windowsize
end = (i + 1) * windowsize
sliced = raw[start:end]
rng = np.max(sliced) - np.min(sliced)
if ((rng >= (0.5 * global_range))
or
(np.max(sliced) >= 0.9 * mx)
or
(np.min(sliced) <= mn + (0.1 * mn))):
for x in sliced:
filtered.append(0)
else:
for x in sliced:
filtered.append(x)
plt.figure(figsize=(12,6))
plt.plot(raw)
plt.show()
plt.figure(figsize=(12,6))
plt.plot(filtered)
plt.show()
Explanation: Hmmm, not much luck yet, but an idea:
End of explanation |
12,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Don't forget to delete the hdmi_out and hdmi_in when finished
Image Overlay 256 Color Filter Example
In this notebook, we will overlay an image on the output videofeed. By default, an image showing the BYU logo will be displayed at the top of the screen.
In order to store larger images, a 256 color palette is used so that each pixel color can be represented as 8 bits instead of the 24 bits used to represent the RGB values in the HDMI standard. The filter maps an 8-bit index to its corresponding 24-bit RGB color. The 256 colors used by this filter use the same mapping as xterm colors, which can be found at the following link: <a href = "http://www.calmar.ws/vim/256-xterm-24bit-rgb-color-chart.html">256 Color Chart</a>
Step1: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port). The "2" represents a resolution of 1280x720, which is the output streaming resolution of the camera.
Step2: 3. Program board with 256 Color Image Overlay Filter
Run the following script to download the 256 color image overlay filter to the PYNQ. This will allow us to add image overlays on top of the video feed.
Step3: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
6 Registers are used to interact with this particular filter.
R0: X origin, R1: Y origin, R2: image width, R3: image height, R4: transparent color index, R5: image data (packed pixel address and color index).
Step4: 5. Exploration
Image Position.
Try moving the sliders up and down. Moving the X Origin slider up should move the image to the right. Moving the Y Origin slider up should move the image down.
Transparency.
Try entering the value '15' in the 'Transparent Color Index' box. Notice that the white pixels in the image have become transparent. Now try entering the value '17' in the box. Notice that the blue pixels in the image have become transparent. Now enter the value '0' so that the black pixels will be transparent for the next steps.
Upload New Image.
Try selecting a different image. The new image file should be written to replace the previous image.
6. Clean up
When you are done experimenting with the image overlay filter, run the following code before exiting. | Python Code:
from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Image Overlay 256 Color Filter Example
In this notebook, we will overlay an image on the output videofeed. By default, an image showing the BYU logo will be displayed at the top of the screen.
In order to store larger images, a 256 color pallette is used so that each pixel color can be represented as 8 bits instead of the 24 bits used to represent the RGB values in the HDMI standard. The filter maps an 8-bit index to its corresponding 24-bit RGB color. The 256 colors used by this filter use the same mapping as xterm colors, which can be found at the following link: <a href = "http://www.calmar.ws/vim/256-xterm-24bit-rgb-color-chart.html">256 Color Chart</a>
This filter also allows the user to specify a "transparent" color. By specifying the index to represent transparency, it will replace all pixels that use the specified color index with the background video pixels.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(2)
hdmi_out.start()
hdmi_in.start()
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port). The "2" represents a resolution of 1280x720, which is the output streaming resolution of the camera.
End of explanation
Bitstream_Part("img_overlay_256color_p.bit").download()
Explanation: 3. Program board with 256 Color Image Overlay Filter
Run the following script to download the 256 color image overlay filter to the PYNQ. This will allow us to add image overlays on top of the video feed.
End of explanation
import ipywidgets as widgets
from IPython.display import HTML, display
display(HTML('''<style>
.widget-label { min-width: 25ex !important; }
</style>'''))
R0 =Register(0)
R1 =Register(1)
R2 =Register(2)
R3 =Register(3)
R4 =Register(4)
R5 =Register(5)
R0.write(1)
R1.write(1)
R2.write(200)
R3.write(200)
R4.write(0)
R0_s = widgets.IntSlider(
value=1,
min=1,
max=1279,
step=1,
description='X Origin:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='red',
width = '150px'
)
R1_s = widgets.IntSlider(
value=1,
min=1,
max=719,
step=1,
description='Y Origin:',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='green',
width = '150px'
)
R2_b = widgets.BoundedIntText(
value=200,
min=1,
max=1280,
step=1,
description='Image Width:',
disabled=True
)
R3_b = widgets.BoundedIntText(
value=200,
min=1,
max=720,
step=1,
description='Image Height:',
disabled=True
)
R4_b = widgets.BoundedIntText(
value=0,
min=0,
max=255,
step=1,
description='Transparent Color Index:',
disabled=False,
width = '225px'
)
R5_s = widgets.Select(
options=['BYU Medallion', 'BYU Cougar', 'BYU Logo'],
value='BYU Medallion',
description='Display Image:',
disabled=False,
width = '400px'
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
def update_r1(*args):
R1.write(R1_s.value)
R1_s.observe(update_r1, 'value')
def update_r2(*args):
R2.write(R2_b.value)
R2_b.observe(update_r2, 'value')
def update_r3(*args):
R3.write(R3_b.value)
R3_b.observe(update_r3, 'value')
def update_r4(*args):
R4.write(R4_b.value)
R4_b.observe(update_r4, 'value')
def update_r5(*args):
#print("Hello")
filename = "nofile.bin"
if R5_s.value == 'BYU Medallion':
filename = "./data/medallion_256.bin"
elif R5_s.value == 'BYU Cougar':
filename = "./data/cougar_256.bin"
elif R5_s.value == 'BYU Logo':
filename = "./data/BYU_Stretch_Y.bin"
with open(filename, "rb") as f:
width = f.read(1)
height = f.read(1)
R2.write(width[0])
R3.write(height[0])
num_pixels = width[0]*height[0]-1
for i in range(0, num_pixels):
byte = f.read(1)
x = (i<<8) | byte[0];
R5.write(x);
R5_s.observe(update_r5, 'value')
widgets.HBox([R0_s,R1_s,R4_b,R5_s])
Explanation: 4. Create a user interface
We will communicate with the filter using a nice user interface. Run the following code to activate that interface.
6 Registers are used to interact with this particular filter.
R0 : Origin X-Coordinate. The origin of the image is the top left corner. Writing to R0 allows you to specify where the image appears horizontally on the feed.
R1 : Origin Y-Coordinate. Writing to R1 allows you to specify where the image appears vertically on the feed.
R2 : Width. This specifies how wide (in pixels) the image is.
R3 : Height. This specifies how tall (in pixels) the image is.
R4 : This specifies the index of the color that should be made transparent. Any pixels with that color index will be made transparent.
R5 : This is used to write a new image to the filter. The 16-bit pixel address and 8-bit pixel color index are concatenated and written to this register, as illustrated in the sketch after this list. The color index will then be written to the BRAMs at the pixel address.
The current minimum and maximum values for the X- and Y-Coordinates as well as image width and height are based on a 1280x720 screen resolution.
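A minimal sketch of that R5 packing (pure Python arithmetic with illustrative names; the update_r5 handler above performs the same shift-and-OR) is:
# Sketch: pack a 16-bit pixel address and an 8-bit colour index into a single R5 word.
def pack_r5(pixel_address, color_index):
    assert 0 <= pixel_address < (1 << 16) and 0 <= color_index < 256
    return (pixel_address << 8) | color_index

print(hex(pack_r5(0, 15)))    # pixel 0 gets colour index 15 (white)
print(hex(pack_r5(199, 17)))  # pixel address 199, colour index 17 (a blue entry)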
End of explanation
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
Explanation: 5. Exploration
Image Position.
Try moving the sliders up and down. Moving the X Origin slider up should move the image to the right. Moving the Y Origin slider up should move the image down.
Transparency.
Try entering the value '15' in the 'Transparent Color Index' box. Notice that the white pixels in the image have become transparent. Now try entering the value '17' in the box. Notice that the blue pixels in the image have become transparent. Now enter the value '0' so that the black pixels will be transparent for the next steps.
Upload New Image.
Try selecting a different image. The new image file should be written to replace the previous image.
6. Clean up
When you are done experimenting with the image overlay filter, run the following code before exiting.
End of explanation |
12,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This IPython notebook illustrates how to select the best learning based matcher. First, we need to import py_entitymatching package and other libraries as follows
Step1: Then, split the labeled data into development set and evaluation set. Use the development set to select the best learning-based matcher
Step2: Selecting the Best learning-based matcher
This, typically involves the following steps
Step3: Creating features
Next, we need to create a set of features for the development set. Magellan provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
Step4: We observe that there were 20 features generated. As a first step, lets say that we decide to use only 'year' related features.
Step5: Extracting feature vectors
In this step, we extract feature vectors using the development set and the created features.
Step6: We observe that the extracted feature vectors contain missing values. We have to impute the missing values for the learning-based matchers to fit the model correctly. For the purposes of this guide, we impute the missing value in a column with the mean of the values in that column.
Step7: Selecting the best matcher using cross-validation
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five fold cross validation and use 'precision' metric to select the best matcher.
Step9: Debug X (Random Forest) | Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
# Set the seed value
seed = 0
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
!ls $datasets_dir
path_A = datasets_dir + os.sep + 'dblp_demo.csv'
path_B = datasets_dir + os.sep + 'acm_demo.csv'
path_labeled_data = datasets_dir + os.sep + 'labeled_data_demo.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
# Load the pre-labeled data
S = em.read_csv_metadata(path_labeled_data,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
Explanation: Introduction
This IPython notebook illustrates how to select the best learning based matcher. First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
# Split S into I an J
IJ = em.split_train_test(S, train_proportion=0.5, random_state=0)
I = IJ['train']
J = IJ['test']
Explanation: Then, split the labeled data into development set and evaluation set. Use the development set to select the best learning-based matcher
End of explanation
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
Explanation: Selecting the Best learning-based matcher
This typically involves the following steps:
1. Creating a set of learning-based matchers
2. Creating features
3. Extracting feature vectors
4. Selecting the best learning-based matcher using k-fold cross validation
5. Debugging the matcher (and possibly repeat the above steps)
Creating a set of learning-based matchers
First, we need to create a set of learning-based matchers. The following matchers are supported in Magellan: (1) decision tree, (2) random forest, (3) naive bayes, (4) svm, (5) logistic regression, and (6) linear regression.
End of explanation
# Generate a set of features
F = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
Explanation: Creating features
Next, we need to create a set of features for the development set. Magellan provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
End of explanation
F.feature_name
Explanation: We observe that there were 20 features generated. As a first step, let's say that we decide to use only 'year' related features.
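One way to make that restriction concrete is ordinary pandas filtering on the feature table; this is a sketch rather than a dedicated py_entitymatching call, and it assumes the generated feature names contain the substring 'year'.
# Sketch: keep only the automatically generated features whose name mentions 'year'.
year_features = F[F.feature_name.str.contains('year')]
year_features.feature_name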
End of explanation
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Display first few rows
H.head()
# Check if the feature vectors contain missing values
# A return value of True means that there are missing values
H.isnull().values.any()
Explanation: Extracting feature vectors
In this step, we extract feature vectors using the development set and the created features.
End of explanation
# Impute feature vectors with the mean of the column values.
H = em.impute_table(H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
strategy='mean')
Explanation: We observe that the extracted feature vectors contain missing values. We have to impute the missing values for the learning-based matchers to fit the model correctly. For the purposes of this guide, we impute the missing value in a column with the mean of the values in that column.
End of explanation
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric_to_select_matcher='f1', random_state=0)
result['cv_stats']
result['drill_down_cv_stats']['precision']
result['drill_down_cv_stats']['recall']
result['drill_down_cv_stats']['f1']
Explanation: Selecting the best matcher using cross-validation
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five fold cross validation and the 'F1' metric to select the best matcher.
End of explanation
# Split H into P and Q
PQ = em.split_train_test(H, train_proportion=0.5, random_state=0)
P = PQ['train']
Q = PQ['test']
# Debug RF matcher using GUI
em.vis_debug_rf(rf, P, Q,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
# Add a feature to do Jaccard on title + authors and add it to F
# Create a feature declaratively
sim = em.get_sim_funs_for_matching()
tok = em.get_tokenizers_for_matching()
feature_string = """jaccard(wspace((ltuple['title'] + ' ' + ltuple['authors']).lower()),
                            wspace((rtuple['title'] + ' ' + rtuple['authors']).lower()))"""
feature = em.get_feature_fn(feature_string, sim, tok)
# Add feature to F
em.add_feature(F, 'jac_ws_title_authors', feature)
# Convert I into feature vectors using updated F
H = em.extract_feature_vecs(I,
feature_table=F,
attrs_after='label',
show_progress=False)
# Check whether the updated F improves X (Random Forest)
result = em.select_matcher([rf], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric_to_select_matcher='f1', random_state=0)
result['drill_down_cv_stats']['f1']
# Select the best matcher again using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric_to_select_matcher='f1', random_state=0)
result['cv_stats']
result['drill_down_cv_stats']['f1']
Explanation: Debug X (Random Forest)
End of explanation |
12,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pydiffexp
The pydiffexp package is meant to provide an interface between R and Python to do differential expression analysis.
Imports
Step1: Load Data
Each DEAnalysis object (DEA) operates on a specific dataset. DEA uses a <a href='http://pandas.pydata.org/pandas-docs/stable/advanced.html'> hierarchical dataframe</a> (i.e. a dataframe with a multiindex) for analysis. One can either be supplied, or can be created from a dataframe with appropriate column or row labels. DEA expects the multiindex to be along the columns and will transform the data if necessary. DEA can also be initialized without data, but many methods will not work as expected.
Step2: Let's look at the data that has been added to the object. Notice that the columns are a Multiindex in which the levels correspond to lists of the possible values and the names of each level come from the list supplied to index_names
Raw Data
Step3: Formatted data as Hierarchial Dataframe
Step4: When the data is added, DEA automatically saves a summary of the experiment, which can also be summarized with the print function.
Step5: Model Fitting
Now we're ready to fit a model! All we need to do is supply contrasts that we want to compare. These are formatted in the R style and can either be a string, list, or dictionary. Here we'll just do one contrast, so we supply a string. When the fit is run, DEA gains several new attributes that store the data, design, contrast, and fit objects created by R.
All of the model information is kept as attributes so that the entire object can be saved and the analysis can be recapitulated.
Step6: After the fit, we want to see our significant results. DEA calls <a href="http | Python Code:
import pandas as pd
from pydiffexp import DEAnalysis
Explanation: Pydiffexp
The pydiffexp package is meant to provide an interface between R and Python to do differential expression analysis.
Imports
End of explanation
test_path = "/Users/jfinkle/Documents/Northwestern/MoDyLS/Python/sprouty/data/raw_data/all_data_formatted.csv"
raw_data = pd.read_csv(test_path, index_col=0)
# Initialize analysis object with data. Data is retained
'''
The hierarchy provides the names for each label in the multiindex. 'condition' and 'time' are supplied as the reference
labels, which are used to make contrasts.
'''
hierarchy = ['condition', 'well', 'time', 'replicate']
dea = DEAnalysis(raw_data, index_names=hierarchy, reference_labels=['condition', 'time'] )
Explanation: Load Data
Each DEAnalysis object (DEA) operates on a specific dataset. DEA uses a <a href='http://pandas.pydata.org/pandas-docs/stable/advanced.html'> hierarchical dataframe</a> (i.e. a dataframe with a multiindex) for analysis. One can either be supplied, or can be created from a dataframe with appropriate column or row labels. DEA expects the multiindex to be along the columns and will transform the data if necessary. DEA can also be initialized without data, but many methods will not work as expected.
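For reference, a hierarchical column layout like the one DEA builds can be constructed directly in pandas; the tiny sketch below is generic pandas with made-up labels, not pydiffexp itself.
# Sketch: a toy DataFrame with hierarchical (condition, time, replicate) columns.
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('WT', 0, 1), ('WT', 0, 2), ('KO', 0, 1), ('KO', 0, 2)],
    names=['condition', 'time', 'replicate'])
toy = pd.DataFrame(np.random.rand(3, 4), columns=cols)
toy.loc[:, ('KO', 0)]  # select every KO replicate at time 0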
End of explanation
raw_data.head()
Explanation: Let's look at the data that has been added to the object. Notice that the columns are a Multiindex in which the levels correspond to lists of the possible values and the names of each level come from the list supplied to index_names
Raw Data
End of explanation
dea.data.head()
dea.data.columns
Explanation: Formatted data as Hierarchical Dataframe
End of explanation
dea.experiment_summary
dea.print_experiment_summary()
Explanation: When the data is added, DEA automatically saves a summary of the experiment, which can also be summarized with the print function.
End of explanation
# Types of contrasts
c_dict = {'Diff0': "(KO_15-KO_0)-(WT_15-WT_0)", 'Diff15': "(KO_60-KO_15)-(WT_60-WT_15)",
'Diff60': "(KO_120-KO_60)-(WT_120-WT_60)", 'Diff120': "(KO_240-KO_120)-(WT_240-WT_120)"}
c_list = ["KO_15-KO_0", "KO_60-KO_15", "KO_120-KO_60", "KO_240-KO_120"]
c_string = "KO_0-WT_0"
dea.fit(c_string)
print(dea.design, '', dea.contrast_robj, '', dea.de_fit)
Explanation: Model Fitting
Now we're ready to fit a model! All we need to do is supply contrasts that we want to compare. These are formatted in the R style and can either be a string, list, or dictionary. Here we'll just do one contrast, so we supply a string. When the fit is run, DEA gains several new attributes that store the data, design, contrast, and fit objects created by R.
All of the model information is kept as attributes so that the entire object can be saved and the analysis can be recapitulated.
End of explanation
dea.get_results(p_value=0.01, n=10)
Explanation: After the fit, we want to see our significant results. DEA calls <a href="http://web.mit.edu/~r/current/arch/i386_linux26/lib/R/library/limma/html/toptable.html"> topTable</a> so all keywoard arguments from the R function can be passed, though the defaults explicitly in get_results() are the most commonly used ones. If more than one contrast is supplied, pydiffexp will default to using the F statistic when selecting significant genes.
End of explanation |
12,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyIAST example (N$_2$, CO$_2$, H$_2$O)
data from Mason et al. here
construct models for pure-component adsorption isotherms
Step1: binary (CO$_2$/N$_2$ adsorption)
CO$_2$ partial pressure: 166 mbar; N$_2$ partial pressure: 679 mbar
Step2: ternary (CO$_2$/N$_2$/H$_2$O adsorption)
CO$_2$ partial pressure: 166 mbar; N$_2$ partial pressure: 679 mbar; H$_2$O partial pressure: 0.02 mbar
Step3: compare to experiment
see Fig. 6 in Mason et al. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pyiast

df_N2 = pd.read_csv("N2.csv", skiprows=1)
N2_isotherm = pyiast.ModelIsotherm(df_N2, loading_key="Loading(mmol/g)",
pressure_key="P(bar)", model='Henry')
pyiast.plot_isotherm(N2_isotherm)
N2_isotherm.print_params()
df_CO2 = pd.read_csv("CO2.csv", skiprows=1)
CO2_isotherm = pyiast.ModelIsotherm(df_CO2, loading_key="Loading(mmol/g)",
pressure_key="P(bar)", model="Langmuir")
CO2_isotherm.print_params()
pyiast.plot_isotherm(CO2_isotherm)
df_H2O = pd.read_csv("H2O.csv", skiprows=1)
H2O_isotherm = pyiast.InterpolatorIsotherm(df_H2O, loading_key="Loading(mmol/g)",
pressure_key="P(bar)",
fill_value=df_H2O["Loading(mmol/g)"].max())
pyiast.plot_isotherm(H2O_isotherm)
Explanation: pyIAST example (N$_2$, CO$_2$, H$_2$O)
data from Mason et al. here
construct models for pure-component adsorption isotherms
End of explanation
p = np.array([.166, .679]) # mbar
print("total P = ", np.sum(p))
q = pyiast.iast(p, [CO2_isotherm, N2_isotherm])
print(q)
Explanation: binary (CO$_2$/N$_2$ adsorption)
CO$_2$ partial pressure: 166 mbar
N$_2$ partial pressure: 679 mbar
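From these loadings one can also read off adsorbed-phase mole fractions and a CO$_2$/N$_2$ selectivity with plain arithmetic; this is a post-processing sketch, not a pyIAST call, and it assumes q is ordered like the isotherm list passed to pyiast.iast (CO$_2$ first).
# Sketch: adsorbed-phase mole fractions and CO2/N2 selectivity from the IAST loadings.
x_ads = q / q.sum()                           # adsorbed-phase mole fractions [CO2, N2]
selectivity = (q[0] / q[1]) / (p[0] / p[1])   # (x_CO2 / x_N2) / (y_CO2 / y_N2)
print(x_ads, selectivity)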
End of explanation
p3 = np.array([.166, .679, .02]) # mbar
print("total P = ", np.sum(p3))
q3 = pyiast.iast(p3, [CO2_isotherm, N2_isotherm, H2O_isotherm], verboseflag=True)
q3
Explanation: ternary (CO$_2$/N$_2$/H$_2$O adsorption)
CO$_2$ partial pressure: 166 mbar
N$_2$ partial pressure: 679 mbar
H$_2$O partial pressure: 0.02 mbar
End of explanation
p_mix = np.array([.16562, .67912])
q_mix = np.array([.34, .27])
yerr = np.array([.1, .14])
fig = plt.figure(facecolor='w')
plt.plot(df_CO2['P(bar)'], df_CO2['Loading(mmol/g)'], marker='o', color='g', label='pure CO$_2$', markersize=10)
plt.plot(df_N2['P(bar)'], df_N2['Loading(mmol/g)'], marker='*', color='b', label='pure N$_2$', markersize=12)
plt.xlabel("Pressure (bar)")
# plt.scatter(p3[:-1], q3[:-1], color='orange', marker='s',s=50,zorder=110, label='IAST')
plt.scatter(p, q, color='orange', marker='s',s=45,zorder=110, label='IAST')
plt.scatter(p_mix, q_mix, marker='x', zorder=200, color='k', s=50, label='Expt')
plt.errorbar(p_mix, q_mix,color='k', yerr=yerr, linestyle='none', markersize=50)
plt.xlim([0, 1.0])
plt.ylim([0, 2.0])
plt.ylabel("Gas uptake (mmol/g)")
plt.legend(loc='upper left')
plt.savefig('JaradMason.pdf', format='pdf', facecolor=fig.get_facecolor())
fig = plt.figure(facecolor='w')
p_plot = np.linspace(0, 1)
plt.scatter(df_CO2['P(bar)'], df_CO2['Loading(mmol/g)'],
marker='o', color='g', label='pure CO$_2$', s=60)
plt.plot(p_plot, CO2_isotherm.loading(p_plot), color='g')
plt.scatter(df_N2['P(bar)'], df_N2['Loading(mmol/g)'],
marker='s', color='b', label='pure N$_2$', s=60)
plt.plot(p_plot, N2_isotherm.loading(p_plot), color='b')
plt.xlabel("Pressure (bar)")
plt.axvline(x=0.679, linewidth=2, color='b', linestyle='--')
plt.axvline(x=0.166, linewidth=2, color='g', linestyle='--')
# plt.scatter(p3[:-1], q3[:-1], color='orange', marker='s',s=45,zorder=110, label='IAST')
# plt.scatter(p, q, color='orange', marker='s',s=45,zorder=110, label='IAST')
# plt.scatter(p_mix, q_mix, marker='x', zorder=200, color='k', s=56, label='Expt')
# plt.errorbar(p_mix, q_mix,color='k', yerr=yerr, linestyle='none')
plt.xlim([-.05, 1.0])
plt.ylim([-.1, 2.0])
plt.ylabel("Gas uptake (mmol/g)")
plt.legend(loc='upper left')
plt.tight_layout()
plt.savefig('JaradMason_N2_and_CO2.pdf', format='pdf', facecolor=fig.get_facecolor())
fig = plt.figure(facecolor='w')
plt.scatter(df_H2O['P(bar)'], df_H2O['Loading(mmol/g)'], marker='o',
color='r', label='pure H$_2$O', s=60)
plt.plot(np.linspace(0, .07), H2O_isotherm.loading(np.linspace(0, 0.07)), color='r')
plt.axvline(x=.02, linewidth=2, color='r', linestyle='--')
plt.xlabel("Pressure (bar)")
plt.xlim([-.05*.07, 0.07])
plt.ylim([-.05*45, 45.])
plt.ylabel("Water uptake (mmol/g)")
plt.legend(loc='upper center')
plt.tight_layout()
plt.savefig('JaradMason_H2O.pdf', format='pdf', facecolor=fig.get_facecolor())
ind = np.arange(3) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots(facecolor='w')
rects1 = ax.bar(ind, q3, width, color=['g', 'b', 'r'], hatch='*')
# rects1 = ax.bar(np.arange(2), q, width, color=['g', 'b'], hatch='//')
rects2 = ax.bar(np.arange(2)+width, q_mix, width, color=['g', 'b', 'r'], yerr=yerr, ecolor='k')
# add some text for labels, title and axes ticks
ax.set_ylabel('Gas uptake (mmol/g)')
# ax.set_title('Sc')
ax.set_xticks(ind+width)
ax.set_xticklabels(('CO$_2$', 'N$_2$', r'H$_2$O') )
#x.legend( (rects1[0], rects2[0]), ('Exp\'t', 'IAST') , loc='upper center')
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
#ax.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%.2f'% height,
ax.text(rect.get_x()+rect.get_width()/2., .05, '%.2f'% height,
ha='center', va='bottom', fontsize=15, weight='bold',
backgroundcolor='w')
bbox_props = dict(boxstyle="round", fc="w", ec="0.5", alpha=0.9)
ax.text(2.35+rects1[0].get_width()/2., .05, 'N/A',
ha='center', va='bottom',
fontsize=15, weight='bold',
backgroundcolor='w')
plt.xlim([-.05,3-width+.05])
autolabel(rects1)
autolabel(rects2)
plt.tight_layout()
plt.savefig('JaradMason_IAST.pdf', format='pdf', facecolor=fig.get_facecolor())
plt.savefig('JaradMason_IAST.png', format='png', facecolor=fig.get_facecolor(), dpi=250)
plt.show()
Explanation: compare to experiment
see Fig. 6 in Mason et al.
End of explanation |
12,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Profiling BatchFlow code
A profile is a set of statistics that describes how often and for how long various parts of the program executed.
This notebook shows how to profile various parts of BatchFlow
Step1: To collect information about model training times (both on CPU and GPU), one must set profile option in the model configuration to True
Step2: To gather statistics about how long each action takes, we must set profile to True inside run call
Step3: Pipeline profiling
First of all, there is an elapsed_time attribute inside every instance of Pipeline
Step4: Note that elapsed_time attribute is created whether or not we set profile to True.
After running with profile=True, pipeline has attribute profile_info
Step5: Note that there is detailed information about the exact methods called inside each of the actions. That is a lot of data, which can give us a precise understanding of the parts of the code that are our bottlenecks.
Columns of the profile_info
Step6: Model profiling
Step7: There is an info property that, unsurprisingly, shows a lot of interesting details regarding model itself or the training process
Step8: As with pipeline, there is a profile_info attribute, as well as a show_profile_info method; what they contain depends on the type of device used (CPU or GPU) | Python Code:
import sys
sys.path.append("../../..")
from batchflow import B, V, W
from batchflow.opensets import MNIST
from batchflow.models.torch import ResNet18
dataset = MNIST()
Explanation: Profiling BatchFlow code
A profile is a set of statistics that describes how often and for how long various parts of the program executed.
This notebook shows how to profile various parts of BatchFlow: namely, pipelines and models.
End of explanation
model_config = {
'inputs/labels/classes': 10,
'loss': 'ce',
'profile': True,
}
pipeline = (dataset.train.p
.init_variable('loss_history', [])
.to_array(channels='first', dtype='float32')
.multiply(multiplier=1/255., preserve_type=False)
.init_model('dynamic', ResNet18,
'resnet', config=model_config)
.train_model('resnet',
B.images, B.labels,
fetches='loss',
save_to=V('loss_history', mode='a'))
)
Explanation: To collect information about model training times (both on CPU and GPU), one must set profile option in the model configuration to True:
End of explanation
BATCH_SIZE = 64
N_ITERS = 50
pipeline.run(BATCH_SIZE, n_iters=N_ITERS, bar=True, profile=True,
bar_desc=W(V('loss_history')[-1].format('Loss is {:7.7}')))
Explanation: To gather statistics about how long each action takes, we must set profile to True inside run call:
End of explanation
pipeline.elapsed_time
Explanation: Pipeline profiling
First of all, there is an elapsed_time attribute inside every instance of Pipeline: it stores total time of running the pipeline (even if it was used multiple times):
End of explanation
pipeline.profile_info.head()
Explanation: Note that elapsed_time attribute is created whether or not we set profile to True.
After running with profile=True, pipeline has attribute profile_info: this DataFrame holds collected information:
End of explanation
# timings for each action
pipeline.show_profile_info(per_iter=False, detailed=False)
# for each action show 2 of the slowest methods, based on maximum `ncalls`
pipeline.show_profile_info(per_iter=False, detailed=True, sortby=('ncalls', 'max'), limit=2)
# timings for each action for each iter
pipeline.show_profile_info(per_iter=True, detailed=False,)
# for each iter each action show 3 of the slowest methods, based on maximum `ncalls`
pipeline.show_profile_info(per_iter=True, detailed=True, sortby='tottime', limit=3)
Explanation: Note that there is detailed information about the exact methods called inside each of the actions. That is a lot of data, which can give us a precise understanding of the parts of the code that are our bottlenecks.
Columns of the profile_info:
- action, iter, batch_id and start_time are pretty self-explanatory
- id identifies the exact method in detail: it is a concatenation of method_name, file_name, line_number and callee
- total_time is the time taken by an action
- pipeline_time is total_time plus the time spent processing the profiling table at each iteration
- tottime is the time taken by a method inside an action
- cumtime is the time taken by a method and all of the methods called inside it
More often than not, though, we don't need such granularity. Pipeline method show_profile_info makes some handy aggregations:
Note: by default, results are sorted on total_time or tottime, depending on level of details.
End of explanation
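# Not part of the original notebook: a minimal sketch of aggregating profile_info
# by hand with pandas, using the action and total_time columns described above.
# show_profile_info is the convenient built-in way; reset_index() is used here in
# case those fields live in the index rather than in regular columns.
manual_summary = (pipeline.profile_info
                  .reset_index()
                  .groupby('action')['total_time']
                  .agg(['sum', 'mean', 'max'])
                  .sort_values('sum', ascending=False))
manual_summary.head()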
model = pipeline.m('resnet')
Explanation: Model profiling
End of explanation
model.info
Explanation: There is an info property that, unsurprisingly, shows a lot of interesting details regarding model itself or the training process:
End of explanation
# one row for every operation inside model; limit at 5 rows
model.show_profile_info(per_iter=False, limit=5)
# for each iteration show 3 of the slowest operations
model.show_profile_info(per_iter=True, limit=3)
Explanation: As with pipeline, there is a profile_info attribute, as well as a show_profile_info method; what they contain depends on the type of device used (CPU or GPU).
End of explanation |
12,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian MLP for MNIST using preconditioned SGLD
We use the Jax Bayes library
by James Vuckovic
to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning).
Code is based on
Step1: Data
Step3: Model
Step4: SGD
Step5: SGLD
Step6: Uncertainty analysis
We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected.
Step7: SGD
For the plugin estimate, the model is very confident on nearly all of the points.
Step9: SGLD
Step10: Distribution shift
We now examine the behavior of the models on the Fashion MNIST dataset.
We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation.
Step11: SGD
We see that the plugin estimate is confident (but wrong!) on many of the predictions, which is undesirable.
If we consider a confidence threshold of 0.6,
the plugin approach predicts on about 80% of the examples,
even though the accuracy is only about 6% on these.
Step12: SGLD
If we consider a confidence threshold of 0.6,
the Bayesian approach predicts on less than 20% of the examples,
on which the accuracy is ~4%. | Python Code:
%%capture
!pip install git+https://github.com/deepmind/dm-haiku
!pip install git+https://github.com/jamesvuc/jax-bayes
import haiku as hk
import jax.numpy as jnp
from jax.experimental import optimizers
import jax
import jax_bayes
import sys, os, math, time
import numpy as onp
import numpy as np
from functools import partial
from matplotlib import pyplot as plt
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow_datasets as tfds
Explanation: Bayesian MLP for MNIST using preconditioned SGLD
We use the Jax Bayes library
by James Vuckovic
to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning).
Code is based on:
https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist.ipynb
https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist_mcmc.ipynb
Setup
End of explanation
def load_dataset(split, is_training, batch_size):
ds = tfds.load("mnist:3.*.*", split=split).cache().repeat()
if is_training:
ds = ds.shuffle(10 * batch_size, seed=0)
ds = ds.batch(batch_size)
# return tfds.as_numpy(ds)
return iter(tfds.as_numpy(ds))
# load the data into memory and create batch iterators
train_batches = load_dataset("train", is_training=True, batch_size=1_000)
val_batches = load_dataset("train", is_training=False, batch_size=10_000)
test_batches = load_dataset("test", is_training=False, batch_size=10_000)
Explanation: Data
End of explanation
nclasses = 10
def net_fn(batch, sig):
"""Standard LeNet-300-100 MLP."""
x = batch["image"].astype(jnp.float32) / 255.0
# x has size (1000, 28, 28, 1)
D = np.prod(x.shape[1:]) # 784
# To match initialization of linear layer
# sigma = 1/sqrt(fan-in)
# https://dm-haiku.readthedocs.io/en/latest/api.html#id1
# w_init = hk.initializers.TruncatedNormal(stddev=stddev)
sizes = [D, 300, 100, nclasses]
sigmas = [sig / jnp.sqrt(fanin) for fanin in sizes]
mlp = hk.Sequential(
[
hk.Flatten(),
hk.Linear(sizes[1], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[0]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[2], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[1]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[3], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[2]), b_init=jnp.zeros),
]
)
return mlp(x)
# L2 regularizer will be added to loss
reg = 1e-4
Explanation: Model
End of explanation
net = hk.transform(partial(net_fn, sig=1))
lr = 1e-3
opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr)
# instantiate the model parameters --- requires a sample batch to get size
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# initialize the optimizer state
opt_state = opt_init(params_init)
def loss(params, batch):
logits = net.apply(params, None, batch)
labels = jax.nn.one_hot(batch["label"], 10)
l2_loss = 0.5 * sum(jnp.sum(jnp.square(p)) for p in jax.tree_leaves(params))
softmax_crossent = -jnp.mean(labels * jax.nn.log_softmax(logits))
return softmax_crossent + reg * l2_loss
@jax.jit
def accuracy(params, batch):
preds = net.apply(params, None, batch)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
@jax.jit
def train_step(i, opt_state, batch):
params = opt_get_params(opt_state)
dx = jax.grad(loss)(params, batch)
opt_state = opt_update(i, dx, opt_state)
return opt_state
print(params_init["linear"]["w"].shape)
def callback(step, params, train_eval, test_eval, print_every=500):
if step % print_every == 0:
# Periodically evaluate classification accuracy on train & test sets.
train_accuracy = accuracy(params, next(train_eval))
test_accuracy = accuracy(params, next(test_eval))
train_accuracy, test_accuracy = jax.device_get((train_accuracy, test_accuracy))
print(f"[Step {step}] Train / Test accuracy: " f"{train_accuracy:.3f} / {test_accuracy:.3f}.")
%%time
nsteps = 5000
for step in range(nsteps + 1):
opt_state = train_step(step, opt_state, next(train_batches))
params_sgd = opt_get_params(opt_state)
callback(step, params_sgd, val_batches, test_batches)
Explanation: SGD
End of explanation
lr = 5e-3
num_samples = 10 # number of samples to approximate the posterior
init_stddev = 0.01 # 0.1 # params sampled around params_init
# we initialize all weights to 0 since we will be sampling them anyway
# net_bayes = hk.transform(partial(net_fn, sig=0))
sampler_fns = jax_bayes.mcmc.rms_langevin_fns
seed = 0
key = jax.random.PRNGKey(seed)
sampler_init, sampler_propose, sampler_update, sampler_get_params = sampler_fns(
key, num_samples=num_samples, step_size=lr, init_stddev=init_stddev
)
@jax.jit
def accuracy_bayes(params_samples, batch):
# average the logits over the parameter samples
pred_fn = jax.vmap(net.apply, in_axes=(0, None, None))
preds = jnp.mean(pred_fn(params_samples, None, batch), axis=0)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
# the log-probability is the negative of the loss
logprob = lambda p, b: -loss(p, b)
# build the mcmc step. This is like the optimization step, but for sampling
@jax.jit
def mcmc_step(i, sampler_state, sampler_keys, batch):
# extract parameters
params = sampler_get_params(sampler_state)
# form a partial eval of logprob on the data
logp = lambda p: logprob(p, batch)
# evaluate *per-sample* gradients
fx, dx = jax.vmap(jax.value_and_grad(logp))(params)
# generate proposal states for the Markov chains
sampler_prop_state, new_keys = sampler_propose(i, dx, sampler_state, sampler_keys)
# we don't need to re-compute gradients for the accept stage (unadjusted Langevin)
fx_prop, dx_prop = fx, dx
# accept the proposal states for the markov chain
sampler_state, new_keys = sampler_update(i, fx, fx_prop, dx, sampler_state, dx_prop, sampler_prop_state, new_keys)
return jnp.mean(fx), sampler_state, new_keys
def callback_bayes(step, params, val_batches, test_batches, print_every=500):
if step % print_every == 0:
val_acc = accuracy_bayes(params, next(val_batches))
test_acc = accuracy_bayes(params, next(test_batches))
print(f"step = {step}" f" | val acc = {val_acc:.3f}" f" | test acc = {test_acc:.3f}")
%%time
#get a single sample of the params using the normal hk.init(...)
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# get a SamplerState object with `num_samples` params along dimension 0
# generated by adding Gaussian noise (see sampler_fns(..., init_dist='normal'))
sampler_state, sampler_keys = sampler_init(params_init)
# iterate the Markov chain
nsteps = 5000
for step in range(nsteps+1):
train_logprob, sampler_state, sampler_keys = \
mcmc_step(step, sampler_state, sampler_keys, next(train_batches))
params_samples = sampler_get_params(sampler_state)
callback_bayes(step, params_samples, val_batches, test_batches)
print(params_samples["linear"]["w"].shape) # 10 samples of the weights for first layer
Explanation: SGLD
End of explanation
test_batch = next(test_batches)
from jax_bayes.utils import entropy, certainty_acc
def plot_acc_vs_confidence(predict_fn, test_batch):
# plot how accuracy changes as we increase the required level of certainty
preds = predict_fn(test_batch) # (batch_size, n_classes) array of probabilities
acc, mask = certainty_acc(preds, test_batch["label"], cert_threshold=0)
thresholds = [0.1 * i for i in range(11)]
cert_accs, pct_certs = [], []
for t in thresholds:
cert_acc, cert_mask = certainty_acc(preds, test_batch["label"], cert_threshold=t)
cert_accs.append(cert_acc)
pct_certs.append(cert_mask.mean())
fig, ax = plt.subplots(1)
line1 = ax.plot(thresholds, cert_accs, label="accuracy at certainty", marker="x")
line2 = ax.axhline(y=acc, label="regular accuracy", color="black")
ax.set_ylabel("accuracy")
ax.set_xlabel("certainty threshold")
axb = ax.twinx()
line3 = axb.plot(thresholds, pct_certs, label="pct of certain preds", color="green", marker="x")
axb.set_ylabel("pct certain")
lines = line1 + [line2] + line3
labels = [l.get_label() for l in lines]
ax.legend(lines, labels, loc=6)
return fig, ax
Explanation: Uncertainty analysis
We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected.
End of explanation
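# Not part of the original notebook: a rough sketch of what a certainty-thresholded
# accuracy could look like, assuming "certainty" means the maximum predicted
# probability. The certainty_acc imported from jax_bayes.utils is what is actually
# used above; this sketch is only meant to make the idea concrete.
def certainty_acc_sketch(preds, labels, cert_threshold=0.5):
    certainty = jnp.max(preds, axis=-1)          # confidence of each prediction
    mask = certainty >= cert_threshold           # keep only the confident ones
    correct = jnp.argmax(preds, axis=-1) == labels
    return jnp.mean(correct[mask]), mask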
# plugin approximation to posterior predictive
@jax.jit
def posterior_predictive_plugin(params, batch):
logit_pp = net.apply(params, None, batch)
return jax.nn.softmax(logit_pp, axis=-1)
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgd.pdf")
plt.show()
Explanation: SGD
For the plugin estimate, the model is very confident on nearly all of the points.
End of explanation
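# Not part of the original notebook: a quick way to quantify the statement above,
# using a hypothetical 0.9 confidence cut-off.
plugin_probs = posterior_predictive_plugin(params_sgd, test_batch)
frac_above = float(jnp.mean(jnp.max(plugin_probs, axis=-1) > 0.9))
print(f"fraction of test predictions with max probability > 0.9: {frac_above:.3f}")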
def posterior_predictive_bayes(params_sampled, batch):
"""Computes the posterior_predictive P(class = c | inputs, params) using a histogram."""
pred_fn = lambda p: net.apply(p, jax.random.PRNGKey(0), batch)
pred_fn = jax.vmap(pred_fn)
logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes
pred_samples = jnp.argmax(logit_samples, axis=-1) # n_samples x batch_size
n_classes = logit_samples.shape[-1]
batch_size = logit_samples.shape[1]
probs = np.zeros((batch_size, n_classes))
for c in range(n_classes):
idxs = pred_samples == c
probs[:, c] = idxs.sum(axis=0)
return probs / probs.sum(axis=1, keepdims=True)
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgld.pdf")
plt.show()
Explanation: SGLD
End of explanation
fashion_ds = tfds.load("fashion_mnist:3.*.*", split="test").cache().repeat()
fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000))
fashion_test_batches = iter(fashion_test_batches)
fashion_batch = next(fashion_test_batches)
Explanation: Distribution shift
We now examine the behavior of the models on the Fashion MNIST dataset.
We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation.
End of explanation
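# Not part of the original notebook: one way to quantify the shift is to compare
# the mean predictive entropy on MNIST vs Fashion-MNIST for the Bayesian model
# (higher entropy = less confident). This is only a sketch; jax_bayes.utils.entropy
# (imported above) could be used instead.
def mean_pred_entropy(probs, eps=1e-12):
    return float(jnp.mean(-jnp.sum(probs * jnp.log(probs + eps), axis=-1)))
probs_mnist = posterior_predictive_bayes(params_samples, test_batch)
probs_fashion = posterior_predictive_bayes(params_samples, fashion_batch)
print("mean predictive entropy, MNIST:         ", mean_pred_entropy(probs_mnist))
print("mean predictive entropy, Fashion-MNIST: ", mean_pred_entropy(probs_fashion))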
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgd-fashion.pdf")
plt.show()
Explanation: SGD
We see that the plugin estimate is confident (but wrong!) on many of the predictions, which is undesirable.
If we consider a confidence threshold of 0.6,
the plugin approach predicts on about 80% of the examples,
even though the accuracy is only about 6% on these.
End of explanation
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgld-fashion.pdf")
plt.show()
Explanation: SGLD
If we consider a confidence threshold of 0.6,
the Bayesian approach predicts on less than 20% of the examples,
on which the accuracy is ~4%.
End of explanation |
12,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports and setup
Imports
Step4: Common functions
Step5: Get charges
Calculate RESP charges using Gaussian through submit_gaussian for use with GAFF.
Step6: Parameterize molecule in GAFF with ANTECHAMBER and ACPYPE
Note, ACPYPE was installed from this repository, which seems to be from the original author, though maybe not the one who put it onto pypi.
For the catalyst
Step7: Move molecules
In VMD, the molecules were moved so that they were not sitting on top of each other.
Solvate
As before, using DCM parameters and solvent box from virtualchemistry.org.
Step8: Minimize
Step9: Equilibrate
Step10: Setup and submit parallel tempering (PT)
Step11: The energies from the simulations can be read in as a pandas DataFrame using panedr and then analyzed or plotted to check on equilibration, convergence, etc.
Step12: Setup for several systems/molecules at once
Working based on what was done above (using some things that were defined up there as well).
Get charges
Step13: Copied over the g16.gesp files and renamed them for each molecule.
Make input files
Loaded amber/2016 module (and its dependencies).
antechamber -i TS1.gesp -fi gesp -o TS1.mol2 -fo mol2
acpype.py -i TS1.mol2 -b TS1-gesp --net_charge=1 -o gmx -d -c user
There was a warning for assigning bond types.
antechamber -i TS3.gesp -fi gesp -o TS3.mol2 -fo mol2
acpype.py -i TS3.mol2 -b TS3-gesp --net_charge=1 -o gmx -d -c user
Similar warning.
antechamber -i YCP.gesp -fi gesp -o YCP.mol2 -fo mol2
acpype.py -i YCP.mol2 -b YCP-gesp --net_charge=-1 -o gmx -d -c user
No similar warning here.
Step14: Move molecules
I presume I will again need to make the molecules non-overlapping, and that will be done manually in VMD.
Box and solvate
Step15: Minimize
Step16: Made index file (called index-ycp.ndx) with solutes and solvent groups.
SA equilibration
Step17: !!! Need to check distance on restraint !!!
Check equilibration
Step18: The volumes seem to look okay.
Started high (I did remove some solvents and it hadn't relaxed much), dropped quickly, then seemed to grow appropriately as the temperatures rose.
None seems to have boiled. | Python Code:
import re, os, sys, shutil
import shlex, subprocess
import glob
import pandas as pd
import panedr
import numpy as np
import MDAnalysis as mda
import nglview
import matplotlib.pyplot as plt
import parmed as pmd
import py
import scipy
from scipy import stats
from importlib import reload
from thtools import cd
from paratemp import copy_no_overwrite
from paratemp import geometries as gm
from paratemp import coordinate_analysis as ca
import paratemp.para_temp_setup as pts
import paratemp as pt
from gautools import submit_gaussian as subg
from gautools.tools import use_gen_template as ugt
Explanation: Imports and setup
Imports
End of explanation
def plot_prop_PT(edict, prop):
fig, axes = plt.subplots(4, 4, figsize=(16,16))
for i in range(16):
ax = axes.flat[i]
edict[i][prop].plot(ax=ax)
fig.tight_layout()
return fig, axes
def plot_e_props(df, labels, nrows=2, ncols=2):
fig, axes = plt.subplots(nrows, ncols, sharex=True)
for label, ax in zip(labels, axes.flat):
df[label].plot(ax=ax)
ax.set_title(label)
fig.tight_layout()
return fig, axes
def plot_rd(univ): # rd = reaction distance
univ.calculate_distances(rd=(20,39))
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
univ.data.rd.plot(ax=axes[0])
univ.data.rd.hist(ax=axes[1], grid=False)
print(f'reaction distance mean: {univ.data.rd.mean():.2f} and sd: {univ.data.rd.std():.2f}')
return fig, axes
def plot_hist_dist(univ, name, indexes=None):
if indexes is not None:
kwargs = {name: indexes}
univ.calculate_distances(**kwargs)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
univ.data[name].plot(ax=axes[0])
univ.data[name].hist(ax=axes[1], grid=False)
print(f'{name} distance mean: {univ.data[name].mean():.2f} and sd: {univ.data[name].std():.2f}')
def get_solvent_count_solvate(proc):
for line in proc.stdout.split('\n'):
m = re.search(r'(?:atoms\):\s+)(\d+)(?:\s+residues)', line)
if m:
return int(m.group(1))
else:
raise ValueError('Solvent count not found.')
def set_solv_count(n_gro, s_count,
res_name='DCM', prepend='unequal-'):
"""Remove solvent residues from the end of a gro file to match s_count.
This assumes all non-solvent molecules are listed in the input gro
file before the solvent residues."""
bak_name = os.path.join(os.path.dirname(n_gro),
prepend+os.path.basename(n_gro))
copy_no_overwrite(n_gro, bak_name)
with open(n_gro, 'r') as in_gro:
lines = in_gro.readlines()
for line in lines[2:]:
if res_name in line:
non_s_res_count = resid
break
else:
resid = int(line[:5])
res_count = s_count + non_s_res_count
# TODO check reasonability of this number
box = lines.pop()
while True:
line = lines.pop()
if int(line[:5]) > res_count:
continue
elif int(line[:5]) == res_count:
atom_count = line[15:20]
lines.append(line)
break
elif int(line[:5]) < res_count:
raise ValueError("Desired res "
"count is larger than "
"line's resid.\n" +
"res_count: {}\n".format(res_count) +
"line: {}".format(line))
lines[1] = atom_count + '\n'
lines.append(box)
with open(n_gro, 'w') as out_gro:
for line in lines:
out_gro.write(line)
def get_solv_count_top(n_top, res_name='DCM'):
"""Return residue count of specified residue from n_top."""
with open(n_top, 'r') as in_top:
mol_section = False
for line in in_top:
if line.strip().startswith(';'):
pass
elif not mol_section:
if re.search(r'\[\s*molecules\s*\]', line,
flags=re.IGNORECASE):
mol_section = True
else:
if res_name.lower() in line.lower():
return int(line.split()[1])
def set_solv_count_top(n_top, s_count,
res_name='DCM', prepend='unequal-'):
"""Set count of res_name residues in n_top.
This will make a backup copy of the top file with `prepend`
prepended to the name of the file."""
bak_name = os.path.join(os.path.dirname(n_top),
prepend+os.path.basename(n_top))
copy_no_overwrite(n_top, bak_name)
with open(n_top, 'r') as in_top:
lines = in_top.readlines()
with open(n_top, 'w') as out_top:
mol_section = False
for line in lines:
if line.strip().startswith(';'):
pass
elif not mol_section:
if re.search(r'\[\s*molecules\s*\]', line,
flags=re.IGNORECASE):
mol_section = True
else:
if res_name.lower() in line.lower():
line = re.sub(r'\d+', str(s_count), line)
out_top.write(line)
Explanation: Common functions
End of explanation
d_charge_params = dict(opt='SCF=tight Test Pop=MK iop(6/33=2) iop(6/42=6) iop(6/50=1)',
func='HF',
basis='6-31G*',
footer='\ng16.gesp\n\ng16.gesp\n\n')
l_scripts = []
s = subg.write_sub_script('01-charges/TS2.com',
executable='g16',
make_xyz='../TS2.pdb',
make_input=True,
ugt_dict={'job_name':'GPX TS2 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/R-NO2-CPA.com',
executable='g16',
make_xyz='../R-NO2-CPA.pdb',
make_input=True,
ugt_dict={'job_name':'GPX R-NO2-CPA charges',
'charg_mult':'-1 1',
**d_charge_params})
l_scripts.append(s)
l_scripts
subg.submit_scripts(l_scripts, batch=True, submit=True)
Explanation: Get charges
Calculate RESP charges using Gaussian through submit_gaussian for use with GAFF.
End of explanation
gpx = pmd.gromacs.GromacsTopologyFile('01-charges/GPX-ts.acpype/GPX-ts_GMX.top', xyz='01-charges/GPX-ts.acpype/GPX-ts_GMX.gro')
cpa = pmd.gromacs.GromacsTopologyFile('01-charges/CPA-gesp.acpype/CPA-gesp_GMX.top', xyz='01-charges/CPA-gesp.acpype/CPA-gesp_GMX.gro')
for res in gpx.residues:
if res.name == 'MOL':
res.name = 'GPX'
for res in cpa.residues:
if res.name == 'MOL':
res.name = 'CPA'
struc_comb = gpx + cpa
struc_comb
struc_comb.write('gpx-cpa-dry.top')
struc_comb.save('gpx-cpa-dry.gro')
Explanation: Parameterize molecule in GAFF with ANTECHAMBER and ACPYPE
Note, ACPYPE was installed from this repository, which seems to be from the original author, though maybe not the one who put it onto pypi.
For the catalyst:
Use antechamber to create mol2 file with Gaussian ESP charges (though wrong atom types and such, for now):
antechamber -i R-NO2-CPA.gesp -fi gesp -o R-NO2-CPA.mol2 -fo mol2
Use ACPYPE to use this mol2 file (and its GESP charges) to generate GROMACS input files:
acpype.py -i R-NO2-CPA.mol2 -b CPA-gesp --net_charge=-1 -o gmx -d -c user
For the reactant:
antechamber -i TS2.gesp -fi gesp -o TS2.mol2 -fo mol2
acpype.py -i TS2.mol2 -b GPX-ts --net_charge=1 -o gmx -c user
Then the different molecules can be combined using ParmEd.
End of explanation
f_dcm = py.path.local('~/GROMACS-basics/DCM-GAFF/')
f_solvate = py.path.local('02-solvate/')
sep_gro = py.path.local('gpx-cpa-sep.gro')
boxed_gro = f_solvate.join('gpx-cpa-boxed.gro')
box = '3.5 3.5 3.5'
solvent_source = f_dcm.join('dichloromethane-T293.15.gro')
solvent_top = f_dcm.join('dichloromethane.top')
solv_gro = f_solvate.join('gpx-cpa-dcm.gro')
top = py.path.local('../params/gpxTS-cpa-dcm.top')
verbose = True
solvent_counts, key = dict(), 'GPX'
with f_solvate.as_cwd():
## Make box
cl = shlex.split(f'gmx_mpi editconf -f {sep_gro} ' +
f'-o {boxed_gro} -box {box}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_editconf'] = proc.stdout
proc.check_returncode()
## Solvate
cl = shlex.split(f'gmx_mpi solvate -cp {boxed_gro} ' +
f'-cs {solvent_source} -o {solv_gro}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_solvate'] = proc.stdout
proc.check_returncode()
solvent_counts[key] = get_solvent_count_solvate(proc)
if verbose:
print(f'Solvated system into {solv_gro}')
struc_g_c = pmd.load_file('gpx-cpa-dry.top')
struc_dcm = pmd.load_file(str(f_dcm.join('dichloromethane.top')))
struc_g_c_d = struc_g_c + solvent_counts['GPX'] * struc_dcm
struc_g_c_d.save(str(top))
Explanation: Move molecules
In VMD, the molecules were moved so that they were not sitting on top of each other.
Solvate
As before, using DCM parameters and solvent box from virtualchemistry.org.
End of explanation
ppl = py.path.local
f_min = ppl('03-minimize/')
f_g_basics = py.path.local('~/GROMACS-basics/')
mdp_min = f_g_basics.join('minim.mdp')
tpr_min = f_min.join('min.tpr')
deffnm_min = f_min.join('min-out')
gro_min = deffnm_min + '.gro'
with f_min.as_cwd():
## Compile tpr
if not tpr_min.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_min} '
f'-c {solv_gro} '
f'-p {top} '
f'-o {tpr_min}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_em'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled em tpr to {tpr_min}')
elif verbose:
print(f'em tpr file already exists ({tpr_min})')
## Run minimization
if not gro_min.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_min} '
f'-deffnm {deffnm_min} ')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_em'] = proc.stdout
# TODO Get the potential energy from this output
proc.check_returncode()
if verbose:
print(f'Ran {key} em to make {gro_min}')
elif verbose:
print(f'em output gro already exists (gro_min)')
Explanation: Minimize
End of explanation
f_equil = ppl('04-equilibrate/')
plumed = f_equil.join('plumed.dat')
mdp_equil = f_g_basics.join('npt-298.mdp')
tpr_equil = f_equil.join('equil.tpr')
deffnm_equil = f_equil.join('equil-out')
gro_equil = deffnm_equil + '.gro'
gro_input = gro_min
with f_equil.as_cwd():
## Compile equilibration
if not tpr_equil.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_equil} '
f'-c {gro_input} '
f'-p {top} '
f'-o {tpr_equil}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_equil'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled equil tpr to {tpr_equil}')
elif verbose:
print(f'equil tpr file already exists ({tpr_equil})')
## Run equilibration
if not gro_equil.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_equil} '
f'-deffnm {deffnm_equil} '
f'-plumed {plumed}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_equil'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Ran {key} equil to make {gro_equil}')
elif verbose:
print(f'equil output gro already exists (gro_equil)')
Explanation: Equilibrate
End of explanation
f_pt = ppl('05-PT/')
template = f_pt.join('template-mdp.txt')
index = ppl('index.ndx')
sub_templ = f_g_basics.join('sub-template-128.sub')
d_sub_templ = dict(tpr_base = 'TOPO/npt',
deffnm = 'PT-out',
name = 'GPX-PT',
plumed = plumed,
)
scaling_exponent = 0.025
maxwarn = 0
start_temp = 298.
verbose = True
skip_existing = True
jobs = []
failed_procs = []
for key in ['GPX']:
kwargs = {'template': str(template),
'topology': str(top),
'structure': str(gro_equil),
'index': str(index),
'scaling_exponent': scaling_exponent,
'start_temp': start_temp,
'maxwarn': maxwarn}
with f_pt.as_cwd():
try:
os.mkdir('TOPO')
except FileExistsError:
if skip_existing:
print(f'Skipping {key} because it seems to '
'already be done.\nMoving on...')
continue
with cd('TOPO'):
print(f'Now in {os.getcwd()}\nAttempting to compile TPRs...')
pts.compile_tprs(**kwargs)
print('Done compiling. Moving on...')
print(f'Now in {os.getcwd()}\nWriting submission script...')
with sub_templ.open(mode='r') as templ_f, \
open('gromacs-start-job.sub', 'w') as sub_s:
[sub_s.write(l.format(**d_sub_templ)) for l in templ_f]
print('Done.\nNow submitting job...')
cl = ['qsub', 'gromacs-start-job.sub']
proc = subprocess.run(cl,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
if proc.returncode == 0:
output = proc.stdout
jobs.append(re.search('[0-9].+\)', output).group(0))
print(output, '\nDone.\nMoving to next...')
else:
print('\n\n'+5*'!!!---'+'\n')
print(f'Error with calling qsub on {key}')
print('Command line input was', cl)
print('Check input and try again manually.'
'\nMoving to next anyway...')
failed_procs.append(proc)
print('-----Done-----\nSummary of jobs submitted:')
for job in jobs:
print(job)
Explanation: Setup and submit parallel tempering (PT)
End of explanation
e_05s = dict()
for i in range(16):
e_05s[i] = panedr.edr_to_df(f'05-PT/PT-out{i}.edr')
fig, axes = plot_prop_PT(e_05s, 'Pressure')
Explanation: The energies from the simulations can be read in as a pandas DataFrame using panedr and then analyzed or plotted to check on equilibration, convergence, etc.
End of explanation
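# Not part of the original notebook: a crude numerical drift check to go with the
# plots above -- compare the mean of a property over the first and second half of
# each replica's trajectory (using the Pressure column that was just plotted).
for i, df in e_05s.items():
    half = len(df) // 2
    first = df['Pressure'].iloc[:half].mean()
    second = df['Pressure'].iloc[half:].mean()
    print(f"replica {i:2d}: first-half mean = {first:8.2f}, second-half mean = {second:8.2f}")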
l_scripts = []
s = subg.write_sub_script('01-charges/TS1.com',
executable='g16',
make_xyz='../TS1protonated.mol2',
make_input=True,
ugt_dict={'job_name':'GPX TS1 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/TS3.com',
executable='g16',
make_xyz='../TS3protonated.mol2',
make_input=True,
ugt_dict={'job_name':'GPX TS3 charges',
'charg_mult':'+1 1',
**d_charge_params})
l_scripts.append(s)
s = subg.write_sub_script('01-charges/anti-cat-yamamoto.com',
executable='g16',
make_xyz='../R-Yamamoto-Cat.pdb',
make_input=True,
ugt_dict={'job_name':
'yamamoto catalyst charges',
'charg_mult':'-1 1',
**d_charge_params})
l_scripts.append(s)
l_scripts
subg.submit_scripts(l_scripts, batch=True, submit=True)
Explanation: Setup for several systems/molecules at once
Working based on what was done above (using some things that were defined up there as well).
Get charges
End of explanation
ts1 = pmd.gromacs.GromacsTopologyFile(
'01-charges/TS1-gesp.acpype/TS1-gesp_GMX.top',
xyz='01-charges/TS1-gesp.acpype/TS1-gesp_GMX.gro')
ts3 = pmd.gromacs.GromacsTopologyFile(
'01-charges/TS3-gesp.acpype/TS3-gesp_GMX.top',
xyz='01-charges/TS3-gesp.acpype/TS3-gesp_GMX.gro')
ycp = pmd.gromacs.GromacsTopologyFile(
'01-charges/YCP-gesp.acpype/YCP-gesp_GMX.top',
xyz='01-charges/YCP-gesp.acpype/YCP-gesp_GMX.gro')
for res in ts1.residues:
if res.name == 'MOL':
res.name = 'TS1'
for res in ts3.residues:
if res.name == 'MOL':
res.name = 'TS3'
for res in ycp.residues:
if res.name == 'MOL':
res.name = 'YCP'
ts1_en = ts1.copy(pmd.gromacs.GromacsTopologyFile)
ts3_en = ts3.copy(pmd.gromacs.GromacsTopologyFile)
ts1_en.coordinates = - ts1.coordinates
ts3_en.coordinates = - ts3.coordinates
sys_ts1 = ts1 + ycp
sys_ts1_en = ts1_en + ycp
sys_ts3 = ts3 + ycp
sys_ts3_en = ts3_en + ycp
sys_ts1.write('ts1-ycp-dry.top')
sys_ts3.write('ts3-ycp-dry.top')
sys_ts1.save('ts1-ycp-dry.gro')
sys_ts1_en.save('ts1_en-ycp-dry.gro')
sys_ts3.save('ts3-ycp-dry.gro')
sys_ts3_en.save('ts3_en-ycp-dry.gro')
Explanation: Copied over the g16.gesp files and renamed them for each molecule.
Make input files
Loaded amber/2016 module (and its dependencies).
antechamber -i TS1.gesp -fi gesp -o TS1.mol2 -fo mol2
acpype.py -i TS1.mol2 -b TS1-gesp --net_charge=1 -o gmx -d -c user
There was a warning for assigning bond types.
antechamber -i TS3.gesp -fi gesp -o TS3.mol2 -fo mol2
acpype.py -i TS3.mol2 -b TS3-gesp --net_charge=1 -o gmx -d -c user
Similar warning.
antechamber -i YCP.gesp -fi gesp -o YCP.mol2 -fo mol2
acpype.py -i YCP.mol2 -b YCP-gesp --net_charge=-1 -o gmx -d -c user
No similar warning here.
End of explanation
f_dcm = py.path.local('~/GROMACS-basics/DCM-GAFF/')
f_solvate = py.path.local('37-solvate-anti/')
box = '3.7 3.7 3.7'
solvent_source = f_dcm.join('dichloromethane-T293.15.gro')
solvent_top = f_dcm.join('dichloromethane.top')
solv_gro = f_solvate.join('gpx-cpa-dcm.gro')
ts1_top = ppl('../params/ts1-ycp-dcm.top')
ts3_top = ppl('../params/ts3-ycp-dcm.top')
l_syss = ['TS1', 'TS1_en', 'TS3', 'TS3_en']
verbose = True
solvent_counts = dict()
for key in l_syss:
sep_gro = ppl(f'{key.lower()}-ycp-dry.gro')
if not sep_gro.exists():
raise FileNotFoundError(f'{sep_gro} does not exist')
boxed_gro = f'{key.lower()}-ycp-box.gro'
solv_gro = f'{key.lower()}-ycp-dcm.gro'
with f_solvate.ensure_dir().as_cwd():
## Make box
cl = shlex.split(f'gmx_mpi editconf -f {sep_gro} ' +
f'-o {boxed_gro} -box {box}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_editconf'] = proc.stdout
proc.check_returncode()
## Solvate
cl = shlex.split(f'gmx_mpi solvate -cp {boxed_gro} ' +
f'-cs {solvent_source} -o {solv_gro}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_solvate'] = proc.stdout
proc.check_returncode()
solvent_counts[key] = get_solvent_count_solvate(proc)
if verbose:
print(f'Solvated system into {solv_gro}')
# min_solv_count = min(solvent_counts.values())
min_solv_count = 328 # want to match with syn calculations
if min(solvent_counts.values()) < min_solv_count:
raise ValueError('At least one of the structures has <328 DCMs.\n'
'Check and/or make the box larger')
for key in l_syss:
solv_gro = f'{key.lower()}-ycp-dcm.gro'
with f_solvate.as_cwd():
set_solv_count(solv_gro, min_solv_count)
struc_ts1 = pmd.load_file('ts1-ycp-dry.top')
struc_ts3 = pmd.load_file('ts3-ycp-dry.top')
struc_dcm = pmd.load_file(str(f_dcm.join('dichloromethane.top')))
struc_ts1_d = struc_ts1 + min_solv_count * struc_dcm
struc_ts1_d.save(str(ts1_top))
struc_ts3_d = struc_ts3 + min_solv_count * struc_dcm
struc_ts3_d.save(str(ts3_top))
Explanation: Move molecules
I presume I will again need to make the molecules non-overlapping, and that will be done manually in VMD.
Box and solvate
End of explanation
f_min = ppl('38-relax-anti/')
f_min.ensure_dir()
f_g_basics = py.path.local('~/GROMACS-basics/')
mdp_min = f_g_basics.join('minim.mdp')
d_tops = dict(TS1=ts1_top, TS1_en=ts1_top, TS3=ts3_top, TS3_en=ts3_top)
for key in l_syss:
solv_gro = ppl(f'37-solvate-anti/{key.lower()}-ycp-dcm.gro')
tpr_min = f_min.join(f'{key.lower()}-min.tpr')
deffnm_min = f_min.join(f'{key.lower()}-min-out')
gro_min = deffnm_min + '.gro'
top = d_tops[key]
with f_min.as_cwd():
## Compile tpr
if not tpr_min.exists():
cl = shlex.split(f'gmx_mpi grompp -f {mdp_min} '
f'-c {solv_gro} '
f'-p {top} '
f'-o {tpr_min}')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_grompp_em'] = proc.stdout
proc.check_returncode()
if verbose:
print(f'Compiled em tpr to {tpr_min}')
elif verbose:
print(f'em tpr file already exists ({tpr_min})')
## Run minimization
if not gro_min.exists():
cl = shlex.split('gmx_mpi mdrun '
f'-s {tpr_min} '
f'-deffnm {deffnm_min} ')
proc = subprocess.run(cl, universal_newlines=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
outputs[key+'_mdrun_em'] = proc.stdout
# TODO Get the potential energy from this output
proc.check_returncode()
if verbose:
print(f'Ran {key} em to make {gro_min}')
elif verbose:
print(f'em output gro already exists (gro_min)')
Explanation: Minimize
End of explanation
f_pt = ppl('38-relax-anti/')
template = ppl('33-SA-NPT-rest-no-LINCS/template-mdp.txt')
index = ppl('../params/index-ycp.ndx')
scaling_exponent = 0.025
maxwarn = 0
start_temp = 298.
nsims = 16
verbose = True
skip_existing = True
jobs = []
failed_procs = []
for key in l_syss:
d_sub_templ = dict(
tpr = f'{key.lower()}-TOPO/npt',
deffnm = f'{key.lower()}-SA-out',
name = f'{key.lower()}-SA',
nsims = nsims,
tpn = 16,
cores = 128,
multi = True,
)
gro_equil = f_min.join(f'{key.lower()}-min-out.gro')
top = d_tops[key]
kwargs = {'template': str(template),
'topology': str(top),
'structure': str(gro_equil),
'index': str(index),
'scaling_exponent': scaling_exponent,
'start_temp': start_temp,
'maxwarn': maxwarn,
'number': nsims,
'grompp_exe': 'gmx_mpi grompp'}
with f_pt.as_cwd():
try:
os.mkdir(f'{key.lower()}-TOPO/')
except FileExistsError:
if (os.path.exists(f'{key.lower()}-TOPO/temperatures.dat') and
skip_existing):
print(f'Skipping {key} because it seems to '
'already be done.\nMoving on...')
continue
with cd(f'{key.lower()}-TOPO/'):
print(f'Now in {os.getcwd()}\nAttempting to compile TPRs...')
pts.compile_tprs(**kwargs)
print('Done compiling. Moving on...')
print(f'Now in {os.getcwd()}\nWriting submission script...')
lp_sub = pt.sim_setup.make_gromacs_sub_script(
f'gromacs-start-{key}-job.sub', **d_sub_templ)
print('Done.\nNow submitting job...')
cl = shlex.split(f'qsub {lp_sub}')
proc = subprocess.run(cl,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True)
if proc.returncode == 0:
output = proc.stdout
jobs.append(re.search('[0-9].+\)', output).group(0))
print(output, '\nDone.\nMoving to next...')
else:
print('\n\n'+5*'!!!---'+'\n')
print(f'Error with calling qsub on {key}')
print('Command line input was', cl)
print('Check input and try again manually.'
'\nMoving to next anyway...')
failed_procs.append(proc)
print('-----Done-----\nSummary of jobs submitted:')
for job in jobs:
print(job)
Explanation: Made index file (called index-ycp.ndx) with solutes and solvent groups.
SA equilibration
End of explanation
e_38s = dict()
for key in l_syss:
deffnm = f'{key.lower()}-SA-out'
e_38s[key] = dict()
d = e_38s[key]
for i in range(16):
d[i] = panedr.edr_to_df(f'38-relax-anti/{deffnm}{i}.edr')
for key in l_syss:
d = e_38s[key]
fig, axes = plot_prop_PT(d, 'Volume')
Explanation: !!! Need to check distance on restraint !!!
Check equilibration
End of explanation
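# Not part of the original notebook: a rough numerical check to go along with the
# volume plots above -- the relative volume drift over the last half of each
# replica, as a crude test that nothing is boiling off.
for sys_key in l_syss:
    for i, df in e_38s[sys_key].items():
        v = df['Volume'].iloc[len(df) // 2:]
        drift = (v.iloc[-1] - v.iloc[0]) / v.iloc[0]
        print(f"{sys_key} replica {i:2d}: relative volume drift = {drift:+.2%}")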
for key in l_syss:
d = e_38s[key]
fig, ax = plt.subplots()
for key in list(d.keys()):
ax.hist(d[key]['Total Energy'], bins=100)
del d[key]
Explanation: The volumes seem to look okay.
Started high (I did remove some solvents and it hadn't relaxed much), dropped quickly, then seemed to grow appropriately as the temperatures rose.
None seems to have boiled.
End of explanation |
12,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas
CLEPY - August Module of the month
Anurag Saxena
@_asaxena
Pandas - Python Data Analysis Library
pandas.pydata.org
Open Source
High Performance
Easy to use Data Structures and Data Analysis Tools
Step1: There are two data structures in pandas that you will encounter the most
Step2: DataFrame
Represents a tabular, spreadsheet-like data structure.
It has both row and column index.
You can think about it as a dict of Series but sharing the same index.
If you work with pandas, you are mostly working in dataframes.
Step3: Reading data from File
Pandas can read and write to a multitude of file formats | Python Code:
import pandas as pd
import numpy as np
Explanation: Pandas
CLEPY - August Module of the month
Anurag Saxena
@_asaxena
Pandas - Python Data Analysis Library
pandas.pydata.org
Open Source
High Performance
Easy to use Data Structures and Data Analysis Tools
End of explanation
obj = pd.Series([1,3,4,5,6,7,8,9])
obj
# You can define a custom index for series data
obj_c = pd.Series([1,2,4], index=['a','b','c'])
obj_c
Explanation: There are two data structures in pandas that you will encounter the most:
- Series
- DataFrame
Series
Nothing but a one-dimensional array-like object containing an array of data
End of explanation
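# Not part of the original notebook: with a custom index, elements can be accessed
# either by label or by position.
obj_c['b']       # label-based access -> 2
obj_c.iloc[0]    # positional access -> 1
obj_c.index      # the index labels
obj_c.values     # the underlying numpy array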
df_1 = pd.DataFrame(np.random.randint(0,100,size=(100,4)), columns=list('ABCD'))
print(type(df_1))
df_1.head()
df_1.tail(3)
df_1.index
df_1.columns
df_1.values
df_1.describe()
df_1.mean()
df_1
df_1['A']
df_1[0:3]
Explanation: DataFrame
Represents a tabular, spreadsheet-like data structure.
It has both row and column index.
You can think about it as a dict of Series but sharing the same index.
If you work with pandas, you are mostly working in dataframes.
End of explanation
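# Not part of the original notebook: the "dict of Series sharing the same index"
# view of a DataFrame, made explicit with a small hypothetical example.
prices = pd.Series([1.0, 2.0, 3.0], index=['a', 'b', 'c'])
counts = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
df_from_dict = pd.DataFrame({'price': prices, 'count': counts})
df_from_dict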
import time
t_start = time.time()
apple_df = pd.read_csv('https://raw.githubusercontent.com/matplotlib/sample_data/master/aapl.csv')
t_end = time.time()
print (t_end - t_start)
apple_df.head()
apple_describe_time_start = time.time()
apple_df.describe()
apple_describe_time_end = time.time()
print (apple_describe_time_end - apple_describe_time_start)
# pd.set_eng_float_format(accuracy=2, use_eng_prefix=True)
apple_df.mean()
Explanation: Reading data from File
Pandas can read and write to a multitude of file formats
End of explanation |
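# Not part of the original notebook: a few of the other readers/writers, as a
# sketch (the file names here are hypothetical).
apple_df.to_csv('aapl_copy.csv', index=False)   # write back out to CSV
apple_df.to_json('aapl.json')                   # write to JSON
round_trip = pd.read_json('aapl.json')          # and read it back
round_trip.head()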
12,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple implementation of network propagation from Vanunu et al.
Author
Step1: Load the interactome from Barabasi paper
Interactome downloaded from supplemental materials of http://science.sciencemag.org/content/347/6224/1257601
Step2: Load a focal gene set
Gene lists should follow the format shown here for kidney_diseases.txt and epilepsy_genes.txt, and should be in the entrez ID format
Step3: Network propagation from seed nodes
First calculate the degree-corrected version of adjacency matrix
Network propagation simulation follows methods in http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000641
Step4: Propagate heat from seed disease, examine the community structure of hottest nodes
Step5: Plot the hot subnetwork
Create subgraphs from interactome containing only disease genes
Sort the heat vector (Fnew), and select the top_N hottest genes to plot
Step6: Set the node positions using nx.spring_layout. Parameter k controls default spacing between nodes (lower k brings the nodes closer together, higher k pushes them apart)
Step7: What are the top N genes? Print out gene symbols
These are the genes which are likely to be related to input gene set
(Convert from entrez ID to gene symbol using MyGene.info) | Python Code:
# import some useful packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import networkx as nx
import pandas as pd
import random
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
import sys
#sys.path.append('/Users/brin/Google Drive/UCSD/genome_interpreter_docs/barabasi_disease_distances/barabasi_incomplete_interactome/source/')
sys.path.append('source/')
import separation
import plotting_results
import network_prop
import imp
imp.reload(separation)
imp.reload(plotting_results)
imp.reload(network_prop)
% matplotlib inline
Explanation: Simple implementation of network propagation from Vanunu et al.
Author: Brin Rosenthal ([email protected])
March 24, 2016
Note: data and code for this notebook may be found in the 'data' and 'source' directories
End of explanation
# load the interactome network (use their default network)
Gint = separation.read_network('data/DataS1_interactome.tsv')
# remove self links
separation.remove_self_links(Gint)
# Get rid of nodes with no edges
nodes_degree = Gint.degree()
nodes_0 = [n for n in nodes_degree.keys() if nodes_degree[n]==0]
Gint.remove_nodes_from(nodes_0)
Explanation: Load the interactome from Barabasi paper
Interactome downloaded from supplemental materials of http://science.sciencemag.org/content/347/6224/1257601 (Menche, Jörg, et al. "Uncovering disease-disease relationships through the incomplete interactome." Science 347.6224 (2015): 1257601.)
<img src="screenshots/barabasi_abstract.png" width="600" height="600">
We need a reliable background interactome in order to correctly calculate localization and co-localization properties of node sets
We have a few choices in this decision, three of which are outlined below:
<img src="screenshots/which_interactome.png" width="800" height="800">
End of explanation
genes_KID = separation.read_gene_list('kidney_diseases.txt')
genes_EPI = separation.read_gene_list('epilepsy_genes.txt')
# set disease name and focal genes here
dname = 'kidney'
genes_focal = genes_KID
Explanation: Load a focal gene set
Gene lists should follow the format shown here for kidney_diseases.txt and epilepsy_genes.txt, and should be in the entrez ID format
End of explanation
Wprime= network_prop.normalized_adj_matrix(Gint)
Explanation: Network propagation from seed nodes
First calculate the degree-corrected version of adjacency matrix
Network propagation simulation follows methods in http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000641 (Vanunu, Oron, et al. "Associating genes and protein complexes with disease via network propagation." PLoS Comput Biol 6.1 (2010): e1000641.)
<img src="screenshots/vanunu_abstract.png">
Calculate the degree-normalized adjacency matrix, using the network_prop.normalized_adj_matrix function
End of explanation
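# Not part of the original notebook: a rough sketch of the Vanunu et al. style
# propagation, assuming the usual formulation W' = D^(-1/2) A D^(-1/2) and
# F_{t+1} = alpha * W' F_t + (1 - alpha) * Y, where Y marks the seed genes.
# The network_prop functions used in this notebook are the real implementation;
# this is only meant to make the math concrete.
def normalized_adj_sketch(G):
    A = nx.adjacency_matrix(G).toarray()
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))   # degree-0 nodes were removed above
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
def propagate_sketch(Wp, Y, alpha=0.5, num_its=20):
    F = Y.copy()
    for _ in range(num_its):
        F = alpha * Wp.dot(F) + (1 - alpha) * Y
    return F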
seed_nodes = list(np.intersect1d(list(genes_focal),Gint.nodes()))
alpha=.5 # this parameter controls how fast the heat dissipates
Fnew = network_prop.network_propagation(Gint,Wprime,seed_nodes,alpha=alpha,num_its=20)
Explanation: Propagate heat from seed disease, examine the community structure of hottest nodes
End of explanation
Fsort = Fnew.sort(ascending=False)
top_N = 200
F_top_N = Fnew.head(top_N)
gneigh_top_N = list(F_top_N.index)
G_neigh_N = Gint.subgraph(gneigh_top_N)
# pull out some useful subgraphs for use in plotting functions
# find genes which are neighbors of seed genes
genes_in_graph = list(np.intersect1d(Gint.nodes(),list(genes_focal)))
G_focal=G_neigh_N.subgraph(list(genes_in_graph))
Explanation: Plot the hot subnetwork
Create subgraphs from interactome containing only disease genes
Sort the heat vector (Fnew), and select the top_N hottest genes to plot
End of explanation
pos = nx.spring_layout(G_neigh_N,k=.03) # set the node positions
plotting_results.plot_network_2_diseases(G_neigh_N,pos,G_focal,d1name=dname,saveflag=False)
nx.draw_networkx_nodes(G_neigh_N,pos=pos,node_color=Fnew[G_neigh_N.nodes()],cmap='YlOrRd',node_size=30,
vmin=0,vmax=max(Fnew)/3)
nx.draw_networkx_edges(G_neigh_N,pos=pos,edge_color='white',alpha=.2)
plt.title('Top '+str(top_N)+' genes propagated from '+dname+': alpha = ' + str(alpha),color='white',fontsize=16,y=.95)
plt.savefig('heat_prop_network.png',dpi=200) # save the figure here
Explanation: Set the node positions using nx.spring_layout. Parameter k controls default spacing between nodes (lower k brings the nodes closer together, higher k pushes them apart)
End of explanation
import mygene
mg = mygene.MyGeneInfo()
# print out the names of the top N genes (that don't include the seed set)
focal_group = list(F_top_N.index)
focal_group = np.setdiff1d(focal_group,list(genes_focal))
top_heat_focal = F_top_N[focal_group]
focal_temp = mg.getgenes(focal_group)
focal_entrez_names = [str(x['entrezgene']) for x in focal_temp if 'symbol' in x.keys()]
focal_gene_names = [str(x['symbol']) for x in focal_temp if 'symbol' in x.keys()]
top_heat_df = pd.DataFrame({'gene_symbol':focal_gene_names,'heat':top_heat_focal[focal_entrez_names]})
top_heat_df = top_heat_df.sort('heat',ascending=False)
# print the top 25 related genes, along with their heat values
top_heat_df.head(25)
Explanation: What are the top N genes? Print out gene symbols
These are the genes which are likely to be related to input gene set
(Convert from entrez ID to gene symbol using MyGene.info)
End of explanation |
12,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions from Stream to Stream
This module describes one way of creating functions from a single stream to a single stream. Other ways of mapping a single input stream to a single output stream are described elsewhere.
Mapping a single input element to a single output element
@fmap_e
We use the decorator @fmap_e to convert a function that maps an element to an element into a function that maps a stream into a stream.
@fmap_e
def f(v): return v+10
Step1: map_element
In some situations, you may not want to decorate a function. In these cases you can use map_element
map_element(func, in_stream, out_stream, state=None, name=None, kwargs)
where func is the function applied to each element of the input stream.
Next, we implement the previous example without using decorators. Note that you have to declare x AND y as streams, and specify the relation between x and y by calling map_element before extending stream x.
Step2: Mapping element to element
Step3: Mapping element to element
Step4: Same example using map_element instead of a decorator
Step5: Saving state in an object
You can save the state of a stream in an object such as a dict as shown in the following example of the Fibonacci sequence.
Step6: Filtering elements in a stream
We are given a function f that returns a Boolean. We apply the decorator @filter_e to get a function that takes an input stream and returns an output stream consisting of those elements in the input stream for which f returns True.
In the following example, positive(v) returns True exactly when v is positive. After we apply the decorator, f becomes a function that reads an input stream and returns a stream consisting of the input stream's positive values.
Step7: Using filter_element instead of a decorator
Just as you may prefer to use map_element instead of the decorator fmap_e in some situations, you may also prefer to use filter_element instead of the decorator filter_e. The previous example, implemented without decorators, is given next.
Step8: Function that maps list to list
In some cases, using functions that map lists to lists is more convenient than functions that map element to element.
When such a function is decorated with @fmap_l, the function becomes one that maps streams to streams.
Example
Step9: Example
Step10: Incremental computations from stream to stream
We can decorate the incremental computation from list to list to obtain a computation from stream to stream. This is illustrated in the next example.
Step11: Example with state and keyword argument
We want to output the elements of the accumulated stream that exceed a threshold. For example, if a stream x is [10, 20, -30, 50, -40] then the accumulation stream is [10, 30, 0, 50, 10] and the elements of the accumulated stream that exceed a threshold of 25 are [30, 50].
Step12: Example
Step13: Using list_element instead of the decorator fmap_l
The next example illustrates how map_element can be used with state and keyword arguments. It is the same as the previous example, except that it doesn't use decorators. | Python Code:
import os
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.op import map_element
from IoTPy.agent_types.basics import fmap_e
from IoTPy.helper_functions.recent_values import recent_values
@fmap_e
def f(v): return v+10
# f is a function that maps a stream to a stream
def example():
x = Stream()
y = f(x)
x.extend([1, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Functions from Stream to Stream
This module describes one way of creating functions from a single stream to a single stream. Other ways of mapping a single input stream to a single output stream are described elsewhere.
Mapping a single input element to a single output element
@fmap_e
We use the decorator @fmap_e to convert a function that maps an element to an element into a function that maps a stream into a stream.
@fmap_e
def f(v): return v+10
Creates a function f from a stream to a stream where the n-th element of the output stream is (undecorated) f applied to the n-th element of the input stream.
If y = f(x) then y[n] = x[n] + 10
End of explanation
def example():
def f(v): return v+10
x, y = Stream(), Stream()
map_element(func=f, in_stream=x, out_stream=y)
x.extend([1, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: map_element
In some situations, you may not want to decorate a function. In these cases you can use map_element
map_element(func, in_stream, out_stream, state=None, name=None, kwargs)
where func is the function applied to each element of the input stream.
Next, we implement the previous example without using decorators. Note that you have to declare x AND y as streams, and specify the relation between x and y by calling map_element before extending stream x.
End of explanation
@fmap_e
def f(v, addend, multiplier): return v * multiplier + addend
# f is a function that maps a stream to a stream
def example():
x = Stream()
# Specify the keyword argument: addend
y = f(x, addend=20, multiplier=2)
x.extend([1, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Mapping element to element: function with keyword arguments
The function
f(v, addend, multiplier) returns v * multiplier + addend,
mapping v to the return value. The first parameter v is an element of an input stream, and the arguments addend and multiplier are keyword arguments of the function.
Decorating the function with @fmap_e gives a function that maps a stream to a stream.
End of explanation
@fmap_e
def f(v, state, multiplier):
output = (v + state)*multiplier
next_state = v + state
return output, next_state
# f is a function that maps a stream to a stream
def example():
x = Stream()
# Specify the keyword argument: multiplier
y = f(x, state=0, multiplier=2)
x.extend([1, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Mapping element to element: Function with state
Strictly speaking a function cannot have state; however, we bend the definition here to allow functions with states that operate on streams.
Look at this example: The input and output streams of a function are x and y, respectively; and we want:
y[n] = (x[0] + x[1] + ... + x[n]) * multiplier
where multiplier is an argument.
We can implement a function where its state before the n-th application of the function is:
x[0] + x[1] + ... + x[n-1],
and its state after the n-th application is:
x[0] + x[1] + ... + x[n-1] + x[n]
The state is updated by adding x[n].
We can capture the state of a function by specifying a special keyword argument state and specifying the initial state in the call to the function. The function must return two values: the next output and the next state.
In this example, the function has 3 parameters: the next element of the stream, state, and multiplier. The state and multiplier are keyword arguments.
End of explanation
def f(v, state, multiplier):
output = (v + state)*multiplier
next_state = v + state
return output, next_state
def example():
x, y = Stream(), Stream()
map_element(func=f, in_stream=x, out_stream=y, state=0, multiplier=2)
x.extend([1, 2, 3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Same example using map_element instead of a decorator
End of explanation
def example():
x = Stream('x')
# Object in which state is saved
s = {'a':0, 'b':1}
@fmap_e
def fib(v, fib):
# Update state
fib['a'], fib['b'] = fib['a'] + fib['b'], fib['a']
return fib['a'] + v
# Declare stream y
y = fib(x, fib=s)
x.extend([0, 0, 0, 0, 0, 0, 0])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Saving state in an object
You can keep the state in a mutable object such as a dict, as shown in the following example based on the Fibonacci sequence.
End of explanation
from IoTPy.agent_types.basics import filter_e
@filter_e
def positive(v, threshold): return v > threshold
# f is a function that maps a stream to a stream
def example():
x = Stream()
y = positive(x, threshold=0)
x.extend([-1, 2, -3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Filtering elements in a stream
We are given a function f that returns a Boolean. We apply the decorator @filter_e to get a function that takes an input stream and returns an output stream consisting of those elements in the input stream for which f returns True.
In the following example, positive(v, threshold) returns True exactly when v exceeds the threshold. After we apply the decorator, positive becomes a function that reads an input stream and returns a stream consisting of the input stream's elements that exceed the threshold (here, its positive values, since threshold=0).
End of explanation
from IoTPy.agent_types.op import filter_element
def example():
def positive(v, threshold): return v > threshold
x, y = Stream(), Stream()
filter_element(func=positive, in_stream=x, out_stream=y, threshold=0)
x.extend([-1, 2, -3, 4])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Using filter_element instead of a decorator
Just as you may prefer to use map_element instead of the decorator fmap_e in some situations, you may also prefer to use filter_element instead of the decorator filter_e. The previous example, implemented without decorators, is given next.
End of explanation
from IoTPy.agent_types.basics import fmap_l
@fmap_l
def increment_odd_numbers(the_list):
return [v+1 if v%2 else v for v in the_list]
def example():
x = Stream()
y = increment_odd_numbers(x)
x.extend([0, 1, 2, 3, 4, 5, 6])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Function that maps list to list
In some cases, using functions that map lists to lists is more convenient than functions that map element to element.
When such a function is decorated with @fmap_l, the function becomes one that maps streams to streams.
Example: Decorate the function increment_odd_numbers
End of explanation
from itertools import accumulate
def incremental_accumulate(the_list, state):
print ("the_list ", the_list)
output_list = [v + state for v in list(accumulate(the_list))]
next_state = output_list[-1]
return output_list, next_state
def example():
x = [1, 2, 3]
y, state = incremental_accumulate(x, state=0)
print ('y is ', y)
x.extend([4, 5, 6, 7])
y, state = incremental_accumulate(x, state=0)
print ('y is ', y)
example()
Explanation: Example: incremental computations from list to list
Given a list x we can generate a list y where y[j] = x[0] + ... + x[j] by:
y = list(accumulate(x))
For example, if x = [1, 2, 3] then y = [1, 3, 6]
Now suppose we extend x with the list [4, 5, 6, 7], then we can get the desired y = [1, 3, 6, 10, 15, 21, 28] by calling accumulate again. We can also compute the new value of y incrementally by adding the last output from the previous computation (i.e., 6) to the accumulation of the extension, as shown next.
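A minimal sketch of that incremental step, using the incremental_accumulate defined above (we keep the state returned by the first call and apply the function only to the extension):
previous_output, state = incremental_accumulate([1, 2, 3], state=0)
# previous_output is [1, 3, 6] and state is 6
extension_output, state = incremental_accumulate([4, 5, 6, 7], state=state)
# extension_output is [10, 15, 21, 28]; the full y is previous_output + extension_output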
End of explanation
from itertools import accumulate
@fmap_l
def incremental_accumulate(the_list, state):
output_list = [v + state for v in list(accumulate(the_list))]
next_state = output_list[-1]
return output_list, next_state
def example():
x = Stream()
y = incremental_accumulate(x, state=0)
x.extend([10, 20, -30, 50, -40])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
x.extend([10, 20, -30])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Incremental computations from stream to stream
We can decorate the incremental computation from list to list to obtain a computation from stream to stream. This is illustrated in the next example.
End of explanation
from itertools import accumulate
@fmap_l
def total_exceeds_threshold(the_list, state, threshold):
output_list = [v + state for v in list(accumulate(the_list)) if v + state > threshold]
state += sum(the_list)
return output_list, state
def example():
x = Stream()
y = total_exceeds_threshold(x, state=0, threshold=25)
x.extend([10, 20, -30, 50, -40])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
x.extend([10, 20, -30])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Example with state and keyword argument
We want to output the elements of the accumulated stream that exceed a threshold. For example, if a stream x is [10, 20, -30, 50, -40] then the accumulation stream is [10, 30, 0, 50, 10] and the elements of the accumulated stream that exceed a threshold of 25 are [30, 50].
End of explanation
def example():
x = Stream()
y = positive(incremental_accumulate(x, state=0), threshold=25)
x.extend([10, 20, -30, 50, -40])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
x.extend([10, 20, -30])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Example: function composition
We can also solve the previous problem by composing the functions positive and incremental_accumulate.
End of explanation
from IoTPy.agent_types.basics import map_list
def total_exceeds_threshold(the_list, state, threshold):
output_list = [v + state for v in list(accumulate(the_list)) if v + state > threshold]
state += sum(the_list)
return output_list, state
def example():
x, y = Stream(), Stream()
map_list(func=total_exceeds_threshold, in_stream=x, out_stream=y, state=0, threshold=25)
x.extend([10, 20, -30, 50, -40])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
x.extend([10, 20, -30])
# Execute a step
run()
print ('recent values of stream y are')
print (recent_values(y))
example()
Explanation: Using map_list instead of the decorator fmap_l
The next example illustrates how map_list can be used with state and keyword arguments. It is the same as the previous example, except that it doesn't use decorators.
End of explanation |
12,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planning Algorithms
Do you remember that in lessons 2 and 3 we discussed algorithms that basically solve MDPs? That is, they find a policy given an exact representation of the environment. In this section, we will explore two such algorithms: value iteration and policy iteration.
Step1: Value Iteration
The Value Iteration algorithm uses dynamic programming by dividing the problem into common sub-problems and leveraging that optimal substructure to speed up computations.
Let me show you what value iteration looks like
Step2: As we can see, value iteration expects a set of states, e.g. (0,1,2,3,4), a set of actions, e.g. (0,1), and a set of transition probabilities that represent the dynamics of the environment. Let's take a look at these variables
Step3: You see, the world we are looking into, "FrozenLake-v0", has 16 different states and 4 different actions. P[10] is basically showing us a peek into the dynamics of the world. For example, in this case, if you are in state "10" (from P[10]) and you take action 0 (see dictionary key 0), you have an equal probability of 0.3333 of landing in state 6, 9 or 14. None of those transitions give you any reward and none of them is terminal.
In contrast, we can see that taking action 2 might transition you to state 11, which is terminal.
Get the hang of it? Let's run it!
Step4: Now, value iteration calculates two important things. First, it calculates V, which tells us how much should we expect from each state if we always act optimally. Second, it gives us pi, which is the optimal policy given V. Let's take a deeper look
Step5: See? This policy basically says in state 0, take action 0. In state 1 take action 3. In state 2 take action 0 and so on. Got it?
Now, we have the "directions" or this "map". With this, we can just use this policy and solve the environment as we interact with it.
Let's try it out!
Step6: That was the agent interacting with the environment. Let's take a look at some of the episodes
Step8: You can look at that link, or better, let's show it in the notebook
Step9: Interesting right? Did you get the world yet?
So, 'S' is the starting state and 'G' the goal. 'F' are frozen grid cells, and 'H' are holes. Your goal is to go from S to G without falling into any H. The problem is that F is slippery, so oftentimes you are better off trying moves that seem counter-intuitive. But because you are preventing falling into 'H's it makes sense in the end. For example, look at the second row, first column 'F': you can see how our agent was trying so hard to go left!! Smashing his head against the wall?? Silly. But why?
Step10: See how action 0 (left) doesn't have any transition leading to a terminal state??
All other actions give you a 0.333333 chance each of pushing you into the hole in state '5'!!! So it actually makes sense to go left until it slips you downward to state 8.
Cool right?
Step11: See how the "prescribed" action is 0 (left) on the policy calculated by value iteration?
How about the values?
Step12: These show the expected rewards on each state.
Step13: See how the state '15' gives you a reward of +1?? This signal gets propagated all the way back to the start state by Value Iteration, and it shows in the values all across the grid.
Cool? Good.
Step14: If you want to submit to OpenAI Gym, get your API Key and paste it here
Step16: Policy Iteration
There is another method called policy iteration. This method is composed of 2 other methods, policy evaluation and policy improvement. The logic goes that policy iteration is 'evaluating' a policy to check for convergence (meaning the policy doesn't change), and 'improving' the policy, which is applying something similar to a 1 step value iteration to get a slightly better policy, but definitely not worse.
These two functions cycling together are what policy iteration is about.
Can you implement this algorithm yourself? Try it. Make sure to look at the solution notebook in case you get stuck.
I will give you the policy evaluation and policy improvement methods; you build policy iteration by cycling between the evaluation and improvement methods until there are no changes to the policy.
Step17: After you implement the algorithms, you can run it and calculate the optimal policy
Step19: And, of course, interact with the environment looking at the "directions" or "policy"
Step20: Similar to before. Policies could be slightly different if there is a state in which more than one action gives the same value in the end.
Step21: That's it, let's wrap up.
Step22: If you want to submit to OpenAI Gym, get your API Key and paste it here | Python Code:
import numpy as np
import pandas as pd
import tempfile
import pprint
import json
import sys
import gym
from gym import wrappers
from subprocess import check_output
from IPython.display import HTML
Explanation: Planning Algorithms
Do you remember that in lessons 2 and 3 we discussed algorithms that basically solve MDPs? That is, they find a policy given an exact representation of the environment. In this section, we will explore two such algorithms: value iteration and policy iteration.
End of explanation
def value_iteration(S, A, P, gamma=.99, theta = 0.0000001):
V = np.random.random(len(S))
for i in range(100000):
old_V = V.copy()
Q = np.zeros((len(S), len(A)), dtype=float)
for s in S:
for a in A:
for prob, s_prime, reward, done in P[s][a]:
Q[s][a] += prob * (reward + gamma * old_V[s_prime] * (not done))
V[s] = Q[s].max()
if np.all(np.abs(old_V - V) < theta):
break
pi = np.argmax(Q, axis=1)
return pi, V
Explanation: Value Iteration
The Value Iteration algorithm uses dynamic programming by dividing the problem into common sub-problems and leveraging that optimal substructure to speed up computations.
Let me show you what value iteration looks like:
End of explanation
mdir = tempfile.mkdtemp()
env = gym.make('FrozenLake-v0')
env = wrappers.Monitor(env, mdir, force=True)
S = range(env.env.observation_space.n)
A = range(env.env.action_space.n)
P = env.env.env.P
S
A
P[10]
Explanation: As we can see, value iteration expects a set of states, e.g. (0,1,2,3,4), a set of actions, e.g. (0,1), and a set of transition probabilities that represent the dynamics of the environment. Let's take a look at these variables:
End of explanation
pi, V = value_iteration(S, A, P)
Explanation: You see, the world we are looking into, "FrozenLake-v0", has 16 different states and 4 different actions. P[10] is basically showing us a peek into the dynamics of the world. For example, in this case, if you are in state "10" (from P[10]) and you take action 0 (see dictionary key 0), you have an equal probability of 0.3333 of landing in state 6, 9 or 14. None of those transitions give you any reward and none of them is terminal.
In contrast, we can see that taking action 2 might transition you to state 11, which is terminal.
Get the hang of it? Let's run it!
End of explanation
V
pi
Explanation: Now, value iteration calculates two important things. First, it calculates V, which tells us how much we should expect from each state if we always act optimally. Second, it gives us pi, which is the optimal policy given V. Let's take a deeper look:
End of explanation
for _ in range(10000):
state = env.reset()
while True:
state, reward, done, info = env.step(pi[state])
if done:
break
Explanation: See? This policy basically says in state 0, take action 0. In state 1 take action 3. In state 2 take action 0 and so on. Got it?
Now, we have the "directions" or this "map". With this, we can just use this policy and solve the environment as we interact with it.
Let's try it out!
End of explanation
last_video = env.videos[-1][0]
out = check_output(["asciinema", "upload", last_video])
out = out.decode("utf-8").replace('\n', '').replace('\r', '')
print(out)
Explanation: That was the agent interacting with the environment. Let's take a look at some of the episodes:
End of explanation
castid = out.split('/')[-1]
html_tag =
<script type="text/javascript"
src="https://asciinema.org/a/{0}.js"
id="asciicast-{0}"
async data-autoplay="true" data-size="big">
</script>
html_tag = html_tag.format(castid)
HTML(data=html_tag)
Explanation: You can look at that link, or better, let's show it in the notebook:
End of explanation
P[4]
Explanation: Interesting right? Did you get the world yet?
So, 'S' is the starting state and 'G' the goal. 'F' are frozen grid cells, and 'H' are holes. Your goal is to go from S to G without falling into any H. The problem is that F is slippery, so oftentimes you are better off trying moves that seem counter-intuitive. But because you are preventing falling into 'H's it makes sense in the end. For example, look at the second row, first column 'F': you can see how our agent was trying so hard to go left!! Smashing his head against the wall?? Silly. But why?
End of explanation
pi
Explanation: See how action 0 (left) doesn't have any transition leading to a terminal state??
All other actions give you a 0.333333 chance each of pushing you into the hole in state '5'!!! So it actually makes sense to go left until it slips you downward to state 8.
Cool right?
End of explanation
V
Explanation: See how the "prescribed" action is 0 (left) on the policy calculated by value iteration?
How about the values?
End of explanation
P[15]
Explanation: These show the expected rewards on each state.
End of explanation
env.close()
Explanation: See how the state '15' gives you a reward of +1?? This signal gets propagated all the way back to the start state by Value Iteration, and it shows in the values all across the grid.
Cool? Good.
End of explanation
gym.upload(mdir, api_key='<YOUR OPENAI API KEY>')
Explanation: If you want to submit to OpenAI Gym, get your API Key and paste it here:
End of explanation
def policy_evaluation(pi, S, A, P, gamma=.99, theta=0.0000001):
V = np.zeros(len(S))
while True:
delta = 0
for s in S:
v = V[s]
V[s] = 0
for prob, dst, reward, done in P[s][pi[s]]:
V[s] += prob * (reward + gamma * V[dst] * (not done))
delta = max(delta, np.abs(v - V[s]))
if delta < theta:
break
return V
def policy_improvement(pi, V, S, A, P, gamma=.99):
for s in S:
old_a = pi[s]
Qs = np.zeros(len(A), dtype=float)
for a in A:
for prob, s_prime, reward, done in P[s][a]:
Qs[a] += prob * (reward + gamma * V[s_prime] * (not done))
pi[s] = np.argmax(Qs)
V[s] = np.max(Qs)
return pi, V
def policy_iteration(S, A, P, gamma=.99):
pi = np.random.choice(A, len(S))
YOU COMPLETE THIS METHOD
return pi
Explanation: Policy Iteration
There is another method called policy iteration. This method is composed of 2 other methods, policy evaluation and policy improvement. The logic goes that policy iteration is 'evaluating' a policy to check for convergence (meaning the policy doesn't change), and 'improving' the policy, which is applying something similar to a 1 step value iteration to get a slightly better policy, but definitely not worse.
These two functions cycling together are what policy iteration is about.
Can you implement this algorithm yourself? Try it. Make sure to look at the solution notebook in case you get stuck.
I will give you the policy evaluation and policy improvement methods; you build policy iteration by cycling between the evaluation and improvement methods until there are no changes to the policy.
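If you get stuck, one possible completion is sketched below (try it yourself first; the solution notebook may differ in the details). It simply cycles evaluation and improvement until the policy stops changing:
def policy_iteration(S, A, P, gamma=.99):
    # Start from an arbitrary policy.
    pi = np.random.choice(A, len(S))
    while True:
        # Evaluate the current policy, then greedily improve a copy of it.
        V = policy_evaluation(pi, S, A, P, gamma)
        new_pi, V = policy_improvement(pi.copy(), V, S, A, P, gamma)
        if np.all(new_pi == pi):
            break
        pi = new_pi
    return pi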
End of explanation
mdir = tempfile.mkdtemp()
env = gym.make('FrozenLake-v0')
env = wrappers.Monitor(env, mdir, force=True)
S = range(env.env.observation_space.n)
A = range(env.env.action_space.n)
P = env.env.env.P
pi = policy_iteration(S, A, P)
print(pi)
Explanation: After you implement the algorithms, you can run it and calculate the optimal policy:
End of explanation
for _ in range(10000):
state = env.reset()
while True:
state, reward, done, info = env.step(pi[state])
if done:
break
last_video = env.videos[-1][0]
out = check_output(["asciinema", "upload", last_video])
out = out.decode("utf-8").replace('\n', '').replace('\r', '')
print(out)
castid = out.split('/')[-1]
html_tag =
<script type="text/javascript"
src="https://asciinema.org/a/{0}.js"
id="asciicast-{0}"
async data-autoplay="true" data-size="big">
</script>
html_tag = html_tag.format(castid)
HTML(data=html_tag)
Explanation: And, of course, interact with the environment looking at the "directions" or "policy":
End of explanation
V
pi
Explanation: Similar to before. Policies could be slightly different if there is a state in which more than one action gives the same value in the end.
End of explanation
env.close()
Explanation: That's it, let's wrap up.
End of explanation
gym.upload(mdir, api_key='<YOUR OPENAI API KEY>')
Explanation: If you want to submit to OpenAI Gym, get your API Key and paste it here:
End of explanation |
12,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallelization
emcee supports parallelization out of the box. The algorithmic details are given in the paper but the implementation is very simple. The parallelization is applied across the walkers in the ensemble at each step and it must therefore be synchronized after each iteration. This means that you will really only benefit from this feature when your probability function is relatively expensive to compute.
The recommended method is to use IPython's parallel feature but it's possible to use other "mappers" like the Python standard library's multiprocessing.Pool. The only requirement of the mapper is that it exposes a map method.
Using multiprocessing
As mentioned above, it's possible to parallelize your model using the standard library's multiprocessing package. Instead, I would recommend the pools.InterruptiblePool that is included with emcee because it is a simple thin wrapper around multiprocessing.Pool with support for a keyboard interrupt (^C)... you'll thank me later! If we wanted to use this pool, the final few lines from the example on the front page would become the following
Step1: Using MPI
To distribute emcee3 across nodes on a cluster, you'll need to use MPI. This can be done with the MPIPool from schwimmbad. To use this, you'll need to install the dependency mpi4py. Otherwise, the code is almost the same as the multiprocessing example above – the main change is the definition of the pool | Python Code:
import emcee3
import numpy as np
def log_prob(x):
return -0.5 * np.sum(x ** 2)
ndim, nwalkers = 10, 100
with emcee3.pools.InterruptiblePool() as pool:
ensemble = emcee3.Ensemble(log_prob, np.random.randn(nwalkers, ndim), pool=pool)
sampler = emcee3.Sampler()
sampler.run(ensemble, 1000)
Explanation: Parallelization
emcee supports parallelization out of the box. The algorithmic details are given in the paper but the implementation is very simple. The parallelization is applied across the walkers in the ensemble at each step and it must therefore be synchronized after each iteration. This means that you will really only benefit from this feature when your probability function is relatively expensive to compute.
The recommended method is to use IPython's parallel feature but it's possible to use other "mappers" like the Python standard library's multiprocessing.Pool. The only requirement of the mapper is that it exposes a map method.
Using multiprocessing
As mentioned above, it's possible to parallelize your model using the standard library's multiprocessing package. Instead, I would recommend the pools.InterruptiblePool that is included with emcee because it is a simple thin wrapper around multiprocessing.Pool with support for a keyboard interrupt (^C)... you'll thank me later! If we wanted to use this pool, the final few lines from the example on the front page would become the following:
End of explanation
# Connect to the cluster.
from ipyparallel import Client
rc = Client()
dv = rc.direct_view()
# Run the imports on the cluster too.
with dv.sync_imports():
import emcee3
import numpy
# Define the model.
def log_prob(x):
return -0.5 * numpy.sum(x ** 2)
# Distribute the model to the nodes of the cluster.
dv.push(dict(log_prob=log_prob), block=True)
# Set up the ensemble with the IPython "DirectView" as the pool.
ndim, nwalkers = 10, 100
ensemble = emcee3.Ensemble(log_prob, numpy.random.randn(nwalkers, ndim), pool=dv)
# Run the sampler in the same way as usual.
sampler = emcee3.Sampler()
ensemble = sampler.run(ensemble, 1000)
Explanation: Using MPI
To distribute emcee3 across nodes on a cluster, you'll need to use MPI. This can be done with the MPIPool from schwimmbad. To use this, you'll need to install the dependency mpi4py. Otherwise, the code is almost the same as the multiprocessing example above – the main change is the definition of the pool:
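The code block itself appears to be missing from this copy; based on schwimmbad's MPIPool API and the example above, it presumably looked roughly like this sketch:
import sys
import numpy as np
import emcee3
from schwimmbad import MPIPool

def log_prob(x):
    return -0.5 * np.sum(x ** 2)

with MPIPool() as pool:
    # Worker processes wait for tasks from the master and then exit.
    if not pool.is_master():
        pool.wait()
        sys.exit(0)
    ndim, nwalkers = 10, 100
    ensemble = emcee3.Ensemble(log_prob, np.random.randn(nwalkers, ndim), pool=pool)
    sampler = emcee3.Sampler()
    sampler.run(ensemble, 1000)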
The if not pool.is_master() block is crucial otherwise the code will hang at the end of execution. To run this code, you would execute something like the following:
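The command is also missing from this copy; it would be an mpiexec/mpirun invocation along these lines, where the process count and the script name (script.py) are only illustrative:
mpiexec -n 4 python script.py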
Using ipyparallel
ipyparallel is a flexible and powerful framework for running distributed computation in Python. It works on a single machine with multiple cores in the same way as it does on a huge compute cluster and in both cases it is very efficient!
To use IPython parallel, make sure that you have a recent version of IPython installed (ipyparallel docs) and start up the cluster by running:
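The shell command is missing from this copy; it is presumably the ipcluster launcher (the engine count here is illustrative):
ipcluster start -n 4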
Then, run the following:
End of explanation |
12,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to pandas
by Maxwell Margenot
Part of the Quantopian Lecture Series
Step1: With pandas, it is easy to store, visualize, and perform calculations on your data. With only a few lines of code we can modify our data and present it in an easily-understandable way. Here we simulate some returns in NumPy, put them into a pandas DataFrame, and perform calculations to turn them into prices and plot them, all only using a few lines of code.
Step2: So let's have a look at how we actually build up to this point!
pandas Data Structures
Series
A pandas Series is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a Series is as easy as calling pandas.Series() on a Python list or NumPy array.
Step3: Every Series has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty.
Step4: This name can be directly modified with no repercussions.
Step5: We call the collected axis labels of a Series its index. An index can either be passed to a Series as a parameter or added later, similarly to its name. In the absence of an index, a Series will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series".
Step6: pandas has a built-in function specifically for creating date indices, date_range(). We use the function here to create a new index for s.
Step7: An index must be exactly the same length as the Series itself. Each index must match one-to-one with each element of the Series. Once this is satisfied, we can directly modify the Series index, as with the name, to use our new and more informative index (relatively speaking).
Step8: The index of the Series is crucial for handling time series, which we will get into a little later.
Accessing Series Elements
Series are typically accessed using the iloc[] and loc[] methods. We use iloc[] to access elements by integer index and we use loc[] to access the index of the Series.
Step9: We can slice a Series similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice.
Step10: When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size step until it passes the end index, not including the end.
Step11: We can even reverse a Series by specifying a negative step size. Similarly, we can index the start and end with a negative integer value.
Step12: This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking steps of size $1$).
Step13: We can also access a series by using the values of its index. Since we indexed s with a collection of dates (Timestamp objects) we can look at the value contained in s for a particular date.
Step14: Or even for a range of dates!
Step15: With Series, we can just use the brackets ([]) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access Series (and DataFrames) using both index and integer values and the results will change based on context (especially with DataFrames).
Boolean Indexing
In addition to the above-mentioned access methods, you can filter Series using boolean arrays. Series are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another Series, this time filled with boolean values.
Step16: We can pass this Series back into the original Series to filter out only the elements for which our condition is True.
Step17: If we so desire, we can group multiple conditions together using the logical operators &, |, and ~ (and, or, and not, respectively).
Step18: This is very convenient for getting only elements of a Series that fulfill specific criteria that we need. It gets even more convenient when we are handling DataFrames.
Indexing and Time Series
Since we use Series for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas Timestamp objects. Let's pull a full time series, complete with all the appropriate labels, by using our get_pricing() method. All data pulled with get_pricing() or using our Pipeline API will be in either Series or DataFrame format. We can modify this index however we like.
Step19: We can display the first few elements of our series by using the head() method and specifying the number of elements that we want. The analogous method for the last few elements is tail().
Step20: As with our toy example, we can specify a name for our time series, if only to clarify the name the get_pricing() provides us.
Step21: Let's take a closer look at the DatetimeIndex of our prices time series.
Step22: Notice that this DatetimeIndex has a collection of associated information. In particular it has an associated frequency (freq) and an associated timezone (tz). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!
If we resample our Series, we can adjust the frequency of our data. We currently have daily data (excluding weekends) because get_pricing() pulls only data from market days. Let's up-sample from this daily data to monthly data using the resample() method.
Step23: The resample() method defaults to using the mean of the lower level data to create the higher level data. We can specify how else we might want the up-sampling to be calculated by specifying the how parameter.
Step25: We can even specify how we want the calculation of the new period to be done. Here we create a custom_resampler() function that will return the first value of the period. In our specific case, this will return a Series where the monthly value is the first value of that month.
Step26: We can also adjust the timezone of a Series to adapt the time of real-world data. In our case, our time series is already localized to UTC, but let's say that we want to adjust the time to be 'US/Eastern'. In this case we use the tz_convert() method, since the time is already localized.
Step27: In addition to the capacity for timezone and frequency management, each time series has a built-in reindex() method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically np.nan, though we can provide a fill method.
The data that we get from get_pricing() only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new DatetimeIndex that contains all that we want.
Step28: Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is ffill. This denotes "forward fill". Any NaN values will be filled by the last value listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about.
Step29: You'll notice that we still have a couple of NaN values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these NaN values in the next section, when we deal with missing data.
Missing Data
Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes, and resampling or reindexing can create NaN values as well. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with fillna(). For example, say that we want to fill in the missing days with the mean price of all days.
Step30: Using fillna() is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could simply fill them with $0$, but that's similarly uninformative.
Rather than filling in specific values, we can use the method parameter, similarly to how the reindex() method works. We could use "backward fill", where NaNs are filled with the next filled value (instead of forward fill's last filled value) like so
Step31: But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account future data that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, is tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided.
Our next option is significantly more appealing. We could simply drop the missing data using the dropna() method. This is a much better alternative than filling NaN values in with arbitrary numbers.
Step32: Now our time series is cleaned for the calendar year, with all of our NaN values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures.
Time Series Analysis with pandas
Let's do some basic time series analysis on our original prices. Each pandas Series has a built-in plotting method.
Step33: As well as some built-in descriptive statistics. We can either calculate these individually or using the describe() method.
Step34: We can easily modify Series with scalars using our basic mathematical operators.
Step35: And we can create linear combinations of Series themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new Series.
Step36: If there are no matching indices, however, we may get an empty Series in return.
Step37: Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods.
Step38: pandas has convenient functions for calculating rolling means and standard deviations, as well!
Step39: Many NumPy functions will work on Series the same way that they work on 1-dimensional NumPy arrays.
Step40: The majority of these functions, however, are already implemented directly as Series and DataFrame methods.
Step41: In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the Series documentation before resorting to other calculations of common functions.
DataFrames
Many of the aspects of working with Series carry over into DataFrames. pandas DataFrames allow us to easily manage our data with their intuitive structure.
Like Series, DataFrames can hold multiple types of data, but DataFrames are 2-dimensional objects, unlike Series. Each DataFrame has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a Series, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the pandas documentation on advanced indexing. The columns attribute is what provides the second dimension of our DataFrames, allowing us to combine named columns (all Series), into a cohesive object with the index lined-up.
We can create a DataFrame by calling pandas.DataFrame() on a dictionary or NumPy ndarray. We can also concatenate a group of pandas Series into a DataFrame using pandas.concat().
Step42: Each DataFrame has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of Timestamp objects like we did with Series.
Step43: As mentioned above, we can combine Series into DataFrames. Concatenating Series like this will match elements up based on their corresponding index. As the following Series do not have an index assigned, they each default to an integer index.
Step44: We will use pandas.concat() again later to combine multiple DataFrames into one.
Each DataFrame also has a columns attribute. These can either be assigned when we call pandas.DataFrame or they can be modified directly like the index. Note that when we concatenated the two Series above, the column names were the names of those Series.
Step45: To modify the columns after object creation, we need only do the following
Step46: In the same vein, the index of a DataFrame can be changed after the fact.
Step47: Separate from the columns and index of a DataFrame, we can also directly access the values they contain by looking at the values attribute.
Step48: This returns a NumPy array.
Step49: Accessing DataFrame elements
Again we see a lot of carryover from Series in how we access the elements of DataFrames. The key sticking point here is that everything has to take into account multiple dimensions now. The main way that this happens is through the access of the columns of a DataFrame, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we already are familiar with.
Step50: Here we directly access the CMG column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it.
Step51: We can also use loc[] to access an individual column like so.
Step52: Accessing an individual column will return a Series, regardless of how we get it.
Step53: Notice how we pass a tuple into the loc[] method? This is a key difference between accessing a Series and accessing a DataFrame, grounded in the fact that a DataFrame has multiple dimensions. When you pass a 2-dimensional tuple into a DataFrame, the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the DataFrame to return every single row of the column with label 'CMG'. Lists of columns are also supported.
Step54: We can also simply access the DataFrame by index value using loc[], as with Series.
Step55: This plays nicely with lists of columns, too.
Step56: Using iloc[] also works similarly, allowing you to access parts of the DataFrame by integer index.
Step57: Boolean indexing
As with Series, sometimes we want to filter a DataFrame according to a set of criteria. We do this by indexing our DataFrame with boolean values.
Step58: We can add multiple boolean conditions by using the logical operators &, |, and ~ (and, or, and not, respectively) again!
Step59: Adding, Removing Columns, Combining DataFrames/Series
It is all well and good when you already have a DataFrame filled with data, but it is also important to be able to add to the data that you have.
We add a new column simply by assigning data to a column that does not already exist. Here we use the .loc[
Step60: It is also just as easy to remove a column.
Step61: If we instead want to combine multiple DataFrames into one, we use the pandas.concat() method.
Step62: Missing data (again)
Bringing real-life data into a DataFrame brings us the same problems that we had with it in a Series, only this time in more dimensions. We have access to the same methods as with Series, as demonstrated below.
Step63: But again, the best choice in this case (since we are still using time series data, handling multiple time series at once) is still to simply drop the missing values.
Step64: Time Series Analysis with pandas
Using the built-in statistics methods for DataFrames, we can perform calculations on multiple time series at once! The code to perform calculations on DataFrames here is almost exactly the same as the methods used for Series above, so don't worry about re-learning everything.
The plot() method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting.
Step65: The same statistical functions from our interactions with Series resurface here with the addition of the axis parameter. By specifying the axis, we tell pandas to calculate the desired function along either the rows (axis=0) or the columns (axis=1). We can easily calculate the mean of each columns like so
Step66: As well as the standard deviation
Step67: Again, the describe() function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually.
Step68: We can scale and add scalars to our DataFrame, as you might suspect after dealing with Series. This again works element-wise.
Step69: Here we use the pct_change() method to get a DataFrame of the multiplicative returns of the securities that we are looking at.
Step70: If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale.
Step71: This makes it easier to compare the motion of the different time series contained in our example.
Rolling means and standard deviations also work with DataFrames. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Introduction to pandas
by Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
pandas is a Python library that provides a collection of powerful data structures to better help you manage data. In this lecture, we will cover how to use the Series and DataFrame objects to handle data. These objects have a strong integration with NumPy, covered elsewhere in the lecture series, allowing us to easily do the necessary statistical and mathematical calculations that we need for finance.
End of explanation
returns = pd.DataFrame(np.random.normal(1.0, 0.03, (100, 10)))
prices = returns.cumprod()
prices.plot()
plt.title('Randomly-generated Prices')
plt.xlabel('Time')
plt.ylabel('Price')
plt.legend(loc=0);
Explanation: With pandas, it is easy to store, visualize, and perform calculations on your data. With only a few lines of code we can modify our data and present it in an easily-understandable way. Here we simulate some returns in NumPy, put them into a pandas DataFrame, and perform calculations to turn them into prices and plot them, all only using a few lines of code.
End of explanation
s = pd.Series([1, 2, np.nan, 4, 5])
print s
Explanation: So let's have a look at how we actually build up to this point!
pandas Data Structures
Series
A pandas Series is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a Series is as easy as calling pandas.Series() on a Python list or NumPy array.
End of explanation
print s.name
Explanation: Every Series has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty.
End of explanation
s.name = "Toy Series"
print s.name
Explanation: This name can be directly modified with no repercussions.
End of explanation
print s.index
Explanation: We call the collected axis labels of a Series its index. An index can either be passed to a Series as a parameter or added later, similarly to its name. In the absence of an index, a Series will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series".
End of explanation
new_index = pd.date_range("2016-01-01", periods=len(s), freq="D")
print new_index
Explanation: pandas has a built-in function specifically for creating date indices, date_range(). We use the function here to create a new index for s.
End of explanation
s.index = new_index
print s.index
Explanation: An index must be exactly the same length as the Series itself. Each index must match one-to-one with each element of the Series. Once this is satisfied, we can directly modify the Series index, as with the name, to use our new and more informative index (relatively speaking).
End of explanation
print "First element of the series: ", s.iloc[0]
print "Last element of the series: ", s.iloc[len(s)-1]
Explanation: The index of the Series is crucial for handling time series, which we will get into a little later.
Accessing Series Elements
Series are typically accessed using the iloc[] and loc[] methods. We use iloc[] to access elements by integer index and we use loc[] to access the index of the Series.
End of explanation
s.iloc[:2]
Explanation: We can slice a Series similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice.
End of explanation
start = 0
end = len(s) - 1
step = 1
s.iloc[start:end:step]
Explanation: When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size step until it passes the end index, not including the end.
End of explanation
s.iloc[::-1]
Explanation: We can even reverse a Series by specifying a negative step size. Similarly, we can index the start and end with a negative integer value.
End of explanation
s.iloc[-2:-4:-1]
Explanation: This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking steps of size $1$).
End of explanation
s.loc['2016-01-01']
Explanation: We can also access a series by using the values of its index. Since we indexed s with a collection of dates (Timestamp objects) we can look at the value contained in s for a particular date.
End of explanation
s.loc['2016-01-02':'2016-01-04']
Explanation: Or even for a range of dates!
End of explanation
print s < 3
Explanation: With Series, we can just use the brackets ([]) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access Series (and DataFrames) using both index and integer values and the results will change based on context (especially with DataFrames).
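For example, the explicit accessors make the intent unambiguous (a small illustrative snippet using the s defined above):
s.loc['2016-01-01']   # label-based access
s.iloc[0]             # position-based access
s['2016-01-01']       # works, but ambiguous -- prefer loc/iloc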
Boolean Indexing
In addition to the above-mentioned access methods, you can filter Series using boolean arrays. Series are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another Series, this time filled with boolean values.
End of explanation
print s.loc[s < 3]
Explanation: We can pass this Series back into the original Series to filter out only the elements for which our condition is True.
End of explanation
print s.loc[(s < 3) & (s > 1)]
Explanation: If we so desire, we can group multiple conditions together using the logical operators &, |, and ~ (and, or, and not, respectively).
End of explanation
symbol = "CMG"
start = "2012-01-01"
end = "2016-01-01"
prices = get_pricing(symbol, start_date=start, end_date=end, fields="price")
Explanation: This is very convenient for getting only elements of a Series that fulfill specific criteria that we need. It gets even more convenient when we are handling DataFrames.
Indexing and Time Series
Since we use Series for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas Timestamp objects. Let's pull a full time series, complete with all the appropriate labels, by using our get_pricing() method. All data pulled with get_pricing() or using our Pipeline API will be in either Series or DataFrame format. We can modify this index however we like.
End of explanation
print "\n", type(prices)
prices.head(5)
Explanation: We can display the first few elements of our series by using the head() method and specifying the number of elements that we want. The analogous method for the last few elements is tail().
End of explanation
print 'Old name: ', prices.name
prices.name = symbol
print 'New name: ', prices.name
Explanation: As with our toy example, we can specify a name for our time series, if only to clarify the name the get_pricing() provides us.
End of explanation
print prices.index
Explanation: Let's take a closer look at the DatetimeIndex of our prices time series.
End of explanation
monthly_prices = prices.resample('M')
monthly_prices.head(10)
Explanation: Notice that this DatetimeIndex has a collection of associated information. In particular it has an associated frequency (freq) and an associated timezone (tz). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!
If we resample our Series, we can adjust the frequency of our data. We currently have daily data (excluding weekends) because get_pricing() pulls only data from market days. Let's up-sample from this daily data to monthly data using the resample() method.
End of explanation
monthly_prices_med = prices.resample('M', how='median')
monthly_prices_med.head(10)
Explanation: The resample() method defaults to using the mean of the lower level data to create the higher level data. We can specify how else we might want the up-sampling to be calculated by specifying the how parameter.
End of explanation
def custom_resampler(array_like):
Returns the first value of the period
return array_like[0]
first_of_month_prices = prices.resample('M', how=custom_resampler)
first_of_month_prices.head(10)
Explanation: We can even specify how we want the calculation of the new period to be done. Here we create a custom_resampler() function that will return the first value of the period. In our specific case, this will return a Series where the monthly value is the first value of that month.
End of explanation
eastern_prices = prices.tz_convert('US/Eastern')
eastern_prices.head(10)
Explanation: We can also adjust the timezone of a Series to adapt the time of real-world data. In our case, our time series is already localized to UTC, but let's say that we want to adjust the time to be 'US/Eastern'. In this case we use the tz_convert() method, since the time is already localized.
End of explanation
calendar_dates = pd.date_range(start=start, end=end, freq='D', tz='UTC')
print calendar_dates
Explanation: In addition to the capacity for timezone and frequency management, each time series has a built-in reindex() method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically np.nan, though we can provide a fill method.
The data that we get from get_pricing() only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new DatetimeIndex that contains all that we want.
End of explanation
calendar_prices = prices.reindex(calendar_dates, method='ffill')
calendar_prices.head(15)
Explanation: Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is ffill. This denotes "forward fill". Any NaN values will be filled by the last value listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about.
End of explanation
meanfilled_prices = calendar_prices.fillna(calendar_prices.mean())
meanfilled_prices.head(10)
Explanation: You'll notice that we still have a couple of NaN values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these NaN values in the next section, when we deal with missing data.
Missing Data
Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes, and resampling or reindexing can create NaN values as well. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with fillna(). For example, say that we want to fill in the missing days with the mean price of all days.
End of explanation
bfilled_prices = calendar_prices.fillna(method='bfill')
bfilled_prices.head(10)
Explanation: Using fillna() is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could simply fill them with $0$, but that's similarly uninformative.
Rather than filling in specific values, we can use the method parameter, similarly to how the reindex() method works. We could use "backward fill", where NaNs are filled with the next filled value (instead of forward fill's last filled value) like so:
End of explanation
dropped_prices = calendar_prices.dropna()
dropped_prices.head(10)
Explanation: But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account future data that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, is tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided.
Our next option is significantly more appealing. We could simply drop the missing data using the dropna() method. This is a much better alternative than filling NaN values in with arbitrary numbers.
End of explanation
prices.plot();
# We still need to add the axis labels and title ourselves
plt.title(symbol + " Prices")
plt.ylabel("Price")
plt.xlabel("Date");
Explanation: Now our time series is cleaned for the calendar year, with all of our NaN values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures.
Time Series Analysis with pandas
Let's do some basic time series analysis on our original prices. Each pandas Series has a built-in plotting method.
End of explanation
print "Mean: ", prices.mean()
print "Standard deviation: ", prices.std()
print "Summary Statistics"
print prices.describe()
Explanation: As well as some built-in descriptive statistics. We can either calculate these individually or using the describe() method.
End of explanation
modified_prices = prices * 2 - 10
modified_prices.head(5)
Explanation: We can easily modify Series with scalars using our basic mathematical operators.
End of explanation
noisy_prices = prices + 5 * pd.Series(np.random.normal(0, 5, len(prices)), index=prices.index) + 20
noisy_prices.head(5)
Explanation: And we can create linear combinations of Series themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new Series.
End of explanation
empty_series = prices + pd.Series(np.random.normal(0, 1, len(prices)))
empty_series.head(5)
Explanation: If there are no matching indices, however, we may get an empty Series in return.
End of explanation
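If you really do want to combine a plain array of random values with a Series positionally, one hedged workaround (my own, not from the original lecture) is to add the raw NumPy values instead, which sidesteps index alignment entirely:
# Adding an ndarray of the same length is a positional, element-wise addition.
(prices + np.random.normal(0, 1, len(prices))).head(5)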
add_returns = prices.diff()[1:]
mult_returns = prices.pct_change()[1:]
plt.title("Multiplicative returns of " + symbol)
plt.xlabel("Date")
plt.ylabel("Percent Returns")
mult_returns.plot();
Explanation: Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods.
End of explanation
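As a quick consistency check (my addition), the additive and multiplicative returns are related through the prior price level, so dividing the price differences by the shifted prices should reproduce the percent-change series:
# diff() divided by the previous price equals pct_change().
(add_returns / prices.shift(1)[1:]).head()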
rolling_mean = pd.rolling_mean(prices, 30)
rolling_mean.name = "30-day rolling mean"
prices.plot()
rolling_mean.plot()
plt.title(symbol + " Price")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
rolling_std = pd.rolling_std(prices, 30)
rolling_std.name = "30-day rolling volatility"
rolling_std.plot()
plt.title(rolling_std.name);
plt.xlabel("Date")
plt.ylabel("Standard Deviation");
Explanation: pandas has convenient functions for calculating rolling means and standard deviations, as well!
End of explanation
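A hedged side note, not part of the original lecture: pd.rolling_mean and pd.rolling_std were deprecated in later pandas releases in favour of the rolling() method, which looks like this:
# Method-based spelling used by newer pandas versions (same 30-day window).
rolling_mean_alt = prices.rolling(window=30).mean()
rolling_std_alt = prices.rolling(window=30).std()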
print np.median(mult_returns)
Explanation: Many NumPy functions will work on Series the same way that they work on 1-dimensional NumPy arrays.
End of explanation
print mult_returns.median()
Explanation: The majority of these functions, however, are already implemented directly as Series and DataFrame methods.
End of explanation
dict_data = {
'a' : [1, 2, 3, 4, 5],
'b' : ['L', 'K', 'J', 'M', 'Z'],
'c' : np.random.normal(0, 1, 5)
}
print dict_data
Explanation: In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the Series documentation before writing your own versions of common calculations.
DataFrames
Many of the aspects of working with Series carry over into DataFrames. pandas DataFrames allow us to easily manage our data with their intuitive structure.
Like Series, DataFrames can hold multiple types of data, but DataFrames are 2-dimensional objects, unlike Series. Each DataFrame has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a Series, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the pandas documentation on advanced indexing. The columns attribute is what provides the second dimension of our DataFrames, allowing us to combine named columns (all Series) into a cohesive object with the index lined up.
We can create a DataFrame by calling pandas.DataFrame() on a dictionary or NumPy ndarray. We can also concatenate a group of pandas Series into a DataFrame using pandas.concat().
End of explanation
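The surrounding cells use the dictionary route; as a short sketch of the NumPy ndarray route also mentioned above (the column names here are mine):
# Building a DataFrame directly from a 2-dimensional ndarray.
array_data = np.random.normal(0, 1, (5, 2))
frame_from_array = pd.DataFrame(array_data, columns=['first', 'second'])
print(frame_from_array)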
frame_data = pd.DataFrame(dict_data, index=pd.date_range('2016-01-01', periods=5))
print frame_data
Explanation: Each DataFrame has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of Timestamp objects like we did with Series.
End of explanation
s_1 = pd.Series([2, 4, 6, 8, 10], name='Evens')
s_2 = pd.Series([1, 3, 5, 7, 9], name="Odds")
numbers = pd.concat([s_1, s_2], axis=1)
print numbers
Explanation: As mentioned above, we can combine Series into DataFrames. Concatenating Series like this will match elements up based on their corresponding index. As the following Series do not have an index assigned, they each default to an integer index.
End of explanation
print numbers.columns
Explanation: We will use pandas.concat() again later to combine multiple DataFrames into one.
Each DataFrame also has a columns attribute. These can either be assigned when we call pandas.DataFrame or they can be modified directly like the index. Note that when we concatenated the two Series above, the column names were the names of those Series.
End of explanation
numbers.columns = ['Shmevens', 'Shmodds']
print numbers
Explanation: To modify the columns after object creation, we need only do the following:
End of explanation
print numbers.index
numbers.index = pd.date_range("2016-01-01", periods=len(numbers))
print numbers
Explanation: In the same vein, the index of a DataFrame can be changed after the fact.
End of explanation
numbers.values
Explanation: Separate from the columns and index of a DataFrame, we can also directly access the values they contain by looking at the values attribute.
End of explanation
type(numbers.values)
Explanation: This returns a NumPy array.
End of explanation
symbol = ["CMG", "MCD", "SHAK", "WFM"]
start = "2012-01-01"
end = "2016-01-01"
prices = get_pricing(symbol, start_date=start, end_date=end, fields="price")
if isinstance(symbol, list):
prices.columns = map(lambda x: x.symbol, prices.columns)
else:
prices.name = symbol
Explanation: Accessing DataFrame elements
Again we see a lot of carryover from Series in how we access the elements of DataFrames. The key sticking point here is that everything has to take into account multiple dimensions now. The main way that this happens is through the access of the columns of a DataFrame, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we already are familiar with.
End of explanation
prices.CMG.head()
Explanation: Here we directly access the CMG column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it.
End of explanation
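A hedged aside (my addition): bracket-style access works even when a column name contains spaces or other characters that break attribute access:
# Equivalent to prices.CMG, but safe for awkward column names.
prices['CMG'].head()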
prices.loc[:, 'CMG'].head()
Explanation: We can also use loc[] to access an individual column like so.
End of explanation
print type(prices.CMG)
print type(prices.loc[:, 'CMG'])
Explanation: Accessing an individual column will return a Series, regardless of how we get it.
End of explanation
prices.loc[:, ['CMG', 'MCD']].head()
Explanation: Notice how we pass a tuple into the loc[] method? This is a key difference between accessing a Series and accessing a DataFrame, grounded in the fact that a DataFrame has multiple dimensions. When you pass a 2-element tuple into loc[], the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the DataFrame to return every single row of the column with label 'CMG'. Lists of columns are also supported.
End of explanation
prices.loc['2015-12-15':'2015-12-22']
Explanation: We can also simply access the DataFrame by index value using loc[], as with Series.
End of explanation
prices.loc['2015-12-15':'2015-12-22', ['CMG', 'MCD']]
Explanation: This plays nicely with lists of columns, too.
End of explanation
prices.iloc[0:2, 1]
# Access prices with integer index in
# [1, 3, 5, 7, 9, 11, 13, ..., 99]
# and in column 0 or 3
prices.iloc[[1, 3, 5] + range(7, 100, 2), [0, 3]].head(20)
Explanation: Using iloc[] also works similarly, allowing you to access parts of the DataFrame by integer index.
End of explanation
prices.loc[prices.MCD > prices.WFM].head()
Explanation: Boolean indexing
As with Series, sometimes we want to filter a DataFrame according to a set of criteria. We do this by indexing our DataFrame with boolean values.
End of explanation
prices.loc[(prices.MCD > prices.WFM) & ~prices.SHAK.isnull()].head()
Explanation: We can add multiple boolean conditions by using the logical operators &, |, and ~ (and, or, and not, respectively) again!
End of explanation
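One small extension of the idea above (mine, not from the original): the boolean masks can be built separately, named, and then combined, which keeps longer filters readable:
# Named masks make compound filters easier to read and reuse.
mcd_above_wfm = prices.MCD > prices.WFM
shak_has_data = ~prices.SHAK.isnull()
prices.loc[mcd_above_wfm & shak_has_data].head()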
s_1 = get_pricing('TSLA', start_date=start, end_date=end, fields='price')
prices.loc[:, 'TSLA'] = s_1
prices.head(5)
Explanation: Adding, Removing Columns, Combining DataFrames/Series
It is all well and good when you already have a DataFrame filled with data, but it is also important to be able to add to the data that you have.
We add a new column simply by assigning data to a column that does not already exist. Here we use the .loc[:, 'COL_NAME'] notation and store the output of get_pricing() (which returns a pandas Series if we only pass one security) there. This is the method that we would use to add a Series to an existing DataFrame.
End of explanation
prices = prices.drop('TSLA', axis=1)
prices.head(5)
Explanation: It is also just as easy to remove a column.
End of explanation
df_1 = get_pricing(['SPY', 'VXX'], start_date=start, end_date=end, fields='price')
df_2 = get_pricing(['MSFT', 'AAPL', 'GOOG'], start_date=start, end_date=end, fields='price')
df_3 = pd.concat([df_1, df_2], axis=1)
df_3.head()
Explanation: If we instead want to combine multiple DataFrames into one, we use the pandas.concat() method.
End of explanation
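For contrast, a brief sketch (my own) of concatenating along the other axis, which stacks rows instead of adding columns:
# axis=0 stacks the frames vertically; the rows are duplicated here purely for illustration.
stacked = pd.concat([df_1, df_1], axis=0)
stacked.head()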
filled0_prices = prices.fillna(0)
filled0_prices.head(5)
bfilled_prices = prices.fillna(method='bfill')
bfilled_prices.head(5)
Explanation: Missing data (again)
Bringing real-life data into a DataFrame brings us the same problems that we had with a Series, only this time in more dimensions. We have access to the same methods as with Series, as demonstrated below.
End of explanation
dropped_prices = prices.dropna()
dropped_prices.head(5)
Explanation: But again, the best choice in this case (since we are still using time series data, handling multiple time series at once) is still to simply drop the missing values.
End of explanation
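If dropping every row with any gap is too aggressive, dropna() takes parameters that soften the rule; a hedged sketch:
# Keep a row as long as at least one security has a price that day.
partially_dropped = prices.dropna(how='all')
partially_dropped.head(5)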
prices.plot()
plt.title("Collected Stock Prices")
plt.ylabel("Price")
plt.xlabel("Date");
Explanation: Time Series Analysis with pandas
Using the built-in statistics methods for DataFrames, we can perform calculations on multiple time series at once! The code to perform calculations on DataFrames here is almost exactly the same as the methods used for Series above, so don't worry about re-learning everything.
The plot() method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting.
End of explanation
prices.mean(axis=0)
Explanation: The same statistical functions from our interactions with Series resurface here with the addition of the axis parameter. By specifying the axis, we tell pandas to calculate the desired statistic along the rows, giving one value per column (axis=0), or along the columns, giving one value per row (axis=1). We can easily calculate the mean of each column like so:
End of explanation
prices.std(axis=0)
Explanation: As well as the standard deviation:
End of explanation
prices.describe()
Explanation: Again, the describe() function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually.
End of explanation
(2 * prices - 50).head(5)
Explanation: We can scale and add scalars to our DataFrame, as you might suspect after dealing with Series. This again works element-wise.
End of explanation
mult_returns = prices.pct_change()[1:]
mult_returns.head()
Explanation: Here we use the pct_change() method to get a DataFrame of the multiplicative returns of the securities that we are looking at.
End of explanation
norm_returns = (mult_returns - mult_returns.mean(axis=0))/mult_returns.std(axis=0)
norm_returns.loc['2014-01-01':'2015-01-01'].plot();
Explanation: If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale.
End of explanation
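A quick sanity check on the standardization above (my addition): each column of the standardized returns should now have a mean near zero and a standard deviation near one:
# Column-wise mean ~0 and standard deviation ~1 after standardizing.
print(norm_returns.mean(axis=0))
print(norm_returns.std(axis=0))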
rolling_mean = pd.rolling_mean(prices, 30)
rolling_mean.columns = prices.columns
rolling_mean.plot()
plt.title("Rolling Mean of Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
Explanation: This makes it easier to compare the motion of the different time series contained in our example.
Rolling means and standard deviations also work with DataFrames.
End of explanation |
12,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython & D3
Let's start with a few techniques for working with data in ipython and then build a d3 network graph.
Step1: JS with IPython?
The nice thing about IPython is that we can write in almost any language. For example, we can use javascript below and pull in the D3 library.
Step2: Python data | D3 Viz
A basic method is to serialize your results and then render html that pulls in the data. In this example, we save a json file and then load the html doc in an IFrame. We're now using D3 in ipython!
The example below is adapted from
Step3: Passing data from IPython to JS
Let's create some random numbers and render them in js (see the stackoverflow explanation and discussion).
Step6: Passing data from JS to IPython
We can also interact with js to define python variables (see this example).
Step7: Click "Set Value" then run the cell below.
Step8: Custom D3 module.
Now we're having fun. The simplicity of this process wins. We can pass data to javascript via a module called visualize that contains an attribute plot_circle, which uses jinja to render our js template. The advantage of using jinja to read our html is apparent | Python Code:
# import requirements
from IPython.display import Image
from IPython.display import display
from IPython.display import HTML
from datetime import *
import json
from copy import *
from pprint import *
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
from ggplot import *
import networkx as nx
from networkx.readwrite import json_graph
#from __future__ import http_server
from BaseHTTPServer import BaseHTTPRequestHandler
from IPython.display import IFrame
import rpy2
%load_ext rpy2.ipython
%R require("ggplot2")
% matplotlib inline
randn = np.random.randn
Explanation: IPython & D3
Let's start with a few techniques for working with data in ipython and then build a d3 network graph.
End of explanation
%%javascript
require.config({
paths: {
//d3: "http://d3js.org/d3.v3.min" //<-- url
d3: 'd3/d3.min.js' //<-- local path
}
});
Explanation: JS with IPython?
The nice thing about IPython is that we can write in almost any language. For example, we can use javascript below and pull in the D3 library.
End of explanation
import json
import networkx as nx
from networkx.readwrite import json_graph
from IPython.display import IFrame
G = nx.barbell_graph(6,3)
# this d3 example uses the name attribute for the mouse-hover value,
# so add a name to each node
for n in G:
G.node[n]['name'] = n
# write json formatted data
d = json_graph.node_link_data(G) # node-link format to serialize
# write json
json.dump(d, open('force/force.json','w'))
# render html inline
IFrame('force/force.html', width=700, height=350)
#print('Or copy all files in force/ to webserver and load force/force.html')
Explanation: Python data | D3 Viz
A basic method is to serialize your results and then render html that pulls in the data. In this example, we save a json file and then load the html doc in an IFrame. We're now using D3 in ipython!
The example below is adapted from:
* Hagberg, A & Schult, D. & Swart, P. Networkx (2011). Github repository, https://github.com/networkx/networkx/tree/master/examples/javascript/force
End of explanation
from IPython.display import Javascript
import numpy as np
mu, sig = 0.05, 0.2
rnd = np.random.normal(loc=mu, scale=sig, size=4)
## Use the variable rnd above in Javascript:
javascript = 'element.append("{}");'.format(str(rnd))
Javascript(javascript)
Explanation: Passing data from IPython to JS
Let's create some random numbers and render them in js (see the stackoverflow explanation and discussion).
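One hedged variation (not from the original notebook): json.dumps is a convenient way to hand richer Python structures to the javascript side in one shot:
# Serialize a small dict and log it from javascript; json and Javascript are imported above.
payload = json.dumps({'mu': mu, 'sig': sig, 'samples': [float(v) for v in rnd]})
Javascript('console.log({});'.format(payload))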
End of explanation
from IPython.display import HTML
input_form = """
<div style="background-color:gainsboro; border:solid black; width:300px; padding:20px;">
Name: <input type="text" id="var_name" value="foo"><br>
Value: <input type="text" id="var_value" value="bar"><br>
<button onclick="set_value()">Set Value</button>
</div>
"""
javascript = """
<script type="text/Javascript">
function set_value(){
var var_name = document.getElementById('var_name').value;
var var_value = document.getElementById('var_value').value;
var command = var_name + " = '" + var_value + "'";
console.log("Executing Command: " + command);
var kernel = IPython.notebook.kernel;
kernel.execute(command);
}
</script>
"""
HTML(input_form + javascript)
Explanation: Passing data from JS to IPython
We can also interact with js to define python variables (see this example).
End of explanation
print foo
Explanation: Click "Set Value" then run the cell below.
End of explanation
from pythonD3 import visualize
data = [{'x': 10, 'y': 20, 'r': 15, 'name': 'circle one'},
{'x': 40, 'y': 40, 'r': 5, 'name': 'circle two'},
{'x': 20, 'y': 30, 'r': 8, 'name': 'circle three'},
{'x': 25, 'y': 10, 'r': 10, 'name': 'circle four'}]
visualize.plot_circle(data, id=2)
visualize.plot_chords(id=5)
Explanation: Custom D3 module.
Now we're having fun. The simplicity of this process wins. We can pass data to javascript via a module called visualize that contains an attribute plot_circle, which uses jinja to render our js template. The advantage of using jinja to read our html is apparent: we can pass variables directly from python!
End of explanation |
12,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example demonstrated that Flexx apps can be run interactively in the notebook.
Step1: Any widget can be shown by using it as a cell output
Step2: Because apps are really just Widgets, we can show our app in the same way
Step3: And we can interact with it, e.g. change input signals, and react to signal changes | Python Code:
from flexx import app, ui, react
app.init_notebook()
# A bit of boilerplate to import an example app
import sys
#sys.path.insert(0, r'C:\Users\almar\dev\flexx\examples\ui')
sys.path.insert(0, '/home/almar/dev/pylib/flexx/examples/ui')
from twente_temperature import Twente
Explanation: This example demonstrated that Flexx apps can be run interactively in the notebook.
End of explanation
ui.Button(text='push me')
Explanation: Any widget can be shown by using it as a cell output:
End of explanation
t = Twente()
t
Explanation: Because apps are really just Widgets, we can show our app in the same way:
End of explanation
t.plot.line_width(10)
colors = ['#a00', '#0a0', '#00a', '#990', '#909', '#0990'] * 2 + ['#000']
@react.connect('t.month.value')
def _update_line_color(v):
t.plot.line_color(colors[int(v)])
t.plot.marker_color(colors[int(v)])
Explanation: And we can interact with it, e.g. change input signals, and react to signal changes:
End of explanation |
12,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBPS Tutorial
1. Initialization
The first thing to do is obviously to import the pybps package.
At the same time, we also import other useful packages.
Step1: Once the pybps is imported into python, the first thing to do is to create an instance of the BPSProject class to get all the information required to run a particular simulation project from the project directory and hold it in a series of instance variables. This way, the project information will be easily retrieved when the different functions that manage simulation pre/post-processing and running are called.
The simplest and quickest way to get ready is to instanciate the BPSProject class with the path and batch arguments defined. path is the path (relative or absolute) to the directory holding the simulation files for a particular project. batch is a flag which sets whether the simulation project corresponds to a single run (batch = False, which is the default value) or to a batch run (batch = True).
In the present tutorial, we will use the very simple Begin example TRNSYS project that can be found in the Examples directory of any TRNSYS installation. We just made some modifications to the output (using Type46) and added parameters in an external parameters.txt file for the batch run. This example project can be found in the Examples/TRNSYS folder found in the PyBPS package. Note that as of today, PyBPS has only been tested with TRNSYS simulation projects, altought its functionnalities could easily be used with any other text-file-based simulation tool.
Step2: Another way to create an instance of the BPSProject class is to call it without any arguments and then use the path and batch methods to set both variables. In the case of the batch method, calling it sets batch to True (since by default it is set to False, which corresponds to a single simulation run).
Step3: Once we have got our bps object created, we can check the simulation project info obtained from the project folder and stored in the object's attributes. Behind the scenes, the BPSProject class uses two hidden methods to detect the simulation tool to be used (based on file extensions found in given directory) and to get the info required to run single or batch runs. The basic info needed to run any tool is contained in the config.ini file is the base folder of the pybps package.
If the project is of the "single run" type, the following instance variables hold the basic info needed to actually run the simulation
Step4: If the simulation project happens to be a batch run, an additional set of instance variables is created to hold information about the template and parameter files, as well as the list of jobs to be run. In pybps, template files are simulation files containing parameters to be replaced prior to running simulation. Parameters are identified as strings surrounded by % signs, like %PAR% for example. The user has to create the template files (replacing acordingly the simulation parameters with parameters search strings) prior to calling the pybps package and place a parameter file in csv format in the project folder. Template and parameter files should contain a specific search string in their filename to be recognized as such by pybps. By default, users should include _TMP in template filenames and _PAR in parameter filenames. These are just the default settings and can be modified in the config.ini file.
If the simulation project was identified by the user as corresponding to a batch run (batch = True) and pybps can't find any template or parameter file, it will give an error message and exit.
Step5: 2. Pre-process Simulation Data
Step6: 3. Run Simulation Jobs | Python Code:
import pybps
import os
import sys
import re
import sqlite3
import pandas as pd
from pandas.io import sql
import matplotlib.pyplot as plt
Explanation: PyBPS Tutorial
1. Initialization
The first thing to do is obviously to import the pybps package.
At the same time, we also import other useful packages.
End of explanation
bps = pybps.BPSProject('Examples/TRNSYS/Begin', batch=True)
Explanation: Once the pybps is imported into python, the first thing to do is to create an instance of the BPSProject class to get all the information required to run a particular simulation project from the project directory and hold it in a series of instance variables. This way, the project information will be easily retrieved when the different functions that manage simulation pre/post-processing and running are called.
The simplest and quickest way to get ready is to instanciate the BPSProject class with the path and batch arguments defined. path is the path (relative or absolute) to the directory holding the simulation files for a particular project. batch is a flag which sets whether the simulation project corresponds to a single run (batch = False, which is the default value) or to a batch run (batch = True).
In the present tutorial, we will use the very simple Begin example TRNSYS project that can be found in the Examples directory of any TRNSYS installation. We just made some modifications to the output (using Type46) and added parameters in an external parameters.txt file for the batch run. This example project can be found in the Examples/TRNSYS folder found in the PyBPS package. Note that as of today, PyBPS has only been tested with TRNSYS simulation projects, altought its functionnalities could easily be used with any other text-file-based simulation tool.
End of explanation
bps = pybps.BPSProject()
bps.path('Examples/TRNSYS/Begin')
bps.batch()
Explanation: Another way to create an instance of the BPSProject class is to call it without any arguments and then use the path and batch methods to set both variables. In the case of the batch method, calling it sets batch to True (since by default it is set to False, which corresponds to a single simulation run).
End of explanation
# Path to the folder containing simulation files
bps.path
# Simulation tool name
bps.sim_tool
# Simulation input file path
bps.simfile_path
# Basic config info needed to call the proper commands to run the simulation tool and identify the basic simulation files.
# This info is contained in the "config.ini" file.
bps.config
# Particular configuration parameters can be acceded like in any python dictionnary
bps.config['executable']
Explanation: Once we have got our bps object created, we can check the simulation project info obtained from the project folder and stored in the object's attributes. Behind the scenes, the BPSProject class uses two hidden methods to detect the simulation tool to be used (based on file extensions found in given directory) and to get the info required to run single or batch runs. The basic info needed to run any tool is contained in the config.ini file is the base folder of the pybps package.
If the project is of the "single run" type, the following instance variables hold the basic info needed to actually run the simulation:
End of explanation
# Unique ID for the batch to be run. Allows for succesive batch runs with different sets of parameters to be run within a same directory without the risk to overwrite cases.
# Also helps for storing info in sql databases
bps.series_id
# List of paths to the template files found in the project directory
bps.tmpfiles_pathlist
# Path to parameter file
bps.paramfile_path
# List of jobs to be run
# This is actually a list of dicts containing all the parameters for the current batch run
bps.job_list
# Number of simulation jobs to be run
bps.njob
Explanation: If the simulation project happens to be a batch run, an additional set of instance variables is created to hold information about the template and parameter files, as well as the list of jobs to be run. In pybps, template files are simulation files containing parameters to be replaced prior to running simulation. Parameters are identified as strings surrounded by % signs, like %PAR% for example. The user has to create the template files (replacing acordingly the simulation parameters with parameters search strings) prior to calling the pybps package and place a parameter file in csv format in the project folder. Template and parameter files should contain a specific search string in their filename to be recognized as such by pybps. By default, users should include _TMP in template filenames and _PAR in parameter filenames. These are just the default settings and can be modified in the config.ini file.
If the simulation project was identified by the user as corresponding to a batch run (batch = True) and pybps can't find any template or parameter file, it will give an error message and exit.
End of explanation
job = pybps.BPSJob('Examples/TRNSYS/Begin_PARAM/SIM00001')
job.path
Explanation: 2. Pre-process Simulation Data
End of explanation
bpsbatch.job[1].run()
bpsbatch.run()
Explanation: 3. Run Simulation Jobs
End of explanation |
12,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Preparation
Let's get a look on our Pokémons. The function below will plot all the sprites of a specific Pokémon on screen. That way we can have an idea of what kind of problem we can find in out sprite dataset.
Step1: Now let's take a look at the sprites of an Evolutionary Chain.
Plotting the evolutionary chain of Bulbasaur (Bulbasaur -> Ivysaur -> Venusaur) from the fifth generation games let's us see a problem with out dataset
Step2: On this step, we will build and test our pre-processing pipeline. Its goal is to identify the main object in the image (A simple task in out sprite dataset), find out its bounding box and redimensionate the image to an adequate size (We will use a 64 x 64 pixels image on this article)
This routine will be tested on Bulbasaur, Charmander, Squirtle and Venusaur sprites from the fifth generation.
Step3: Finnaly, let's call our centering pipeine on all sprites of a generation. To ensure the process is going smoothly, one in each thirty sprites will be ploted for visual inspection.
Step4: At least, let's process all the images and save them to disk for further use. | Python Code:
%matplotlib inline
from utility.plot import plot_all
#Plotting Bulbassaur ID = 1
plot_all(1)
#Plotting Charmander ID = 4
plot_all(4)
#Plotting Squirtle ID = 7
plot_all(7)
Explanation: Data Preparation
Let's get a look on our Pokémons. The function below will plot all the sprites of a specific Pokémon on screen. That way we can have an idea of what kind of problem we can find in out sprite dataset.
End of explanation
%matplotlib inline
from utility.plot import plot_chain
plot_chain("gen05_black-white",[1,2,3])
Explanation: Now let's take a look at the sprites of an Evolutionary Chain.
Plotting the evolutionary chain of Bulbasaur (Bulbasaur -> Ivysaur -> Venusaur) from the fifth generation games let's us see a problem with out dataset: Centering and cropping of images.
End of explanation
%matplotlib inline
from utility.preprocessing import center_and_resize
import matplotlib.image as mpimg
import os
main_folder = "./sprites/pokemon/main-sprites/"
game_folder = "gen05_black-white"
pkm_list = [1, 4, 7, 3]
for pkm in pkm_list:
img_file = "{id}.png".format(id=pkm)
img_path = os.path.join(main_folder,game_folder,img_file)
img = mpimg.imread(img_path)
center_and_resize(img,plot=True,id=img_path)
Explanation: On this step, we will build and test our pre-processing pipeline. Its goal is to identify the main object in the image (A simple task in out sprite dataset), find out its bounding box and redimensionate the image to an adequate size (We will use a 64 x 64 pixels image on this article)
This routine will be tested on Bulbasaur, Charmander, Squirtle and Venusaur sprites from the fifth generation.
End of explanation
%matplotlib inline
from utility.preprocessing import center_and_resize
import matplotlib.image as mpimg
from math import ceil
import matplotlib.pyplot as plt
main_folder = "./sprites/pokemon/main-sprites/"
game_folder = "gen05_black-white"
pkm_list = range(1,650)
image_list = []
for pkm in pkm_list:
try:
image_file = "{id}.png".format(id=pkm)
image_path = os.path.join(main_folder,game_folder,image_file)
image = mpimg.imread(image_path)
image_resize = center_and_resize(image,plot=False,id=image_path)
plot = (pkm % 30 == 0)
if plot:
image_list.append((image,image_resize))
except ValueError as e:
print("Out of Bounds Error:", e)
n_cols = 6
n_rows = ceil(2*len(image_list)/n_cols)
plt.figure(figsize=(16,256))
for idx, image_pair in enumerate(image_list):
image, image_resize = image_pair
plt.subplot(100,6,2*idx+1)
plt.imshow(image)
plt.subplot(100,6,2*idx+2)
plt.imshow(image_resize)
Explanation: Finnaly, let's call our centering pipeine on all sprites of a generation. To ensure the process is going smoothly, one in each thirty sprites will be ploted for visual inspection.
End of explanation
import warnings
import os
import matplotlib.image as img
from skimage import io
from utility.preprocessing import center_and_resize
main_folder = "./sprites/pokemon/main-sprites/"
dest_folder = "./sprites/pokemon/centered-sprites/"
if not os.path.exists(dest_folder):
os.makedirs(dest_folder)
gen_folders = {
"gen01_red-blue" : 151,
"gen01_red-green" : 151,
"gen01_yellow" : 151,
"gen02_crystal" : 251,
"gen02_gold" : 251,
"gen02_silver" : 251,
"gen03_emerald" : 386,
"gen03_firered-leafgreen" : 151,
"gen03_ruby-sapphire" : 386,
"gen04_diamond-pearl" : 493,
"gen04_heartgold-soulsilver" : 386,
"gen04_platinum" : 386,
"gen05_black-white" : 649
}
for gen, max_pkm in gen_folders.items():
print("Starting",gen)
main_gen_folder = os.path.join(main_folder,gen)
dest_gen_folder = os.path.join(dest_folder,gen)
if not os.path.exists(dest_gen_folder):
os.makedirs(dest_gen_folder)
for pkm_id in range(1,max_pkm+1):
image_file = "{id}.png".format(id=pkm_id)
image_path = os.path.join(main_gen_folder,image_file)
try:
image = mpimg.imread(image_path)
new_image = center_and_resize(image,plot=False,id=image_path)
new_image_path = os.path.join(dest_gen_folder,image_file)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
io.imsave(new_image_path,new_image)
except FileNotFoundError:
print(" - {file} not found".format(file=image_path))
print("Finished")
Explanation: At least, let's process all the images and save them to disk for further use.
End of explanation |
12,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K Means Clustering with Python
K Means Clustering is an unsupervised learning algorithm that tries to cluster data based on their similarity. Unsupervised learning means that there is no outcome to be predicted, and the algorithm just tries to find patterns in the data. In k means clustering, we have the specify the number of clusters we want the data to be grouped into. The algorithm randomly assigns each observation to a cluster, and finds the centroid of each cluster. Then, the algorithm iterates through two steps
Step1: Create some data
Step2: Visualize data
Step3: Creating Clusters | Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: K Means Clustering with Python
K Means Clustering is an unsupervised learning algorithm that tries to cluster data based on their similarity. Unsupervised learning means that there is no outcome to be predicted, and the algorithm just tries to find patterns in the data. In k means clustering, we have the specify the number of clusters we want the data to be grouped into. The algorithm randomly assigns each observation to a cluster, and finds the centroid of each cluster. Then, the algorithm iterates through two steps:
Reassign data points to the cluster whose centroid is closest. Calculate new centroid of each cluster. These two steps are repeated till the within cluster variation cannot be reduced any further. The within cluster variation is calculated as the sum of the euclidean distance between the data points and their respective cluster centroids.
Import Libraries
End of explanation
from sklearn.datasets import make_blobs
#create data
data = make_blobs(n_samples=200,n_features=2,centers=4,cluster_std=1.8,random_state=101)
Explanation: Create some data
End of explanation
plt.scatter(data[0][:,0],data[0][:,1],c=data[1],cmap='rainbow')
Explanation: Visualize data
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(data[0])
kmeans.cluster_centers_
kmeans.labels_
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True,figsize=(10,6))
ax1.set_title('K Means')
ax1.scatter(data[0][:,0],data[0][:,1],c=kmeans.labels_,cmap='rainbow')
ax2.set_title("Original")
ax2.scatter(data[0][:,0],data[0][:,1],c=data[1],cmap='rainbow')
Explanation: Creating Clusters
End of explanation |
12,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Playing with the Gaussian Distribution
There was a statement I saw online
Step1: I created a function that takes the observation, mean and standard deviation and returns the z-score. Notice it's just a little bit of arithmetic.
Here are a few examples, testing our method.
Step2: Calculate the Probability
Given the z-score, we can calculate the probability of someone being smarter or less-intelligent than observed. To do this, we need to estimate the area under the correct part of the curve. Looking at our curve again, we have
Notice the numbers along the bottom? Those are z-scores. So, if we have a z-score of -3, we'll know that anyone less intelligent than that is about 0.1% of the population. We take the perctage from under the curve in that region to get the answer of 0.1% If we have a z-score of -2.5, we add up the first two areas (0.1% and 0.5%) to figure out that 0.6% of people are less intelligent than that test score. If we get a z-score of 1, we'd add all the numbers from left to the z-score of 1 and get about 84.1%.
SciPy has a normal distribution with a cdf function, or the Cumulative Distribution Function. That's the function that measures the area of a curve, up to a point. To use this function, we write
Step3: Here, I use p for probability, or the probability that an observation will be lower than the value provided. I pass in the observation, the mean and the standard deviation. The function looks up my z-score for me and then calls SciPy's CDF function on the normal distribution.
Let's calculate this for a few z-scores. I'll use pandas to create a data frame, because they print neatly.
Step4: This code creates a pandas data frame by first setting a few sample test scores from 60 to 160. Then calculating their z-scores and the proportion of the population estimated to be less intelligent than that score.
So, someone with a score of 60 would have almost 4 people out of a thousand that are less intelligent. Someone with a score of 160 would expect that in a room of 100,000, 3 would be more intelligent than they are.
This is a similar result that we see in the bell curve, only as applied with our mean of 100 and our standard deviation of 15.
Understanding the Conclusions
Taking a few moments to calculate the probability of someone being less smart than a score reminds me how distributions work. Maybe this was something most programmers learned and don't use often, so the knowledge gets a little dusty, a little less practical.
I used matplotlib to create a graphic with our IQ distribution in it. I just grabbed the code from the SciPy documentation and adjusted it for our mean and standard deviation. I also use the ggplot style, because I think it's pretty slick.
Step5: The blue line shows us an approximation of the distribution. I used 1,000 random observations to get my data. I could have used 10,000 or 100,000 and the curve would look really slick. However, that would hide a little what we actually mean when we talk about distributions. If I took 1,000 students and gave them an IQ test, I would expect scores that were kind of blotchy like the red histogram in the plot. Some categories would be a little above the curve, some below.
As a side note, if I gave everyone in a school an IQ test and I saw that my results were skewed a little to the left or the right, I would probably conclude that the students in the school test better or worse than the population generally. That, or something was different about the day, or the test, or grading of the test, or the collection of tests or something. Seeing things different than expected is where the fun starts.
What About the Snark?
Oh, and by the way, what about "I don't know anyone with an IQ above 7 that respects Hillary Clinton"?
How common would it be to find someone with an IQ at 7? Let's use our code to figure that out.
Step6: That is, the z-score for a 7 test score is -6.2, or 6.2 standard deviations from the mean. That's a very low score. The probability that someone gets a lower score? 2.8e-10. How small is that number?
Step7: Or, if the snarky comment were accurate, there would be 2 people that have an IQ lower than 7 on the planet. Maybe both of us could have chilled out a little and came up with funnier ways to tease.
Interestingly, if we look at recent (9/21/15) head-to-head polls of Hillary Clinton against top Republican candidates, we see that
Step8: Or, from 15 hypothetical elections against various Republican candidates, about 46.8% would vote for former Secretary Clinton over her potential Republican rivals at this point. It's interesting to point out that the standard deviation in all these polls is only about 2%. Or, of all the Republican candidates, at this point very few people are thinking differently from party politics. Either the particular candidates are not well known, or people are just that polarized that they'll vote for their party's candidate no matter who they run.
If we're facetious and say that only stupid people are voting for Hillary Clinton (from the commenter's snark), how would we find the IQ threshold? Or, put another way, if you ranked US voters by intelligence, and assumed that the dumbest ones would vote for Hillary Clinton, and only the smart ones would vote Republican, what IQ score would these dumb ones have?
We can get the z-score like this
Step9: So, the z-score is just about 1/10th of one standard deviation below the mean. That is, it's going to be pretty close to 100.
Using the z-score formula, we can solve for $x_{1}$ and get | Python Code:
def z_score(x, m, s):
return (x - m) / s
Explanation: Playing with the Gaussian Distribution
There was a statement I saw online: "I don't know anyone with an IQ above 7 that respects Hillary Clinton."
Of course, the person is trying to sound smart and snarky but I don't think they pull it off very well. My first reaction was to show them how dumb they are, because arguments online are always a good idea, right? I didn't say anything, as I usually don't. Whatever. I thought I'd write down how to think about standard scores like this instead.
Before I start, there are interesting discussions about why IQ is an outdated idea. For example:
There is no reason to believe, and much reason not to believe, that the measure of so-called "Intelligence Quotient" in any way reflects some basic cognitive capacity or "natural kind" of the human mind. The domain-general measure of IQ isn't motivated by any recent discovery of cognitive or developmental psychology.
Atran S. 2015. IQ. In: Brockman J, editor. This idea must die: scientific ideas that are blocking progress. New York, New York: Harper Perennial. p. 15.
Notwithstanding, let's have a little fun with this, or brush up on some statistics using Python.
Getting Started
The Stanford-Binet IQ test is an intelligence test standardized for a median of 100 and a standard deviation of 15. That means that someone with an IQ of 100 has about as many people smarter than them as there are less intelligent. It also means that we can calculate about where someone fits in the population if their score is different than 100.
We'll use a Gaussian distribution to describe the scores. This is the bell curve we've all probably seen before, where most things group up in the middle and the exceptional items are found to the left and right of the center:
To figure out what a test score says about a person, we'll:
compare the test score to the everyone else (calculate the z-score)
figure out the proportion of people with lower scores (calculate the probability)
understand the conclusions
Calculate the Z-Score
The z-score is the distance between an observed value and the mean value divided by the standard deviation. For IQ scores, we'll use the median for the mean, since I don't have (or couldn't be bothered to find) better data. Here's the formula:
$$z = \frac{x_{i} - \mu}{\sigma}$$
where $x_{i}$ is the observed value, $\mu$ is the mean and $\sigma$ is the standard deviation.
Put another way, the mean measures the middle of normal data and the standard deviation measures the width. If it's wide, there's a lot of variance in the data, and if it's narrow, almost everything comes out near the mean value. The z-score measures how different an observation is from the middle of the data. There's another discussion of this that might be useful.
So, calculating the z-score is our first step so that we can compare the teset score to everyone else's test score.
Let's do this with Python.
End of explanation
print(z_score(95, 100, 15), z_score(130, 100, 15), z_score(7, 100, 15))
# We should see -0.3333333333333333 2.0 -6.2 or 1/3 deviation below average, 2 above and 6.2 below.
Explanation: I created a function that takes the observation, mean and standard deviation and returns the z-score. Notice it's just a little bit of arithmetic.
Here are a few examples, testing our method.
End of explanation
import scipy.stats as st
def p(x, m, s):
z = z_score(x, m, s)
return st.norm.cdf(z)
Explanation: Calculate the Probability
Given the z-score, we can calculate the probability of someone being smarter or less-intelligent than observed. To do this, we need to estimate the area under the correct part of the curve. Looking at our curve again, we have
Notice the numbers along the bottom? Those are z-scores. So, if we have a z-score of -3, we'll know that anyone less intelligent than that is about 0.1% of the population. We take the perctage from under the curve in that region to get the answer of 0.1% If we have a z-score of -2.5, we add up the first two areas (0.1% and 0.5%) to figure out that 0.6% of people are less intelligent than that test score. If we get a z-score of 1, we'd add all the numbers from left to the z-score of 1 and get about 84.1%.
SciPy has a normal distribution with a cdf function, or the Cumulative Distribution Function. That's the function that measures the area of a curve, up to a point. To use this function, we write:
End of explanation
import numpy as np
import pandas as pd
scores = np.arange(60, 161, 20)
z_scores = list(map(lambda x: z_score(x, 100, 15), scores))
less_intelligent = list(map(lambda x: p(x, 100, 15), scores))
df = pd.DataFrame()
df['test_score'] = scores
df['z_score'] = z_scores
df['less_intelligent'] = less_intelligent
df
Explanation: Here, I use p for probability, or the probability that an observation will be lower than the value provided. I pass in the observation, the mean and the standard deviation. The function looks up my z-score for me and then calls SciPy's CDF function on the normal distribution.
Let's calculate this for a few z-scores. I'll use pandas to create a data frame, because they print neatly.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
mu, sigma = 100, 15. # mean and standard deviation
s = sorted(np.random.normal(mu, sigma, 1000))
count, bins, ignored = plt.hist(s, 30, normed=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2)
plt.show()
Explanation: This code creates a pandas data frame by first setting a few sample test scores from 60 to 160. Then calculating their z-scores and the proportion of the population estimated to be less intelligent than that score.
So, someone with a score of 60 would have almost 4 people out of a thousand that are less intelligent. Someone with a score of 160 would expect that in a room of 100,000, 3 would be more intelligent than they are.
This is a similar result that we see in the bell curve, only as applied with our mean of 100 and our standard deviation of 15.
Understanding the Conclusions
Taking a few moments to calculate the probability of someone being less smart than a score reminds me how distributions work. Maybe this was something most programmers learned and don't use often, so the knowledge gets a little dusty, a little less practical.
I used matplotlib to create a graphic with our IQ distribution in it. I just grabbed the code from the SciPy documentation and adjusted it for our mean and standard deviation. I also use the ggplot style, because I think it's pretty slick.
End of explanation
z = z_score(7, 100, 15)
prob = p(7, 100, 15)
rounded_prob = round(prob, 15)
print("The z-score {0} and probability {1} of a test score of 7.".format(z, rounded_prob))
Explanation: The blue line shows us an approximation of the distribution. I used 1,000 random observations to get my data. I could have used 10,000 or 100,000 and the curve would look really slick. However, that would hide a little what we actually mean when we talk about distributions. If I took 1,000 students and gave them an IQ test, I would expect scores that were kind of blotchy like the red histogram in the plot. Some categories would be a little above the curve, some below.
As a side note, if I gave everyone in a school an IQ test and I saw that my results were skewed a little to the left or the right, I would probably conclude that the students in the school test better or worse than the population generally. That, or something was different about the day, or the test, or grading of the test, or the collection of tests or something. Seeing things different than expected is where the fun starts.
What About the Snark?
Oh, and by the way, what about "I don't know anyone with an IQ above 7 that respects Hillary Clinton"?
How common would it be to find someone with an IQ at 7? Let's use our code to figure that out.
End of explanation
instances_per_billion = round((1/prob) / 1000000000, 2)
people_on_the_planet = 7.125 # billion
instances_on_the_planet = people_on_the_planet / instances_per_billion
instances_on_the_planet
Explanation: That is, the z-score for a 7 test score is -6.2, or 6.2 standard deviations from the mean. That's a very low score. The probability that someone gets a lower score? 2.8e-10. How small is that number?
End of explanation
votes = pd.Series([46.3, 45.3, 46.3, 46.3, 49.4, 47.8, 42.7, 43.3, 49.0, 47.7, 48.3, 46.5, 46.5, 49.0, 48.0])
# I thought it was easier to read percentages as 46.3, but I'm converting those numbers here to fit
# in the set [0,1] as well-behaved probabilities do.
votes = votes.apply(lambda x: x / 100)
votes.describe()
Explanation: Or, if the snarky comment were accurate, there would be 2 people that have an IQ lower than 7 on the planet. Maybe both of us could have chilled out a little and came up with funnier ways to tease.
Interestingly, if we look at recent (9/21/15) head-to-head polls of Hillary Clinton against top Republican candidates, we see that:
End of explanation
hillary_z_score = st.norm.ppf(votes.mean())
hillary_z_score
Explanation: Or, from 15 hypothetical elections against various Republican candidates, about 46.8% would vote for former Secretary Clinton over her potential Republican rivals at this point. It's interesting to point out that the standard deviation in all these polls is only about 2%. Or, of all the Republican candidates, at this point very few people are thinking differently from party politics. Either the particular candidates are not well known, or people are just that polarized that they'll vote for their party's candidate no matter who they run.
If we're facetious and say that only stupid people are voting for Hillary Clinton (from the commenter's snark), how would we find the IQ threshold? Or, put another way, if you ranked US voters by intelligence, and assumed that the dumbest ones would vote for Hillary Clinton, and only the smart ones would vote Republican, what IQ score would these dumb ones have?
We can get the z-score like this:
End of explanation
iq = 15 * hillary_z_score + 100
iq
Explanation: So, the z-score is just about 1/10th of one standard deviation below the mean. That is, it's going to be pretty close to 100.
Using the z-score formula, we can solve for $x_{1}$ and get:
$$z = \frac{x_{i} - \mu}{\sigma}$$
$$x_{i} = \sigma z + \mu$$
Plugging our z-score number in, with our standard deviation and mean, we get:
End of explanation |
12,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VisPy colormaps
This notebook illustrates the colormap API provided by VisPy.
List all colormaps
Step1: Discrete colormaps
Discrete colormaps can be created by giving a list of colors, and an optional list of control points (in $[0,1]$, the first and last points need to be $0$ and $1$ respectively). The colors can be specified in many ways (1-character shortcuts, hexadecimal values, arrays or RGB values, ColorArray instances, and so on).
Step2: Linear gradients | Python Code:
import numpy as np
from vispy.color import (get_colormap, get_colormaps, Colormap)
from IPython.display import display_html
for cmap in get_colormaps():
display_html('<h3>%s</h3>' % cmap, raw=True)
display_html(get_colormap(cmap))
Explanation: VisPy colormaps
This notebook illustrates the colormap API provided by VisPy.
List all colormaps
End of explanation
Colormap(['r', 'g', 'b'], interpolation='zero')
Colormap(['r', 'g', 'y'], interpolation='zero')
Colormap(np.array([[0, .75, 0],
[.75, .25, .5]]),
[0., .25, 1.],
interpolation='zero')
Colormap(['r', 'g', '#123456'],
interpolation='zero')
Explanation: Discrete colormaps
Discrete colormaps can be created by giving a list of colors, and an optional list of control points (in $[0,1]$, the first and last points need to be $0$ and $1$ respectively). The colors can be specified in many ways (1-character shortcuts, hexadecimal values, arrays or RGB values, ColorArray instances, and so on).
End of explanation
Colormap(['r', 'g', '#123456'])
Colormap([[1,0,0], [1,1,1], [1,0,1]],
[0., .75, 1.])
Explanation: Linear gradients
End of explanation |
12,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimation of a Categorical distribution
Maximum Likelihood Estimation
We observe a dataset ${x^{(n)}}_{n=1\dots N}$. The model for a single observation is a categorical distribution with parameter $\pi = (\pi_1, \dots, \pi_I)$ where
\begin{eqnarray}
x^{(n)} & \sim & p(x|\pi) = \prod_{i=1}^{I} \pi_i^{\ind{i = x^{(n)}}}
\end{eqnarray}
where $\sum_i \pi_i = 1$.
The loglikelihood of the entire dataset is
\begin{eqnarray}
{\cal L}(\pi_1,\dots,\pi_I) & = & \sum_{n=1}^N\sum_{i=1}^I \ind{i = x^{(n)}} \log \pi_i
\end{eqnarray}
This is a constrained optimisation problem.
Form the Lagrangian
\begin{eqnarray}
\Lambda(\pi, \lambda) & = & \sum_{n=1}^N\sum_{i'=1}^I \ind{i' = x^{(n)}} \log \pi_{i'} + \lambda \left( 1 - \sum_{i'} \pi_{i'} \right ) \
\frac{\partial \Lambda(\pi, \lambda)}{\partial \pi_i} & = & \sum_{n=1}^N \ind{i = x^{(n)}} \frac{1}{\pi_i} - \lambda = 0 \
\pi_i & = & \frac{\sum_{n=1}^N \ind{i = x^{(n)}}}{\lambda}
\end{eqnarray}
We solve for $\lambda$
\begin{eqnarray}
1 & = & \sum_i \pi_i = \frac{\sum_{i=1}^I \sum_{n=1}^N \ind{i = x^{(n)}}}{\lambda} \
\lambda & = & \sum_{i=1}^I \sum_{n=1}^N \ind{i = x^{(n)}} = \sum_{n=1}^N 1 = N
\end{eqnarray}
Hence
\begin{eqnarray}
\pi_i & = & \frac{\sum_{n=1}^N \ind{i = x^{(n)}}}{N}
\end{eqnarray}
Step1: Maximum A-Posteriori Estimation
$$
\pi \sim \mathcal{D}(\pi_{1
Step2: Our goal is deciding if the random variables $x$ and $y$ are independent or dependent, given some observations.
| | || |
|-|- |-|-|
| |$y$|| |
|$x$ |$3$|$5$|$9$|
| |$7$|$9$|$17$|
Independent model $M_1$
\begin{equation}
p(x, y) = p(x) p(y)
\end{equation}
\begin{align}
\pi_1 & \sim \mathcal{D}(\pi_1; a_1) &
\pi_2 & \sim \mathcal{D}(\pi_2; a_2) \
x^{(n)} & \sim \mathcal{C}(x; \pi_1) &
y^{(n)} & \sim \mathcal{C}(y; \pi_2)
\end{align}
We let
$X_{i+} = \sum_j X_{i,j}$
$X_{+j} = \sum_i X_{i,j}$
The marginal likelihood can be found as
\begin{eqnarray}
\log p(X|M_1) & = & \log{\Gamma(\sum_{i} a_{1}(i))} - {\sum_{i} \log \Gamma(a_{1}(i)} - \log{\Gamma(\sum_{i} (a_{1}(i) + \sum_{j} C(i,j)) )} + {\sum_{i} \log \Gamma(a_{1}(i) + \sum_{j} C(i,j))} \
& & +\log{\Gamma(\sum_{j} a_{2}(j)} - {\sum_{j} \log \Gamma(a_{2}(j))} - \log{\Gamma(\sum_{j} (a_{2}(j) + \sum_{i} C(i,j)) )} + {\sum_{j} \log \Gamma(a_{2}(j) + \sum_{i} C(i,j))} \
& = & \log{\Gamma(A_1)} - \sum_{i} \log \Gamma(a_{1}(i)) - \log{\Gamma(A_1+ N)} + {\sum_{i} \log \Gamma(a_{1}(i) + C_1(i))} \
& & + \log{\Gamma(A_2)} - {\sum_{j} \log \Gamma(a_{2}(j))} - \log{\Gamma(A_2 + N )} + {\sum_{j} \log \Gamma(a_{2}(j) + C_2(j))} \
\end{eqnarray}
Dependent model $M_2$
\begin{equation}
p(x_1, x_2)
\end{equation}
$\pi_{1,2}$ is a $S_1 \times S_2$ matrix where the joint distribution of entries is Dirichlet $\mathcal{D}(\pi_{1,2}; a_{1,2})$ with $S_1 \times S_2$ parameter matrix $a_{1,2}$. Then, the probability that $p(x_1 = i, x_2 = j|\pi_{1,2}) = \pi_{1,2}(i,j)$.
\begin{eqnarray}
\pi_{1,2} & \sim & \mathcal{D}(\pi_{1,2}; a_{1,2}) \
(x_1, x_2)^{(n)} & \sim & \mathcal{C}((x_1,x_2); \pi_{1,2}) \
\end{eqnarray}
\begin{eqnarray}
\log p(X|M_2) & = & \log{\Gamma(A_{1,2})} - {\sum_{i,j} \log \Gamma(a_{1,2}(i,j))} - \log{\Gamma(A_{1,2}+ N)} + {\sum_{i,j} \log \Gamma(a_{1,2}(i,j) + C(i,j))}
\end{eqnarray}
Dependent model $M_3$
\begin{equation}
p(x_1, x_2) = p(x_1) p(x_2|x_1)
\end{equation}
\begin{eqnarray}
\pi_1 & \sim & \mathcal{D}(\pi_1; a_1) \
\pi_{2,1} & \sim & \mathcal{D}(\pi_2; a_2) \
\vdots \
\pi_{2,S_1} & \sim & \mathcal{D}(\pi_2; a_2) \
x_1^{(n)} & \sim & \mathcal{C}(x_1; \pi_1) \
x_2^{(n)} & \sim & \mathcal{C}(x_2; \pi_{2}(x_1^{(n)},:))
\end{eqnarray}
Step3: Conceptually $M_2$, $M_3$ and $M_3b$ should have the same marginal likelihood score, as the dependence should not depend on how we parametrize the conditional probability tables. However, this is dependent on the choice of the prior parameters.
How should the prior parameters of $M_2$, $M_3$ and $M_3b$ be chosen such that we get the same evidence score?
The models
$M_2$, $M_3$ and $M_3b$ are all equivalent if the prior parameters are chosen appropriately. For $M_2$ and $M_3$, we need to take $a_1(i) = \sum_j a_{1,2}(i,j)$.
For example, if in $M_2$, the prior parameters $a_{1,2}$ are chosen as
\begin{eqnarray}
a_{1,2} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
we need to choose in model $M_3$
\begin{eqnarray}
a_{1} & = & \left(\begin{array}{c} 3 \ 3 \end{array} \right)
\end{eqnarray}
\begin{eqnarray}
a_{2} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
and in model $M_3b$
\begin{eqnarray}
a_{2} & = & \left(\begin{array}{ccc} 2 & 2 & 2 \end{array} \right)
\end{eqnarray}
\begin{eqnarray}
a_{1} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
This is due to the fact that the marginals of a Dirichlet distribution are also Dirichlet. In particular,
if a probability vector $x$ and corresponding parameter vector $a$ are partitioned as $x = (x_\iota, x_{-\iota})$
and $a = (a_\iota, a_{-\iota})$, the Dirichlet distribution
$$
\mathcal{D}(x_\iota, x_{-\iota}; a_\iota, a_{-\iota})
$$
has marginals
$$
\mathcal{D}(X_{\iota}, X_{-\iota}; A_{\iota}, A_{-\iota})
$$
where $X_\iota = \sum_{i \in \iota} x_i$ and $X_{-\iota} = \sum_{i \in -\iota} x_i$, where $A_\iota = \sum_{i \in \iota} a_i$ and $A_{-\iota} = \sum_{i \in -\iota} a_i$. The script below verifies that the marginals are indeed distributed according to this formula.
Step4: Question
Are two given histograms drawn from the same distribution?
$[3,5,12,4]$
$[8, 14, 31, 14]$
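One way to answer this is to compare the marginal likelihood of a model in which both histograms share a single $\pi$ against a model in which each histogram has its own $\pi$, using the Dirichlet evidence $B(a+C)/B(a)$ derived above. The sketch below is not part of the original notebook, and the flat prior $a_i = 1$ is an assumption:
import numpy as np
from scipy.special import gammaln
def log_evidence(C, a):
    # log B(a + C) - log B(a): the Dirichlet evidence for a count vector C
    C, a = np.asarray(C, float), np.asarray(a, float)
    return (gammaln(a + C).sum() - gammaln((a + C).sum())
            - gammaln(a).sum() + gammaln(a.sum()))
C1, C2 = np.array([3, 5, 12, 4]), np.array([8, 14, 31, 14])
a = np.ones(4)                                        # flat prior (assumption)
log_same = log_evidence(C1 + C2, a)                   # one shared pi generates both histograms
log_diff = log_evidence(C1, a) + log_evidence(C2, a)  # separate pi per histogram
print(log_same - log_diff)                            # log Bayes factor in favour of a shared pi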
Visualizing the Dirichlet Distribution
[http://blog.bogatron.net/blog/2014/02/02/visualizing-dirichlet-distributions/]
Step5: 6-faced die with repeated labels
Consider a die where the numbers on each face are labeled, possibly with repetitions, from the set $1\dots 6$.
A 'normal' die has labels $1,2,3,4,5,6$ but we allow other labelings, for example as $1,1,3,5,5,5$ or $1,1,1,1,1,6$.
Can we construct a method to find how the die has been labeled from a sequence of outcomes?
Does my data have a single cluster or are there two clusters?
We observe a dataset of $N$ points and want to decide if there are one or two clusters. For example, the below dataset, when visualized, seems to suggest two clusters; however, the separation is not very clear; perhaps a single component might also have been sufficient. How can we derive a procedure that leads to a reasonable answer in this ambiguous situation?
<img src="clusters.png" width='180' align='center'>
One principled approach is based on Bayesian model selection. Our approach will be describing two alternative generative models for data. Each generative model will reflect our assumption what it means to have clusters. In other words, we should describe two different procedures
Step6: Model $M = 2$
Step7: Extension
We dont know the component variances
Combining into a single model
Capture
Recapture
Change point
Coin switch
Coal Mining Data
Single Change Point
Multiple Change Point
Bivariate Gaussian model selection
Suppose we are given a dataset $X = {x_1, x_2, \dots, x_N }$ where $x_n \in \mathbb{R}^K$ for $n=1 \dots N$ and consider two competing models
Step8: Computing the marginal likelihood
Model 1
\begin{eqnarray}
p(X| m=1) & = & \int d{s_{1 | Python Code:
# %load template_equations.py
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
from importlib import reload
reload(nut)
Latex('$\DeclareMathOperator{\trace}{Tr}$')
L = nut.pdf2latex_dirichlet(x=r'\pi', a=r'a',N=r'I', i='i')
display(HTML(nut.eqs2html_table(L)))
Explanation: Estimation of a Categorical distribution
Maximum Likelihood Estimation
We observe a dataset ${x^{(n)}}_{n=1\dots N}$. The model for a single observation is a categorical distribution with parameter $\pi = (\pi_1, \dots, \pi_I)$ where
\begin{eqnarray}
x^{(n)} & \sim & p(x|\pi) = \prod_{i=1}^{I} \pi_i^{\ind{i = x^{(n)}}}
\end{eqnarray}
where $\sum_i \pi_i = 1$.
The loglikelihood of the entire dataset is
\begin{eqnarray}
{\cal L}(\pi_1,\dots,\pi_I) & = & \sum_{n=1}^N\sum_{i=1}^I \ind{i = x^{(n)}} \log \pi_i
\end{eqnarray}
This is a constrained optimisation problem.
Form the Lagrangian
\begin{eqnarray}
\Lambda(\pi, \lambda) & = & \sum_{n=1}^N\sum_{i'=1}^I \ind{i' = x^{(n)}} \log \pi_{i'} + \lambda \left( 1 - \sum_{i'} \pi_{i'} \right ) \
\frac{\partial \Lambda(\pi, \lambda)}{\partial \pi_i} & = & \sum_{n=1}^N \ind{i = x^{(n)}} \frac{1}{\pi_i} - \lambda = 0 \
\pi_i & = & \frac{\sum_{n=1}^N \ind{i = x^{(n)}}}{\lambda}
\end{eqnarray}
We solve for $\lambda$
\begin{eqnarray}
1 & = & \sum_i \pi_i = \frac{\sum_{i=1}^I \sum_{n=1}^N \ind{i = x^{(n)}}}{\lambda} \
\lambda & = & \sum_{i=1}^I \sum_{n=1}^N \ind{i = x^{(n)}} = \sum_{n=1}^N 1 = N
\end{eqnarray}
Hence
\begin{eqnarray}
\pi_i & = & \frac{\sum_{n=1}^N \ind{i = x^{(n)}}}{N}
\end{eqnarray}
End of explanation
from IPython.display import display, Math, Latex, HTML
import html_utils as htm
wd = '65px'
L = [[htm.TableCell('', width=wd), htm.TableCell('$y=1$', width=wd), htm.TableCell('$y=j$', width=wd), htm.TableCell('$y=I_2$', width='80px')],
[r'$x=1$',r'$S_{1,1}$',r'',r'$S_{1,I_2}$'],
[r'$x=i$',r'',r'$S_{i,j}$',r''],
[r'$x=I_1$',r'$S_{I_1,1}$',r'',r'$S_{I_1,I_2}$']]
t = htm.make_htmlTable(L)
display(HTML(str(t)))
#print(str(t))
Explanation: Maximum A-Posteriori Estimation
$$
\pi \sim \mathcal{D}(\pi_{1:I}; a_{1:I} )
$$
where $\sum_i \pi_i = 1$. For $n = 1\dots N$
\begin{eqnarray}
x^{(n)} & \sim & p(x|\pi) = \prod_{i=1}^{I} \pi_i^{\ind{i = x^{(n)}}}
\end{eqnarray}
$X = {x^{(1)},\dots,x^{(N)} }$
The posterior is
\begin{align}
\log p(\pi_{1:I}| X) & =^+ \log p(\pi_{1:I}, X) \
& = \log{\Gamma(\sum_{i} a_{i})} - {\sum_{i} \log \Gamma(a_{i})}
+ \sum_{{i}=1}^{I} (a_{i} - 1) \log{\pi}{i}
+ \sum{i=1}^I\sum_{n=1}^N \ind{i = x^{(n)}} \log \pi_i \
& =^+ \sum_{i=1}^I \left(a_i - 1 + \sum_{n=1}^N \ind{i = x^{(n)}}\right) \log \pi_i
\end{align}
Finding the parameter vector $\pi_{1:I}$ that maximizes the posterior density is a constrained optimisation problem. After omitting constant terms that do not depend on $\pi$, we form the Lagrangian
\begin{eqnarray}
\Lambda(\pi, \lambda) & = & \sum_{i=1}^I \left(a_i - 1 + \sum_{n=1}^N \ind{i = x^{(n)}}\right) \log \pi_i + \lambda \left( 1 - \sum_{i'} \pi_{i'} \right ) \
\frac{\partial \Lambda(\pi, \lambda)}{\partial \pi_i} & = & \left(a_i - 1 + \sum_{n=1}^N \ind{i = x^{(n)}}\right) \frac{1}{\pi_i} - \lambda = 0 \
\pi_i & = & \frac{a_i - 1 + \sum_{n=1}^N \ind{i = x^{(n)}}}{\lambda}
\end{eqnarray}
We solve for $\lambda$
\begin{eqnarray}
1 & = & \sum_i \pi_i = \frac{- I + \sum_{i=1}^I \left( a_i + \sum_{n=1}^N \ind{i = x^{(n)}} \right) }{\lambda} \
\lambda & = & N - I + \sum_{i=1}^I a_i
\end{eqnarray}
Writing the count of observations equal to $i$ as $C_i \equiv \sum_{n=1}^N \ind{i = x^{(n)}}$, we obtain
\begin{eqnarray}
\pi_i & = & \frac{C_i + a_i - 1}{N + \sum_{i=1}^I a_i - I}
\end{eqnarray}
Full Bayesian Inference
Setting the count of observations equal to $i$ as $C_i \equiv \sum_{n=1}^N \ind{i = x^{(n)}}$, we obtain
The posterior is
\begin{eqnarray}
\log p(\pi_{1:I}, X) & = & \log{\Gamma(\sum_{i} a_{i})} - {\sum_{i} \log \Gamma(a_{i})} + \sum_{i=1}^I \left(\left(a_i + \sum_{n=1}^N \ind{i = x^{(n)}}\right) - 1 \right) \log \pi_i \
& = & \log{\Gamma(\sum_{i} a_{i})} - {\sum_{i} \log \Gamma(a_{i})}\
& & + \sum_{i=1}^I (a_i + C_i - 1) \log \pi_i \
& & + \log{\Gamma(\sum_{i} (a_{i} + C_i) )} - {\sum_{i} \log \Gamma(a_{i} + C_i)} \
& & - \log{\Gamma(\sum_{i} (a_{i} + C_i) )} + {\sum_{i} \log \Gamma(a_{i} + C_i)} \
& = & \log \mathcal{D}(\pi_{1:I}, a_{1:I} ) + \log p(X)
\end{eqnarray}
\begin{eqnarray}
\log p(X) & = & \log{\Gamma(\sum_{i} a_{i})} - {\sum_{i} \log \Gamma(a_{i})} - \log{\Gamma(\sum_{i} (a_{i} + C_i) )} + {\sum_{i} \log \Gamma(a_{i} + C_i)}
\end{eqnarray}
$a_+ = \sum_{i} a_{i}$ and $C_+ = \sum_{i} C_i$
\begin{eqnarray}
p(X) & = & \frac{\Gamma(a_+) \prod_{i} \Gamma(a_{i} + C_i)}{\Gamma(a_+ + C_+ ) \prod_{i} \Gamma(a_{i})} = \frac{ \left(\prod_{i} \Gamma(a_{i} + C_i)\right)/ \Gamma(a_+ + C_+ )}{ \left(\prod_{i} \Gamma(a_{i}) \right)/ \Gamma(a_+)} = \frac{B(a + C)}{B(a)}
\end{eqnarray}
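The MAP estimate and the log marginal likelihood above can be evaluated directly from the counts. A small helper (a sketch, not part of the original notebook; the counts passed in are illustrative) might look like this:
import numpy as np
from scipy.special import gammaln
def categorical_map_and_evidence(C, a):
    # C: observed counts C_i, a: Dirichlet prior parameters a_i
    C, a = np.asarray(C, float), np.asarray(a, float)
    pi_map = (C + a - 1) / (C.sum() + a.sum() - len(a))        # MAP estimate
    log_px = (gammaln(a.sum()) - gammaln(a).sum()
              - gammaln(a.sum() + C.sum()) + gammaln(a + C).sum())  # log p(X) = log B(a+C)/B(a)
    return pi_map, log_px
print(categorical_map_and_evidence(C=[3, 5, 9], a=[1, 1, 1]))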
Are $x$ and $y$ independent?
Suppose we observe a dataset of $(x, y)$ pairs where
$x \in {1,\dots,I_1}$ and
$y \in {1,\dots,I_2}$.
| | |
| --- | --- |
| $x^{(1)}$ | $y^{(1)}$ |
| $\vdots$ | $\vdots$ |
| $x^{(n)}$ | $y^{(n)}$ |
| $\vdots$ | $\vdots$ |
| $x^{(N)}$ | $y^{(N)}$ |
$\newcommand{\ind}[1]{\left[#1\right]}$
We are given the counts of observations where $x = i$ while $y = j$. These counts can be stored as an array, that is known as a contingency table
$
S_{i,j} = \sum_{n=1}^N \ind{x^{(n)} = i}\ind{y^{(n)} = j}
$
End of explanation
import numpy as np
from notes_utilities import randgen, log_sum_exp, normalize_exp, normalize
import scipy as sc
from scipy.special import gammaln
#C = np.array([[3,1,9],[7,9,17]])
C = 1*np.array([[4,1,1],[1,1,6]])
#C = np.array([[0,1,1],[1,0,2]])
C_i = np.sum(C, axis=1)
C_j = np.sum(C, axis=0)
N = np.sum(C)
S_1 = C.shape[0]
S_2 = C.shape[1]
#M1 Parameter
M1 = {'a_1': S_2*np.ones(S_1), 'a_2': S_1*np.ones(S_2), 'A_1': None, 'A_2': None}
M1['A_1'] = np.sum(M1['a_1'])
M1['A_2'] = np.sum(M1['a_2'])
#p(x_1) p(x_2)
log_marglik_M1 = gammaln(M1['A_1']) - np.sum(gammaln(M1['a_1'])) - gammaln(M1['A_1'] + N) + np.sum(gammaln(M1['a_1'] + C_i)) \
+ gammaln(M1['A_2']) - np.sum(gammaln(M1['a_2'])) - gammaln(M1['A_2'] + N) + np.sum(gammaln(M1['a_2'] + C_j))
# p(x_1, x_2)
M2 = {'a_12': np.ones((S_1,S_2)), 'A_12':None}
M2['A_12'] = np.sum(M2['a_12'])
log_marglik_M2 = gammaln(M2['A_12']) - np.sum(gammaln(M2['a_12'])) - gammaln(M2['A_12'] + N) + np.sum(gammaln(M2['a_12'] + C))
M3 = {'a_1': S_2*np.ones(S_1), 'a_2': np.ones(S_2), 'A_1': None, 'A_2': None}
M3['A_1'] = np.sum(M3['a_1'])
M3['A_2'] = np.sum(M3['a_2'])
#p(x_1) p(x_2|x_1)
log_marglik_M3 = gammaln(M3['A_1']) - np.sum(gammaln(M3['a_1'])) - gammaln(M3['A_1'] + N) + np.sum(gammaln(M3['a_1'] + C_i))
for i in range(S_1):
log_marglik_M3 += gammaln(M3['A_2']) - np.sum(gammaln(M3['a_2'])) - gammaln(M3['A_2'] + C_i[i]) + np.sum(gammaln(M3['a_2'] + C[i,:]))
# Beware the prior parameters
M3b = {'a_1': np.ones(S_1), 'a_2': S_1*np.ones(S_2), 'A_1': None, 'A_2': None}
M3b['A_1'] = np.sum(M3b['a_1'])
M3b['A_2'] = np.sum(M3b['a_2'])
#p(x_2) p(x_1|x_2)
log_marglik_M3b = gammaln(M3b['A_2']) - np.sum(gammaln(M3b['a_2'])) - gammaln(M3b['A_2'] + N) + np.sum(gammaln(M3b['a_2'] + C_j))
for j in range(S_2):
log_marglik_M3b += gammaln(M3b['A_1']) - np.sum(gammaln(M3b['a_1'])) - gammaln(M3b['A_1'] + C_j[j]) + np.sum(gammaln(M3b['a_1'] + C[:,j]))
print('M1:', log_marglik_M1)
print('M2:', log_marglik_M2)
print('M3:', log_marglik_M3)
print('M3b:', log_marglik_M3b)
print('Log Odds, M1-M2')
print(log_marglik_M1 - log_marglik_M2)
print(normalize_exp([log_marglik_M1, log_marglik_M2]))
print('Log Odds, M1-M3')
print(log_marglik_M1 - log_marglik_M3)
print(normalize_exp([log_marglik_M1, log_marglik_M3]))
print('Log Odds, M1-M3b')
print(log_marglik_M1 - log_marglik_M3b)
print(normalize_exp([log_marglik_M1, log_marglik_M3b]))
Explanation: Our goal is deciding if the random variables $x$ and $y$ are independent or dependent, given some observations.
|       | $y=1$ | $y=2$ | $y=3$ |
|-------|-------|-------|-------|
| $x=1$ | $3$   | $5$   | $9$   |
| $x=2$ | $7$   | $9$   | $17$  |
Independent model $M_1$
\begin{equation}
p(x, y) = p(x) p(y)
\end{equation}
\begin{align}
\pi_1 & \sim \mathcal{D}(\pi_1; a_1) &
\pi_2 & \sim \mathcal{D}(\pi_2; a_2) \
x^{(n)} & \sim \mathcal{C}(x; \pi_1) &
y^{(n)} & \sim \mathcal{C}(y; \pi_2)
\end{align}
We let
$X_{i+} = \sum_j X_{i,j}$
$X_{+j} = \sum_i X_{i,j}$
The marginal likelihood can be found as
\begin{eqnarray}
\log p(X|M_1) & = & \log{\Gamma(\sum_{i} a_{1}(i))} - {\sum_{i} \log \Gamma(a_{1}(i))} - \log{\Gamma(\sum_{i} (a_{1}(i) + \sum_{j} C(i,j)) )} + {\sum_{i} \log \Gamma(a_{1}(i) + \sum_{j} C(i,j))} \
& & +\log{\Gamma(\sum_{j} a_{2}(j))} - {\sum_{j} \log \Gamma(a_{2}(j))} - \log{\Gamma(\sum_{j} (a_{2}(j) + \sum_{i} C(i,j)) )} + {\sum_{j} \log \Gamma(a_{2}(j) + \sum_{i} C(i,j))} \
& = & \log{\Gamma(A_1)} - \sum_{i} \log \Gamma(a_{1}(i)) - \log{\Gamma(A_1+ N)} + {\sum_{i} \log \Gamma(a_{1}(i) + C_1(i))} \
& & + \log{\Gamma(A_2)} - {\sum_{j} \log \Gamma(a_{2}(j))} - \log{\Gamma(A_2 + N )} + {\sum_{j} \log \Gamma(a_{2}(j) + C_2(j))} \
\end{eqnarray}
Dependent model $M_2$
\begin{equation}
p(x_1, x_2)
\end{equation}
$\pi_{1,2}$ is a $S_1 \times S_2$ matrix where the joint distribution of entries is Dirichlet $\mathcal{D}(\pi_{1,2}; a_{1,2})$ with $S_1 \times S_2$ parameter matrix $a_{1,2}$. Then, the probability that $p(x_1 = i, x_2 = j|\pi_{1,2}) = \pi_{1,2}(i,j)$.
\begin{eqnarray}
\pi_{1,2} & \sim & \mathcal{D}(\pi_{1,2}; a_{1,2}) \
(x_1, x_2)^{(n)} & \sim & \mathcal{C}((x_1,x_2); \pi_{1,2}) \
\end{eqnarray}
\begin{eqnarray}
\log p(X|M_2) & = & \log{\Gamma(A_{1,2})} - {\sum_{i,j} \log \Gamma(a_{1,2}(i,j))} - \log{\Gamma(A_{1,2}+ N)} + {\sum_{i,j} \log \Gamma(a_{1,2}(i,j) + C(i,j))}
\end{eqnarray}
Dependent model $M_3$
\begin{equation}
p(x_1, x_2) = p(x_1) p(x_2|x_1)
\end{equation}
\begin{eqnarray}
\pi_1 & \sim & \mathcal{D}(\pi_1; a_1) \
\pi_{2,1} & \sim & \mathcal{D}(\pi_2; a_2) \
\vdots \
\pi_{2,S_1} & \sim & \mathcal{D}(\pi_2; a_2) \
x_1^{(n)} & \sim & \mathcal{C}(x_1; \pi_1) \
x_2^{(n)} & \sim & \mathcal{C}(x_2; \pi_{2}(x_1^{(n)},:))
\end{eqnarray}
\begin{eqnarray}
\log p(x_1^{(1:N)}|\pi_1) & = & \sum_n \sum_i \sum_j \ind{x_1^{(n)} = i} \ind{x_2^{(n)} = j} \log \pi_{1}(i) = \sum_i \sum_j C(i,j) \log \pi_{1}(i)
\end{eqnarray}
\begin{eqnarray}
\log p(x_2^{(1:N)}|\pi_2, x_1^{(1:N)} ) & = & \sum_n \sum_i \sum_j \ind{x_1^{(n)} = i} \ind{x_2^{(n)} = j} \log \pi_{2}(i,j) = \sum_i \sum_j C(i,j) \log \pi_{2}(i,j)
\end{eqnarray}
\begin{eqnarray}
\log p(\pi_1) & = & \log{\Gamma(\sum_{i} a_{1}(i))} - {\sum_{i} \log \Gamma(a_{1}(i))} + \sum_{{i}=1}^{S_1} (a_{1}(i) - 1) \log{\pi_1}(i)
\end{eqnarray}
\begin{eqnarray}
\log p(\pi_2) & = & \sum_i \left(\log{\Gamma(\sum_{j} a_{2}(i,j))} - {\sum_{j} \log \Gamma(a_{2}(i,j))} + \sum_{{j}=1}^{S_2} (a_{2}(i,j) - 1) \log{\pi_2}(i,j) \right)
\end{eqnarray}
The joint distribution is
\begin{eqnarray}
\log p(X, \pi| M_2)&= & \log{\Gamma(\sum_{i} a_{1}(i))} - {\sum_{i} \log \Gamma(a_{1}(i))} + \sum_{{i}=1}^{S_1} (a_{1}(i) + C_1(i) - 1) \log{\pi_1}(i) \
& & + \sum_i \left(\log{\Gamma(\sum_{j} a_{2}(i,j))} - {\sum_{j} \log \Gamma(a_{2}(i,j))} + \sum_{{j}=1}^{S_2} (a_{2}(i,j) + C(i,j) - 1) \log{\pi_2}(i,j) \right)
\end{eqnarray}
We will assume $a_2(i,j) = a_2(i',j)$ for all $i$ and $i'$.
\begin{eqnarray}
\log p(X| M_2) & = & \log{\Gamma(\sum_{i} a_{1}(i))}
- {\sum_{i} \log \Gamma(a_{1}(i))}
- \log{\Gamma(\sum_{i} a_{1}(i) + C_1(i))}
+ \sum_{i} \log \Gamma(a_{1}(i) + C_1(i)) \
& & + \sum_i \left( \log\Gamma(\sum_{j} a_{2}(i,j)) - \sum_{j} \log \Gamma(a_{2}(i,j)) - \log\Gamma( \sum_{j} a_{2}(i,j) + C(i,j)) + \sum_j \log\Gamma( a_{2}(i,j) + C(i,j) ) \right) \
& = & \log{\Gamma(A_1)} - {\sum_{i} \log \Gamma(a_{1}(i))} - \log{\Gamma(A_1+ N)} + {\sum_{i} \log \Gamma(a_{1}(i) + C_1(i))} \
& & + \sum_i \left( \log{\Gamma(A_2)} - {\sum_{j} \log \Gamma(a_{2}(j))} - \log{\Gamma(A_2 + C_1(i) )} + {\sum_{j} \log \Gamma(a_{2}(j) + C(i,j))} \right)
\end{eqnarray}
Dependent model $M_3b$
The derivation is similar and corresponds to the factorization:
\begin{equation}
p(x_1, x_2) = p(x_2) p(x_1|x_2)
\end{equation}
End of explanation
S_1 = 2
S_2 = 10
M = S_1*S_2
a = np.ones(M)
N = 100
P = np.random.dirichlet(a, size=N)
A = np.zeros((N,S_1))
B = np.zeros((N*S_1,S_2))
for n in range(N):
temp = P[n,:].reshape((S_1,S_2))
A[n,:] = np.sum(temp, axis=1)
for i in range(S_1):
B[(n*S_1+i),:] = temp[i,:]/A[n,i]
import pylab as plt
plt.hist(A[:,0],bins=20)
plt.gca().set_xlim([0,1])
#plt.plot(B[:,0],B[:,1],'.')
plt.show()
P2 = np.random.dirichlet(S_2*np.ones(S_1), size=N)
plt.hist(P2[:,0],bins=20)
plt.gca().set_xlim([0,1])
plt.show()
import numpy as np
from notes_utilities import randgen, log_sum_exp, normalize_exp, normalize
import scipy as sc
from scipy.special import gammaln
# Log of multivariate Beta function
def log_mbeta(a):
return sum(gammaln(a.flatten())) - gammaln(sum(a.flatten()))
a = 1
b = 1
S = np.array([[2,1],[0,1]])
#S = np.array([[2,1,0],[1,0,1]])
S_ip = np.sum(S, axis=1)
S_pj = np.sum(S, axis=0)
S_pp = np.sum(S)
gamma = 1
alpha = gamma*np.ones_like(S)
alpha_ip = np.sum(alpha, axis=1)
alpha_pj = np.sum(alpha, axis=0)
alpha_pp = np.sum(alpha)
#
log_const = a*np.log(b) - (a+S_pp)*np.log(b+1) + gammaln(a + S_pp) - gammaln(a) - np.sum(gammaln(S+1))
print(log_const)
# inconsistent independent
alpha_wrong_ip = gamma*np.ones_like(alpha_ip)
alpha_wrong_pj = gamma*np.ones_like(alpha_pj)
# i j
marglik_1_wrong = log_mbeta(S_ip+alpha_wrong_ip) - log_mbeta(alpha_wrong_ip)
marglik_1_wrong += log_mbeta(S_pj+alpha_wrong_pj) - log_mbeta(alpha_wrong_pj)
# i -> j
marglik_2_wrong = log_mbeta(S_ip+alpha_wrong_ip) - log_mbeta(alpha_wrong_ip)
for i in range(S.shape[0]):
marglik_2_wrong += log_mbeta(S[i,:]+alpha[i,:]) - log_mbeta(alpha[i,:])
# i <- j
marglik_3_wrong = log_mbeta(S_pj+alpha_wrong_pj) - log_mbeta(alpha_wrong_pj)
for j in range(S.shape[1]):
marglik_3_wrong += log_mbeta(S[:,j]+alpha[:,j]) - log_mbeta(alpha[:,j])
# independent
marglik_1 = log_mbeta(S_ip+alpha_ip) - log_mbeta(alpha_ip)
marglik_1 += log_mbeta(S_pj+alpha_pj) - log_mbeta(alpha_pj)
# i -> j
marglik_2 = log_mbeta(S_ip+alpha_ip) - log_mbeta(alpha_ip)
for i in range(S.shape[0]):
marglik_2 += log_mbeta(S[i,:]+alpha[i,:]) - log_mbeta(alpha[i,:])
# i <- j
marglik_3 = log_mbeta(S_pj+alpha_pj) - log_mbeta(alpha_pj)
for j in range(S.shape[1]):
marglik_3 += log_mbeta(S[:,j]+alpha[:,j]) - log_mbeta(alpha[:,j])
# dependent
marglik_4 = log_mbeta(S+alpha) - log_mbeta(alpha)
#
print('model, consistent, inconsistent')
print('i j', marglik_1+log_const, marglik_1_wrong+log_const)
print('i->j', marglik_2+log_const, marglik_2_wrong+log_const)
print('i<-j', marglik_3+log_const, marglik_3_wrong+log_const)
print('i--j', marglik_4+log_const)
print(S)
Explanation: Conceptually $M_2$, $M_3$ and $M_3b$ should have the same marginal likelihood score, as the dependence should not depend on how we parametrize the conditional probability tables. However, this is dependent on the choice of the prior parameters.
How should the prior parameters of $M_2$, $M_3$ and $M_3b$ be chosen such that we get the same evidence score?
The models
$M_2$, $M_3$ and $M_3b$ are all equivalent if the prior parameters are chosen appropriately. For $M_2$ and $M_3$, we need to take $a_1(i) = \sum_j a_{1,2}(i,j)$.
For example, if in $M_2$, the prior parameters $a_{1,2}$ are chosen as
\begin{eqnarray}
a_{1,2} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
we need to choose in model $M_3$
\begin{eqnarray}
a_{1} & = & \left(\begin{array}{c} 3 \ 3 \end{array} \right)
\end{eqnarray}
\begin{eqnarray}
a_{2} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
and in model $M_3b$
\begin{eqnarray}
a_{2} & = & \left(\begin{array}{ccc} 2 & 2 & 2 \end{array} \right)
\end{eqnarray}
\begin{eqnarray}
a_{1} & = & \left(\begin{array}{ccc} 1 & 1 & 1\ 1 & 1 & 1 \end{array} \right)
\end{eqnarray}
This is due to the fact that the marginals of a Dirichlet distribution are also Dirichlet. In particular,
if a probability vector $x$ and corresponding parameter vector $a$ are partitioned as $x = (x_\iota, x_{-\iota})$
and $a = (a_\iota, a_{-\iota})$, the Dirichlet distribution
$$
\mathcal{D}(x_\iota, x_{-\iota}; a_\iota, a_{-\iota})
$$
has marginals
$$
\mathcal{D}(X_{\iota}, X_{-\iota}; A_{\iota}, A_{-\iota})
$$
where $X_\iota = \sum_{i \in \iota} x_i$ and $X_{-\iota} = \sum_{i \in -\iota} x_i$, where $A_\iota = \sum_{i \in \iota} a_i$ and $A_{-\iota} = \sum_{i \in -\iota} a_i$. The script below verifies that the marginals are indeed distributed according to this formula.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
from functools import reduce
from scipy.special import gammaln
corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
triangle = tri.Triangulation(corners[:, 0], corners[:, 1])
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=4)
# Mid-points of triangle sides opposite of each corner
midpoints = [(corners[(i + 1) % 3] + corners[(i + 2) % 3]) / 2.0 \
for i in range(3)]
def xy2bc(xy, tol=1.e-3):
'''Converts 2D Cartesian coordinates to barycentric.'''
s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 \
for i in range(3)]
return np.clip(s, tol, 1.0 - tol)
class Dirichlet(object):
def __init__(self, alpha):
self._alpha = np.array(alpha)
self._coef = gammaln(np.sum(self._alpha)) - np.sum(gammaln(self._alpha))
def log_pdf(self, x):
return self._coef + np.sum(np.log(x)*(self._alpha - 1))
def pdf(self, x):
'''Returns pdf value for `x`.'''
return np.exp(self.log_pdf(x))
def draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs):
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=subdiv)
pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]
plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
plt.axis('equal')
plt.xlim(0, 1)
plt.ylim(0, 0.75**0.5)
plt.axis('off')
draw_pdf_contours(Dirichlet([1, 3, 1]))
draw_pdf_contours(Dirichlet([1.99, 3.99, 10.99]))
Explanation: Question
Are two given histograms drawn from the same distribution?
$[3,5,12,4]$
$[8, 14, 31, 14]$
Visualizing the Dirichlet Distribution
[http://blog.bogatron.net/blog/2014/02/02/visualizing-dirichlet-distributions/]
End of explanation
# Parameters
P = 100
R = 10
# Number of datapoints
N = 5
mu = np.random.normal(0, np.sqrt(P))
x = np.random.normal(mu, np.sqrt(R), size=(N))
plt.figure(figsize=(10,1))
plt.plot(mu, 0, 'r.')
plt.plot(x, np.zeros_like(x), 'x')
ax = plt.gca()
ax.set_xlim(3*np.sqrt(P)*np.array([-1,1]))
ax.set_ylim([-0.1,0.1])
ax.axis('off')
plt.show()
Explanation: 6-faced die with repeated labels
Consider a die where the numbers on each face are labeled, possibly with repetitions, from the set $1\dots 6$.
A 'normal' die has labels $1,2,3,4,5,6$ but we allow other labelings, for example as $1,1,3,5,5,5$ or $1,1,1,1,1,6$.
Can we construct a method to find how the die has been labeled from a sequence of outcomes?
Does my data have a single cluster or are there two clusters?
We observe a dataset of $N$ points and want to decide if there are one or two clusters. For example, the below dataset, when visualized, seems to suggest two clusters; however, the separation is not very clear; perhaps a single component might also have been sufficient. How can we derive a procedure that leads to a reasonable answer in this ambiguous situation?
<img src="clusters.png" width='180' align='center'>
One principled approach is based on Bayesian model selection. Our approach will be describing two alternative generative models for the data. Each generative model will reflect our assumption about what it means to have clusters. In other words, we should describe two different procedures: how to generate a dataset with a single cluster, or a dataset that has two clusters. Once we have a description of each generative procedure, we may hope to convert our qualitative question (how many clusters?) into a well-defined computational procedure.
Each generative procedure will be a different probability model and we will compute the marginal posterior distribution conditioned on an observed dataset.
The single cluster model will have a single cluster center, denoed as $\mu$. Once $\mu$ is generated, each observation is generated by a Gaussian distribution with variance $R$, centered around $\mu$.
Model $M =1$: Single Cluster
\begin{eqnarray}
\mu & \sim & {\mathcal N}(\mu; 0, P) \
x_i | \mu & \sim & {\mathcal N}(x; \mu, R)
\end{eqnarray}
The parameter $P$ denotes a natural range for the mean, and $R$ denotes the variance of the data, i.e., the amount of spread around the mean. To start simple, we will assume that these parameters are known; we will be able to relax this assumption easily later.
Below, we show an example where each data point is a scalar.
End of explanation
# Parameters
P = 100
R = 2
# Number of datapoints
N = 10
# Number of clusters
M = 2
mu = np.random.normal(0, np.sqrt(P), size=(M))
c = np.random.binomial(1, 0.5, size=N)
x = np.zeros(N)
for i in range(N):
x[i] = np.random.normal(mu[c[i]], np.sqrt(R))
plt.figure(figsize=(10,1))
#plt.plot(mu, np.zeros_like(mu), 'r.')
plt.plot(x, np.zeros_like(x), 'x')
ax = plt.gca()
ax.set_xlim(3*np.sqrt(P)*np.array([-1,1]))
ax.set_ylim([-0.1,0.1])
ax.axis('off')
plt.show()
Explanation: Model $M = 2$: Two Clusters
\begin{eqnarray}
\mu_0 & \sim & {\mathcal{N}}(\mu; 0, P) \
\mu_1 & \sim & {\mathcal{N}}(\mu; 0, P) \
c_i & \sim & {\mathcal{BE}}(c; 0.5) \
x_i | \mu_0, \mu_1, c_i & \sim & {\mathcal N}(x; \mu_0, R)^{1-c_i} {\mathcal N}(x; \mu_1, R)^{c_i}
\end{eqnarray}
The parameter $P$ denotes a natural range for both means, and $R$ denotes the variance of the data in each cluster. The variables $c_i$ are the indicators that show the assignment of each datapoint to one of the clusters.
End of explanation
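Since $P$ and $R$ are assumed known, both marginal likelihoods are available exactly: under $M=1$ the data are jointly Gaussian with covariance $R I + P \mathbf{1}\mathbf{1}^\top$, and under $M=2$ we can sum this quantity over all $2^N$ label configurations. The code below is a sketch that is not part of the original notebook; it reuses x, P and R from the cell above and assumes equal prior probabilities for the two models.
import numpy as np
from itertools import product
from scipy.stats import multivariate_normal
from scipy.special import logsumexp
def log_evidence_one_cluster(x, P, R):
    # integrate out mu: x ~ N(0, R*I + P*11^T)
    n = len(x)
    if n == 0:
        return 0.0
    cov = R*np.eye(n) + P*np.ones((n, n))
    return multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov)
def log_evidence_two_clusters(x, P, R):
    # sum over all 2^N assignments c, each with prior probability (1/2)^N
    n = len(x)
    terms = [-n*np.log(2)
             + log_evidence_one_cluster(x[np.array(c) == 0], P, R)
             + log_evidence_one_cluster(x[np.array(c) == 1], P, R)
             for c in product([0, 1], repeat=n)]
    return logsumexp(terms)
l1 = log_evidence_one_cluster(x, P, R)
l2 = log_evidence_two_clusters(x, P, R)
print(l1, l2, 1.0/(1.0 + np.exp(l2 - l1)))   # posterior probability of the single-cluster model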
# %load template_equations.py
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
from importlib import reload
reload(nut)
Latex('$\DeclareMathOperator{\trace}{Tr}$')
#L = nut.pdf2latex_gauss(x=r's', m=r'\mu',v=r'v')
#L = nut.pdf2latex_mvnormal(x=r's', m=r'\mu',v=r'\Sigma')
L = nut.pdf2latex_mvnormal(x=r'x_n', m=0,v=r'\Sigma')
#L = nut.pdf2latex_gamma(x=r'x', a=r'a',b=r'b')
#L = nut.pdf2latex_invgamma(x=r'x', a=r'a',b=r'b')
#L = nut.pdf2latex_beta(x=r'\pi', a=r'\alpha',b=r'\beta')
eq = L[0]+'='+L[1]+'='+L[2]
display(Math(eq))
display(Latex(eq))
display(HTML(nut.eqs2html_table(L)))
Explanation: Extension
We dont know the component variances
Combining into a single model
Capture
Recapture
Change point
Coin switch
Coal Mining Data
Single Change Point
Multiple Change Point
Bivariate Gaussian model selection
Suppose we are given a dataset $X = {x_1, x_2, \dots, x_N }$ where $x_n \in \mathbb{R}^K$ for $n=1 \dots N$ and consider two competing models:
Model $m=1$:
$\newcommand{\diag}{\text{diag}}$
Observation
\begin{eqnarray}
\text{for}\; n=1\dots N&& \
x_n| s_{1:K} & \sim & \mathcal{N}\left(x; 0, \diag{s_1, \dots, s_K}\right) = \mathcal{N}\left(x; 0, \left(\begin{array}{ccc} s_1 & 0 & 0\0 & \ddots & 0 \ 0 & \dots & s_K \end{array} \right) \right)
\end{eqnarray}
$$
p(x_n| s_{1:K}) =
\prod_{k=1}^K \mathcal{N}(x_{k,n}; 0, s_k)
$$
$$
p(X|s_{1:K} ) = \prod_{n=1}^N p(x_n| s_{1:K}) =
\prod_{n=1}^N \prod_{k=1}^K \mathcal{N}(x_{k,n}; 0, s_k)
$$
Prior
\begin{eqnarray}
\text{for}\; k=1\dots K&& \
s_k & \sim & \mathcal{IG}(s_k; \alpha, \beta)
\end{eqnarray}
Model $m=2$:
-- Observation
\begin{eqnarray}
x_n \sim \mathcal{N}(x_n; 0, \Sigma)=\left|{ 2\pi \Sigma } \right|^{-1/2} \exp\left(-\frac12 {x_n}^\top {\Sigma}^{-1} {x_n} \right)=\exp\left( -\frac{1}{2}\trace {\Sigma}^{-1} {x_n}{x_n}^\top -\frac{1}{2}\log \left|2{\pi}\Sigma\right|\right)
\end{eqnarray}
$$
{\cal IW}(\Sigma; 2a, 2B) = \exp( - (a + (k+1)/2) \log |\Sigma| - \trace B\Sigma^{-1} - \log\Gamma_k(a) + a\log |B|) \
$$
End of explanation
# %load template_equations.py
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
from importlib import reload
reload(nut)
Latex('$\DeclareMathOperator{\trace}{Tr}$')
#L = nut.pdf2latex_gauss(x=r's', m=r'\mu',v=r'v')
L = nut.pdf2latex_mvnormal(x=r'x_t', m=r'(Ax_{t-1})',v=r'Q')
#L = nut.pdf2latex_mvnormal(x=r's', m=0,v=r'I')
#L = nut.pdf2latex_gamma(x=r'x', a=r'a',b=r'b')
#L = nut.pdf2latex_invgamma(x=r'x', a=r'a',b=r'b')
#L = nut.pdf2latex_beta(x=r'\pi', a=r'\alpha',b=r'\beta')
eq = L[0]+'='+L[1]+'='+L[2]
display(Math(eq))
L = nut.pdf2latex_mvnormal(x=r'y_t', m=r'(Cx_{t})',v=r'R')
eq = L[0]+'='+L[1]+'='+L[2]
display(Math(eq))
%connect_info
Explanation: Computing the marginal likelihood
Model 1
\begin{eqnarray}
p(X| m=1) & = & \int d{s_{1:K}} p(X|s_{1:K}) p(s_{1:K}) =
\end{eqnarray}
End of explanation |
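Because the likelihood factorises over the $K$ dimensions and the inverse gamma prior is conjugate to the Gaussian variance, each factor of this integral has a closed form, $\int \prod_n \mathcal{N}(x_{k,n}; 0, s_k)\, \mathcal{IG}(s_k; \alpha, \beta)\, ds_k = (2\pi)^{-N/2} \beta^\alpha \Gamma(\alpha + N/2) / \left(\Gamma(\alpha)\,(\beta + \tfrac{1}{2}\sum_n x_{k,n}^2)^{\alpha + N/2}\right)$. The snippet below is a sketch that is not part of the original notebook; it assumes the shape-scale parameterisation of the inverse gamma and stores the data as a $K \times N$ array.
import numpy as np
from scipy.special import gammaln
def log_marglik_m1(X, alpha, beta):
    # X: K x N data matrix; s_k ~ IG(alpha, beta) independently for each dimension k
    K, N = X.shape
    ss = 0.5*np.sum(X**2, axis=1)                     # half the sum of squares per dimension
    per_dim = (-0.5*N*np.log(2.0*np.pi) + alpha*np.log(beta) - gammaln(alpha)
               + gammaln(alpha + 0.5*N) - (alpha + 0.5*N)*np.log(beta + ss))
    return per_dim.sum()
X = np.random.randn(2, 50)                            # hypothetical dataset with K = 2, N = 50
print(log_marglik_m1(X, alpha=2.0, beta=2.0))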
12,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Serving Design Pattern
This notebook demonstrates the Batch Serving design pattern using BigQuery
Simple text classification model
Let's use the same model that was used in serving_function.ipynb -- a text classification model based on IMDB reviews, trained and exported.
Step1: Load model into BigQuery for batch serving
Step2: Now, do it at scale, on consumer complaints about financial products and services | Python Code:
!find export/probs/
%%bash
LOCAL_DIR=$(find export/probs | head -2 | tail -1)
BUCKET=ai-analytics-solutions-kfpdemo
gsutil rm -rf gs://${BUCKET}/mlpatterns/batchserving
gsutil cp -r $LOCAL_DIR gs://${BUCKET}/mlpatterns/batchserving
gsutil ls gs://${BUCKET}/mlpatterns/batchserving
Explanation: Batch Serving Design Pattern
This notebook demonstrates the Batch Serving design pattern using BigQuery
Simple text classification model
Let's use the same model that was used in serving_function.ipynb -- a text classification model based on IMDB reviews, trained and exported.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL mlpatterns.imdb_sentiment
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/mlpatterns/batchserving/*')
%%bigquery
SELECT * FROM ML.PREDICT(MODEL mlpatterns.imdb_sentiment,
(SELECT 'This was very well done.' AS reviews)
)
Explanation: Load model into BigQuery for batch serving
End of explanation
%%bigquery preds
SELECT * FROM ML.PREDICT(MODEL mlpatterns.imdb_sentiment,
(SELECT consumer_complaint_narrative AS reviews
FROM `bigquery-public-data`.cfpb_complaints.complaint_database
WHERE consumer_complaint_narrative IS NOT NULL
)
)
preds[:3]
# what does a "positive" complaint look like?
preds.sort_values(by='positive_review_probability', ascending=False).iloc[1]['reviews']
# what does a "typical" complaint look like?
preds.sort_values(by='positive_review_probability', ascending=False).iloc[len(preds)//2]['reviews']
Explanation: Now, do it at scale, on consumer complaints about financial products and services:
End of explanation |
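For a true batch-serving workflow the predictions would typically be written back to a table rather than pulled into the notebook. A sketch of that step (not part of the original notebook; the destination table name is made up) could look like this:
%%bigquery
CREATE OR REPLACE TABLE mlpatterns.imdb_sentiment_scores AS
SELECT * FROM ML.PREDICT(MODEL mlpatterns.imdb_sentiment,
    (SELECT consumer_complaint_narrative AS reviews
     FROM `bigquery-public-data`.cfpb_complaints.complaint_database
     WHERE consumer_complaint_narrative IS NOT NULL
    )
)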
12,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2 - Logistic Regression (LR) with MNIST
This lab corresponds to Module 2 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data).
In this lab we will build and train a Multiclass Logistic Regression model using the MNIST data.
Introduction
Problem
Step1: Goal
Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step3: Initialization
Step4: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in distributed readers that can scale to terabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form
Step5: Model Creation
A multiclass logistic regression (LR) network is a simple building block that has been effectively powering many ML
applications in the past decade. The figure below summarizes the model in the context of the MNIST data.
LR is a simple linear model that takes as input, a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf \vec{x}$, the pixels in the input MNIST digit image) and emits the evidence ($z$). For each of the 10 digits, there is a vector of weights corresponding to the input pixels as show in the figure. These 10 weight vectors define the weight matrix ($\bf {W}$) with dimension of 10 x 784. Each feature in the input layer is connected with a summation node by a corresponding weight $w$ (individual weight values from the $\bf{W}$ matrix). Note there are 10 such nodes, 1 corresponding to each digit to be classified.
The first step is to compute the evidence for an observation.
$$\vec{z} = \textbf{W} \bf \vec{x}^T + \vec{b}$$
where $\bf{W}$ is the weight matrix of dimension 10 x 784 and $\vec{b}$ is known as the bias vector with length 10, one for each digit.
The evidence ($\vec{z}$) is not squashed (hence no activation). Instead, the output is normalized using a softmax function such that all the outputs add up to a value of 1, thus lending a probabilistic interpretation to the prediction. In CNTK, we use the softmax operation combined with the cross entropy error as our Loss Function for training.
Step6: Network input and output
Step7: Logistic Regression network setup
The CNTK Layers module provides a Dense function that creates a fully connected layer which performs the above operations of weighted input summing and bias addition.
Step8: z will be used to represent the output of a network.
Step9: Training
Below, we define the Loss function, which is used to guide weight changes during training.
As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions).
We minimize the cross-entropy between the label and predicted probability by the network.
Step10: Evaluation
Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing.
For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0").
Step11: Configure training
The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being one of the most popular. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label and the corresponding ground-truth label and, using gradient descent, generate a new set of model parameters in a single iteration.
The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observation) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch.
With minibatches, we sample observations from the larger training dataset. We repeat the process of model parameters update using different combination of training samples and over a period of time minimize the loss (and the error metric). When the incremental error rates are no longer changing significantly or after a preset number of maximum minibatches to train, we claim that our model is trained.
One of the key optimization parameters is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration.
With this information, we are ready to create our trainer.
Step12: First let us create some helper functions that will be needed to visualize different functions associated with training.
Step13: <a id='#Run the trainer'></a>
Run the trainer
We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine.
In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple feed forward network.
Step14: Let us plot the errors over the different training minibatches. Note that as we progress in our training, the loss decreases though we do see some intermediate bumps.
Step15: Evaluation / Testing
Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch.
Step16: We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
Step17: Let us test a small minibatch sample from the test data.
Step18: As you can see above, our model is not yet perfect.
Let us visualize one of the test images and its associated label. Do they match? | Python Code:
# Figure 1
Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200)
Explanation: Lab 2 - Logistic Regression (LR) with MNIST
This lab corresponds to Module 2 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data).
In this lab we will build and train a Multiclass Logistic Regression model using the MNIST data.
Introduction
Problem:
Optical Character Recognition (OCR) is a hot research area and there is a great demand for automation. The MNIST data comprises hand-written digits with little background noise, making it a nice dataset for creating and experimenting with deep learning models using reasonably small computing resources.
End of explanation
# Import the relevant components
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import cntk as C
%matplotlib inline
Explanation: Goal:
Our goal is to train a classifier that will identify the digits in the MNIST dataset.
Approach:
There are 4 stages in this lab:
- Data reading: We will use the CNTK Text reader.
- Data preprocessing: Covered in part A (suggested extension section).
- Model creation: Multiclass Logistic Regression model.
- Train-Test-Predict: This is the same workflow introduced in the lectures
Logistic Regression
Logistic Regression (LR) is a fundamental machine learning technique that uses a linear weighted combination of features and generates probability-based predictions of different classes.
There are two basic forms of LR: Binary LR (with a single output that can predict two classes) and multiclass LR (with multiple outputs, each of which is used to predict a single class).
In Binary Logistic Regression (see top of figure above), the input features are each scaled by an associated weight and summed together. The sum is passed through a squashing (aka activation) function and generates an output in [0,1]. This output value is then compared with a threshold (such as 0.5) to produce a binary label (0 or 1), predicting 1 of 2 classes. This technique supports only classification problems with two output classes, hence the name binary LR. In the binary LR example shown above, the sigmoid function is used as the squashing function.
In Multiclass Linear Regression (see bottom of figure above), 2 or more output nodes are used, one for each output class to be predicted. Each summation node uses its own set of weights to scale the input features and sum them together. Instead of passing the summed output of the weighted input features through a sigmoid squashing function, the output is often passed through a softmax function (which in addition to squashing, like the sigmoid, the softmax normalizes each nodes' output value using the sum of all unnormalized nodes). (Details in the context of MNIST image to follow)
We will use multiclass LR for classifying the MNIST digits (0-9) using 10 output nodes (1 for each of our output classes). In our approach, we will move the softmax function out of the model and into our Loss function used in training (details to follow).
End of explanation
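As a tiny illustration of the binary form described above (not part of the lab; the weights below are random placeholders), a single prediction is just a weighted sum pushed through a sigmoid and then thresholded:
import numpy as np
w, b = np.random.randn(5), 0.0                 # hypothetical weights for 5 input features
x = np.random.rand(5)
p = 1.0 / (1.0 + np.exp(-(w.dot(x) + b)))      # sigmoid squashing to [0, 1]
predicted_class = int(p > 0.5)                 # threshold at 0.5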
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this lab is designed to work with 2.0. Current Version: " + C.__version__)
Explanation: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
End of explanation
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
# Define the data dimensions
input_dim = 784
num_output_classes = 10
Explanation: Initialization
End of explanation
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
labelStream = C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False)
featureStream = C.io.StreamDef(field='features', shape=input_dim, is_sparse=False)
deserailizer = C.io.CTFDeserializer(path, C.io.StreamDefs(labels = labelStream, features = featureStream))
return C.io.MinibatchSource(deserailizer,
randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1)
# Ensure the training and test data is generated and available for this lab.
# We search in two locations in the toolkit for the cached MNIST data set.
data_found = False
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt")
test_file = os.path.join(data_dir, "Test-28x28_cntk_text.txt")
if os.path.isfile(train_file) and os.path.isfile(test_file):
data_found = True
break
if not data_found:
raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader")
print("Data directory is {0}".format(data_dir))
Explanation: Data reading
There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amounts of data, we have chosen in this course to show how to leverage built-in distributed readers that can scale to terabytes of data with little extra effort.
We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset.
In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form:
|labels 0 0 0 1 0 0 0 0 0 0 |features 0 0 0 0 ...
(784 integers each representing a pixel)
We are going to use the image pixels corresponding the integer stream named "features". We define a create_reader function to read the training and test data using the CTF deserializer. The labels are 1-hot encoded. Refer to Lab 1 for data format visualizations.
End of explanation
print(input_dim)
print(num_output_classes)
Explanation: Model Creation
A multiclass logistic regression (LR) network is a simple building block that has been effectively powering many ML
applications in the past decade. The figure below summarizes the model in the context of the MNIST data.
LR is a simple linear model that takes as input, a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf \vec{x}$, the pixels in the input MNIST digit image) and emits the evidence ($z$). For each of the 10 digits, there is a vector of weights corresponding to the input pixels as show in the figure. These 10 weight vectors define the weight matrix ($\bf {W}$) with dimension of 10 x 784. Each feature in the input layer is connected with a summation node by a corresponding weight $w$ (individual weight values from the $\bf{W}$ matrix). Note there are 10 such nodes, 1 corresponding to each digit to be classified.
The first step is to compute the evidence for an observation.
$$\vec{z} = \textbf{W} \bf \vec{x}^T + \vec{b}$$
where $\bf{W}$ is the weight matrix of dimension 10 x 784 and $\vec{b}$ is known as the bias vector with length 10, one for each digit.
The evidence ($\vec{z}$) is not squashed (hence no activation). Instead, the output is normalized using a softmax function such that all the outputs add up to a value of 1, thus lending a probabilistic interpretation to the prediction. In CNTK, we use the softmax operation combined with the cross entropy error as our Loss Function for training.
End of explanation
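The same computation in plain NumPy terms (an illustration only, not part of the lab; the weights are random placeholders) makes the evidence-plus-softmax pipeline explicit:
import numpy as np
W = 0.01*np.random.randn(10, 784)              # 10 classes x 784 pixels (placeholder weights)
b = np.zeros(10)
x = np.random.rand(784)                        # stand-in for one scaled MNIST image
z = W.dot(x) + b                               # evidence, one value per digit
p = np.exp(z - z.max()); p /= p.sum()          # softmax: probabilities that sum to 1
print(p.sum(), p.argmax())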
input = C.input_variable(input_dim)
label = C.input_variable(num_output_classes)
Explanation: Network input and output:
- input variable (a key CNTK concept):
An input variable is a container in which we fill different observations, in this case image pixels, during model learning (a.k.a.training) and model evaluation (a.k.a. testing). Thus, the shape of the input must match the shape of the data that will be provided. For example, when data are images each of height 10 pixels and width 5 pixels, the input feature dimension will be 50 (representing the total number of image pixels).
Knowledge Check: What is the input dimension of your chosen model? This is fundamental to our understanding of variables in a network or model representation in CNTK.
End of explanation
def create_model(features):
with C.layers.default_options(init = C.glorot_uniform()):
r = C.layers.Dense(num_output_classes, activation = None)(features)
#r = C.layers.Dense(num_output_classes, activation = None)(C.ops.splice(C.ops.sqrt(features), features, C.ops.square(features)))
return r
Explanation: Logistic Regression network setup
The CNTK Layers module provides a Dense function that creates a fully connected layer which performs the above operations of weighted input summing and bias addition.
End of explanation
# Scale the input to 0-1 range by dividing each pixel by 255.
input_s = input/255
z = create_model(input_s)
print(input_s)
print(input)
Explanation: z will be used to represent the output of a network.
End of explanation
loss = C.cross_entropy_with_softmax(z, label)
loss
Explanation: Training
Below, we define the Loss function, which is used to guide weight changes during training.
As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions).
We minimize the cross-entropy between the label and predicted probability by the network.
End of explanation
label_error = C.classification_error(z, label)
Explanation: Evaluation
Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing.
For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0").
End of explanation
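In NumPy terms the metric is simply the fraction of samples whose most probable predicted class differs from the true class. A sketch (not part of the lab):
import numpy as np
def classification_error_np(pred_probs, onehot_labels):
    # average error over the samples: 1 for a mismatch, 0 for a match
    return np.mean(np.argmax(pred_probs, axis=1) != np.argmax(onehot_labels, axis=1))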
# Instantiate the trainer object to drive the model training
learning_rate = 0.1 # 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
Explanation: Configure training
The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being one of the most popular. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label and the corresponding ground-truth label and, using gradient descent, generate a new set of model parameters in a single iteration.
The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observation) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch.
With minibatches, we sample observations from the larger training dataset. We repeat the process of model parameters update using different combination of training samples and over a period of time minimize the loss (and the error metric). When the incremental error rates are no longer changing significantly or after a preset number of maximum minibatches to train, we claim that our model is trained.
One of the key optimization parameters is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration.
With this information, we are ready to create our trainer.
End of explanation
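For intuition, a single minibatch SGD update for this model can be written out in a few lines of NumPy (a schematic sketch, not what CNTK does internally; W, b, lr and the minibatch arrays are placeholders):
import numpy as np
def sgd_step(W, b, X_mb, Y_mb, lr):
    # X_mb: minibatch of inputs (m x 784), Y_mb: one-hot labels (m x 10)
    Z = X_mb.dot(W.T) + b                          # evidence
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)              # softmax probabilities
    G = (P - Y_mb) / len(X_mb)                     # gradient of the average cross entropy w.r.t. Z
    return W - lr*G.T.dot(X_mb), b - lr*G.sum(axis=0)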
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=5):
if len(a) < w:
return a[:] # Need to send a copy of the array
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss = "NA"
eval_error = "NA"
if mb%frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100))
return mb, training_loss, eval_error
Explanation: First let us create some helper functions that will be needed to visualize different functions associated with training.
End of explanation
# Initialize the parameters for the trainer
minibatch_size = 64
num_samples_per_sweep = 60000
num_sweeps_to_train_with = 10
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
num_minibatches_to_train
# Create the reader to training data set
reader_train = create_reader(train_file, True, input_dim, num_output_classes)
# Map the data streams to the input and labels.
input_map = {
label : reader_train.streams.labels,
input : reader_train.streams.features
}
# Run the trainer on and perform model training
training_progress_output_freq = 500
plotdata = {"batchsize":[], "loss":[], "error":[]}
import time
start = time.clock()
for i in range(0, int(num_minibatches_to_train)):
# Read a mini batch from the training data file
data = reader_train.next_minibatch(minibatch_size, input_map = input_map)
trainer.train_minibatch(data)
batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
if not (loss == "NA" or error =="NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
elapsed = (time.clock() - start)
print("Time used:",elapsed)
Explanation: <a id='#Run the trainer'></a>
Run the trainer
We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine.
In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple feed forward network.
End of explanation
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
# Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error')
plt.show()
Explanation: Let us plot the errors over the different training minibatches. Note that as we progress in our training, the loss decreases though we do see some intermediate bumps.
End of explanation
# Read the training data
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
test_input_map = {
label : reader_test.streams.labels,
input : reader_test.streams.features,
}
# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_samples // test_minibatch_size
test_result = 0.0
for i in range(num_minibatches_to_test):
# We are loading test data in batches specified by test_minibatch_size
# Each data point in the minibatch is a MNIST digit image of 784 dimensions
# with one pixel per dimension that we will encode / decode with the
# trained model.
data = reader_test.next_minibatch(test_minibatch_size,
input_map = test_input_map)
eval_error = trainer.test_minibatch(data)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test))
Explanation: Evaluation / Testing
Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch.
End of explanation
out = C.softmax(z)
Explanation: We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes.
End of explanation
# Read the data for evaluation
reader_eval = create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {input: reader_eval.streams.features}
data = reader_test.next_minibatch(eval_minibatch_size, input_map = test_input_map)
img_label = data[label].asarray()
img_data = data[input].asarray()
predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))]
# Find the index with the maximum value for both predicted as well as the ground truth
pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))]
gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))]
print("Label :", gtlabel[:25])
print("Predicted:", pred)
Explanation: Let us test a small minibatch sample from the test data.
End of explanation
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r")
plt.axis('off')
img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_pred)
Explanation: As you can see above, our model is not yet perfect.
Let us visualize one of the test images and its associated label. Do they match?
End of explanation |
12,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cython
The Cython language is a superset of the Python language that additionally
supports calling C functions and declaring C types on variables and class
attributes.
This allows the compiler to generate very efficient C code from Cython code.
Write Python code that calls back and forth from and to C or C++ code natively at any point.
Easily tune readable Python code into plain C performance by adding static type declarations.
Use combined source code level debugging to find bugs in your Python, Cython and C code.
Interact efficiently with large data sets, e.g. using multi-dimensional NumPy arrays.
Quickly build your applications within the large, mature and widely used CPython ecosystem.
Integrate natively with existing code and data from legacy, low-level or high-performance libraries and applications.
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Accelerating Python code with Cython
We use Cython to accelerate the generation of the Mandelbrot fractal.
Step1: We initialize the simulation and generate the grid
in the complex plane.
Step2: Pure Python
Step3: Cython versions
We first import Cython.
Step4: Take 1
First, we just add the %%cython magic.
Step5: Virtually no speedup.
Take 2
Now, we add type information, using memory views for NumPy arrays. | Python Code:
import numpy as np
Explanation: Cython
The Cython language is a superset of the Python language that additionally
supports calling C functions and declaring C types on variables and class
attributes.
This allows the compiler to generate very efficient C code from Cython code.
Write Python code that calls back and forth from and to C or C++ code natively at any point.
Easily tune readable Python code into plain C performance by adding static type declarations.
Use combined source code level debugging to find bugs in your Python, Cython and C code.
Interact efficiently with large data sets, e.g. using multi-dimensional NumPy arrays.
Quickly build your applications within the large, mature and widely used CPython ecosystem.
Integrate natively with existing code and data from legacy, low-level or high-performance libraries and applications.
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Accelerating Python code with Cython
We use Cython to accelerate the generation of the Mandelbrot fractal.
End of explanation
size = 200
iterations = 100
Explanation: We initialize the simulation and generate the grid
in the complex plane.
End of explanation
def mandelbrot_python(m, size, iterations):
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if np.abs(z) <= 10:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size))
mandelbrot_python(m, size, iterations)
Explanation: Pure Python
End of explanation
%load_ext cythonmagic
Explanation: Cython versions
We first import Cython.
End of explanation
%%cython -a
import numpy as np
def mandelbrot_cython(m, size, iterations):
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if np.abs(z) <= 10:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size), dtype=np.int32)
mandelbrot_cython(m, size, iterations)
Explanation: Take 1
First, we just add the %%cython magic.
End of explanation
%%cython -a
import numpy as np
def mandelbrot_cython(int[:,::1] m,
int size,
int iterations):
cdef int i, j, n
cdef complex z, c
for i in range(size):
for j in range(size):
c = -2 + 3./size*j + 1j*(1.5-3./size*i)
z = 0
for n in range(iterations):
if z.real**2 + z.imag**2 <= 100:
z = z*z + c
m[i, j] = n
else:
break
%%timeit -n1 -r1 m = np.zeros((size, size), dtype=np.int32)
mandelbrot_cython(m, size, iterations)
Explanation: Virtually no speedup.
Take 2
Now, we add type information, using memory views for NumPy arrays.
End of explanation |
12,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data passing tutorial
Data passing is the most important aspect of Pipelines.
In Kubeflow Pipelines, the pipeline authors compose pipelines by creating component instances (tasks) and connecting them together.
Component have inputs and outputs. They can consume and produce arbitrary data.
Pipeline authors establish connections between component tasks by connecting their data inputs and outputs - by passing the output of one task as an argument to another task's input.
The system takes care of storing the data produced by components and later passing that data to other components for consumption as instructed by the pipeline.
This tutorial shows how to create python components that produce, consume and transform data.
It shows how to create data passing pipelines by instantiating components and connecting them together.
Step1: Small data
Small data is the data that you'll be comfortable passing as program's command-line argument. Small data size should not exceed few kilobytes.
Some examples of typical types of small data are
Step2: Producing small data
Step3: Producing and consuming multiple arguments
Step4: Consuming and producing data at the same time
Step5: big data (files)
big data should be read from files and written to files.
The paths for the input and output files are chosen by the system and are passed into the function (as strings).
Use the InputPath parameter annotation to tell the system that the function wants to consume the corresponding input data as a file. The system will download the data, write it to a local file and then pass the path of that file to the function.
Use the OutputPath parameter annotation to tell the system that the function wants to produce the corresponding output data as a file. The system will prepare and pass the path of a file where the function should write the output data. After the function exits, the system will upload the data to the storage system so that it can be passed to downstream components.
You can specify the type of the consumed/produced data by specifying the type argument to InputPath and OutputPath. The type can be a python type or an arbitrary type name string. OutputPath('TFModel') means that the function states that the data it has written to a file has type 'TFModel'. InputPath('TFModel') means that the function states that it expect the data it reads from a file to have type 'TFModel'. When the pipeline author connects inputs to outputs the system checks whether the types match.
Note on input/output names
Step6: Processing big data
Step7: Processing big data with pre-opened files
Step8: Example | Python Code:
from typing import NamedTuple
import kfp
from kfp.components import InputPath, InputTextFile, OutputPath, OutputTextFile
from kfp.components import func_to_container_op
from kfp_tekton.compiler import TektonCompiler
import os
os.environ["DEFAULT_ACCESSMODES"] = "ReadWriteMany"
os.environ["DEFAULT_STORAGE_SIZE"] = "2Gi"
Explanation: Data passing tutorial
Data passing is the most important aspect of Pipelines.
In Kubeflow Pipelines, the pipeline authors compose pipelines by creating component instances (tasks) and connecting them together.
Components have inputs and outputs. They can consume and produce arbitrary data.
Pipeline authors establish connections between component tasks by connecting their data inputs and outputs - by passing the output of one task as an argument to another task's input.
The system takes care of storing the data produced by components and later passing that data to other components for consumption as instructed by the pipeline.
This tutorial shows how to create python components that produce, consume and transform data.
It shows how to create data passing pipelines by instantiating components and connecting them together.
End of explanation
@func_to_container_op
def print_small_text(text: str):
'''Print small text'''
print(text)
def constant_to_consumer_pipeline():
'''Pipeline that passes small constant string to to consumer'''
consume_task = print_small_text('Hello world') # Passing constant as argument to consumer
TektonCompiler().compile(constant_to_consumer_pipeline,
'constant_to_consumer_pipeline.yaml')
!kubectl apply -f constant_to_consumer_pipeline.yaml
!tkn pr describe constant-to-consumer-pipeline
def pipeline_parameter_to_consumer_pipeline(text: str):
'''Pipeline that passes small pipeline parameter string to to consumer'''
consume_task = print_small_text(text) # Passing pipeline parameter as argument to consumer
TektonCompiler().compile(pipeline_parameter_to_consumer_pipeline,
'pipeline_parameter_to_consumer_pipeline.yaml')
!kubectl apply -f pipeline_parameter_to_consumer_pipeline.yaml
!tkn pr describe pipeline-parameter-to-consumer-pipeline
Explanation: Small data
Small data is the data that you'll be comfortable passing as a program's command-line argument. Small data size should not exceed a few kilobytes.
Some examples of typical types of small data are: number, URL, small string (e.g. column name).
Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to file-based data passing methods that are more suitable for big data (more than several kilobytes) or binary data.
All small data outputs will be at some point serialized to strings and all small data input values will be at some point deserialized from strings (passed as command-line arguments). There are built-in serializers and deserializers for several common types (e.g. str, int, float, bool, list, dict). All other types of data need to be serialized manually before returning the data. Make sure to properly specify type annotations, otherwise there would be no automatic deserialization and the component function will receive strings instead of deserialized objects.
Consuming small data
End of explanation
@func_to_container_op
def produce_one_small_output() -> str:
return 'Hello world'
def task_output_to_consumer_pipeline():
'''Pipeline that passes small data from producer to consumer'''
produce_task = produce_one_small_output()
# Passing producer task output as argument to consumer
consume_task1 = print_small_text(produce_task.output) # task.output only works for single-output components
consume_task2 = print_small_text(produce_task.outputs['output']) # task.outputs[...] always works
TektonCompiler().compile(task_output_to_consumer_pipeline,
'task_output_to_consumer_pipeline.yaml')
!kubectl apply -f task_output_to_consumer_pipeline.yaml
!tkn pr describe task-output-to-consumer-pipeline
Explanation: Producing small data
End of explanation
@func_to_container_op
def produce_two_small_outputs() -> NamedTuple('Outputs', [('text', str), ('number', int)]):
return ("data 1", 42)
@func_to_container_op
def consume_two_arguments(text: str, number: int):
print('Text={}'.format(text))
print('Number={}'.format(str(number)))
def producers_to_consumers_pipeline(text: str = "Hello world"):
'''Pipeline that passes data from producer to consumer'''
produce1_task = produce_one_small_output()
produce2_task = produce_two_small_outputs()
consume_task1 = consume_two_arguments(produce1_task.output, 42)
consume_task2 = consume_two_arguments(text, produce2_task.outputs['number'])
consume_task3 = consume_two_arguments(produce2_task.outputs['text'], produce2_task.outputs['number'])
TektonCompiler().compile(producers_to_consumers_pipeline,
'producers_to_consumers_pipeline.yaml')
!kubectl apply -f producers_to_consumers_pipeline.yaml
!tkn pr describe producers-to-consumers-pipeline
Explanation: Producing and consuming multiple arguments
End of explanation
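As noted above, types without a built-in serializer have to be serialized manually before being returned; a minimal sketch (not part of the original tutorial) could look like this:
@func_to_container_op
def produce_dict_as_json() -> str:
    # Imports must live inside the function body so the component stays self-contained
    import json
    # Manually serialize an arbitrary structure into a small JSON string output
    return json.dumps({'name': 'trial-1', 'score': 0.93})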
@func_to_container_op
def get_item_from_list(list_of_strings: list, index: int) -> str:
return list_of_strings[index]
@func_to_container_op
def truncate_text(text: str, max_length: int) -> str:
return text[0:max_length]
def processing_pipeline(text: str = "Hello world"):
truncate_task = truncate_text(text, max_length=5)
get_item_task = get_item_from_list(list_of_strings=[3, 1, truncate_task.output, 1, 5, 9, 2, 6, 7], index=2)
print_small_text(get_item_task.output)
TektonCompiler().compile(processing_pipeline,
'processing_pipeline.yaml')
!kubectl apply -f processing_pipeline.yaml
!tkn pr describe processing-pipeline
Explanation: Consuming and producing data at the same time
End of explanation
# Writing big data
@func_to_container_op
def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):
'''Repeat the line specified number of times'''
with open(output_text_path, 'w') as writer:
for i in range(count):
writer.write(line + '\n')
# Reading big data
@func_to_container_op
def print_text(text_path: InputPath()): # The "text" input is untyped so that any data can be printed
'''Print text'''
with open(text_path, 'r') as reader:
for line in reader:
print(line, end = '')
def print_repeating_lines_pipeline():
repeat_lines_task = repeat_line(line='Hello', count=5000)
print_text(repeat_lines_task.output) # Don't forget .output !
TektonCompiler().compile(print_repeating_lines_pipeline,
'print_repeating_lines_pipeline.yaml')
!kubectl apply -f print_repeating_lines_pipeline.yaml
!tkn pr describe print-repeating-lines-pipeline
Explanation: big data (files)
big data should be read from files and written to files.
The paths for the input and output files are chosen by the system and are passed into the function (as strings).
Use the InputPath parameter annotation to tell the system that the function wants to consume the corresponding input data as a file. The system will download the data, write it to a local file and then pass the path of that file to the function.
Use the OutputPath parameter annotation to tell the system that the function wants to produce the corresponding output data as a file. The system will prepare and pass the path of a file where the function should write the output data. After the function exits, the system will upload the data to the storage system so that it can be passed to downstream components.
You can specify the type of the consumed/produced data by specifying the type argument to InputPath and OutputPath. The type can be a python type or an arbitrary type name string. OutputPath('TFModel') means that the function states that the data it has written to a file has type 'TFModel'. InputPath('TFModel') means that the function states that it expect the data it reads from a file to have type 'TFModel'. When the pipeline author connects inputs to outputs the system checks whether the types match.
Note on input/output names: When the function is converted to component, the input and output names generally follow the parameter names, but the "_path" and "_file" suffixes are stripped from file/path inputs and outputs. E.g. the number_file_path: InputPath(int) parameter becomes the number: int input. This makes the argument passing look more natural: number=42 instead of number_file_path=42.
Notes: As we used 'workspaces' in Tekton pipelines to handle big data processing, the compiler will generate the PVC definitions and needs the volume to store the data.
Users need to create the volume manually, or enable dynamic volume provisioning; refer to:
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning
Writing and reading big data
End of explanation
@func_to_container_op
def split_text_lines(source_path: InputPath(str), odd_lines_path: OutputPath(str), even_lines_path: OutputPath(str)):
with open(source_path, 'r') as reader:
with open(odd_lines_path, 'w') as odd_writer:
with open(even_lines_path, 'w') as even_writer:
while True:
line = reader.readline()
if line == "":
break
odd_writer.write(line)
line = reader.readline()
if line == "":
break
even_writer.write(line)
def text_splitting_pipeline():
text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten'])
split_text_task = split_text_lines(text)
print_text(split_text_task.outputs['odd_lines'])
print_text(split_text_task.outputs['even_lines'])
TektonCompiler().compile(text_splitting_pipeline,
'text_splitting_pipeline.yaml')
!kubectl apply -f text_splitting_pipeline.yaml
!tkn pr describe text-splitting-pipeline
Explanation: Processing big data
End of explanation
@func_to_container_op
def split_text_lines2(source_file: InputTextFile(str), odd_lines_file: OutputTextFile(str), even_lines_file: OutputTextFile(str)):
while True:
line = source_file.readline()
if line == "":
break
odd_lines_file.write(line)
line = source_file.readline()
if line == "":
break
even_lines_file.write(line)
def text_splitting_pipeline2():
text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten'])
split_text_task = split_text_lines2(text)
print_text(split_text_task.outputs['odd_lines']).set_display_name('Odd lines')
print_text(split_text_task.outputs['even_lines']).set_display_name('Even lines')
TektonCompiler().compile(text_splitting_pipeline2,
'text_splitting_pipeline2.yaml')
!kubectl apply -f text_splitting_pipeline2.yaml
!tkn pr describe text-splitting-pipeline2
Explanation: Processing big data with pre-opened files
End of explanation
# Writing many numbers
@func_to_container_op
def write_numbers(numbers_path: OutputPath(str), start: int = 0, count: int = 10):
with open(numbers_path, 'w') as writer:
for i in range(start, count):
writer.write(str(i) + '\n')
# Reading and summing many numbers
@func_to_container_op
def sum_numbers(numbers_path: InputPath(str)) -> int:
sum = 0
with open(numbers_path, 'r') as reader:
for line in reader:
sum = sum + int(line)
return sum
# Pipeline to sum 100000 numbers
def sum_pipeline(count: 'Integer' = 100000):
numbers_task = write_numbers(count=count)
print_text(numbers_task.output)
sum_task = sum_numbers(numbers_task.outputs['numbers'])
print_text(sum_task.output)
TektonCompiler().compile(sum_pipeline,
'sum_pipeline.yaml')
!kubectl apply -f sum_pipeline.yaml
!tkn pr describe sum-pipeline
Explanation: Example: Pipeline that generates then sums many numbers
End of explanation |
12,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, here's the SPA power function
Step1: Here are two helper functions for computing the dot product over space, and for plotting the results
Step2: So, that lets us take a vector and turn it into a spatial map. Now let's try going the other way around
Step3: This can be treated as a least-sqares minimization problem. In paticular, we're trying to build the above map using a basis space. The basis vectors in that space are the spatial maps of the D unit vectors in our vector space!! So let's compute those, and use our standard nengo solver
Step4: Yay!
However, one possible problem with this approach is that the norm of this vector is unconstrained
Step5: A better solution would add a constraint on the norm. For that, we use cvxpy
Step6: Looks like the accuracy depends on what limit we put on the norm. Let's see how that varies
Step7: Looks like it works fine with norms that aren't too large. | Python Code:
def power(s, e):
x = np.fft.ifft(np.fft.fft(s.v) ** e).real
return spa.SemanticPointer(data=x)
Explanation: First, here's the SPA power function:
End of explanation
def spatial_dot(v, X, Y, Z, xs, ys, transform=1):
if isinstance(v, spa.SemanticPointer):
v = v.v
vs = np.zeros((len(ys),len(xs)))
for i,x in enumerate(xs):
for j, y in enumerate(ys):
hx = 2/3 * y
hy = (np.sqrt(3)/3 * x - y/3 )
hz = -(np.sqrt(3)/3 * x + y/3 )
t = power(X, hx)*power(Y,hy)*power(Z, hz)*transform
vs[j,i] = np.dot(v, t.v)
return vs
def spatial_plot(vs, colorbar=True, vmin=-1, vmax=1, cmap='plasma'):
vs = vs[::-1, :]
plt.imshow(vs, interpolation='none', extent=(xs[0],xs[-1],ys[0],ys[-1]), vmax=vmax, vmin=vmin, cmap=cmap)
if colorbar:
plt.colorbar()
D = 64
X = spa.SemanticPointer(D)
X.make_unitary()
Y = spa.SemanticPointer(D)
Y.make_unitary()
Z = spa.SemanticPointer(D)
Z.make_unitary()
xs = np.linspace(-3, 3, 50)
ys = np.linspace(-3, 3, 50)
Explanation: Here are two helper functions for computing the dot product over space, and for plotting the results
End of explanation
desired = np.zeros((len(xs),len(ys)))
for i,x in enumerate(xs):
for j, y in enumerate(ys):
if 0<x<2 and -1<y<=3:
val = 1
else:
val = 0
desired[j, i] = val
spatial_plot(desired)
Explanation: So, that lets us take a vector and turn it into a spatial map. Now let's try going the other way around: specify a desired map, and find the vector that gives that.
End of explanation
A = np.array([spatial_dot(np.eye(D)[i], X, Y, Z, xs, ys).flatten() for i in range(D)])
import nengo
v, info = nengo.solvers.LstsqL2(reg=0)(np.array(A).T, desired.flatten())
vs = spatial_dot(v, X, Y, Z, xs, ys)
rmse = np.sqrt(np.mean((vs-desired)**2))
print(rmse)
spatial_plot(vs)
Explanation: This can be treated as a least-squares minimization problem. In particular, we're trying to build the above map using a basis space. The basis vectors in that space are the spatial maps of the D unit vectors in our vector space!! So let's compute those, and use our standard nengo solver:
End of explanation
np.linalg.norm(v)
Explanation: Yay!
However, one possible problem with this approach is that the norm of this vector is unconstrained:
End of explanation
import cvxpy as cvx
class CVXSolver(nengo.solvers.Solver):
def __init__(self, norm_limit):
super(CVXSolver, self).__init__(weights=False)
self.norm_limit = norm_limit
def __call__(self, A, Y, rng=np.random, E=None):
N = A.shape[1]
D = Y.shape[1]
d = cvx.Variable((N, D))
error = cvx.sum_squares(A * d - Y)
cvx_prob = cvx.Problem(cvx.Minimize(error), [cvx.norm(d) <= self.norm_limit])
cvx_prob.solve()
decoder = d.value
rmses = np.sqrt(np.mean((Y-np.dot(A, decoder))**2, axis=0))
return decoder, dict(rmses=rmses)
v2, info2 = CVXSolver(norm_limit=10)(np.array(A).T, desired.flatten().reshape(-1,1))
v2.shape = D,
vs2 = spatial_dot(v2, X, Y, Z, xs, ys)
rmse2 = np.sqrt(np.mean((vs2-desired)**2))
print('rmse:', rmse2)
spatial_plot(vs2)
print('norm:', np.linalg.norm(v2))
Explanation: A better solution would add a constraint on the norm. For that, we use cvxpy
End of explanation
plt.figure(figsize=(10,4))
limits = np.arange(10)+1
for i, limit in enumerate(limits):
plt.subplot(2, 5, i+1)
vv, _ = CVXSolver(norm_limit=limit)(np.array(A).T, desired.flatten().reshape(-1,1))
s = spatial_dot(vv.flatten(), X, Y, Z, xs, ys)
error = np.sqrt(np.mean((s-desired)**2))
spatial_plot(s, colorbar=False)
plt.title('norm: %g\nrmse: %1.2f' % (limit, error))
plt.xticks([])
plt.yticks([])
Explanation: Looks like the accuracy depends on what limit we put on the norm. Let's see how that varies:
End of explanation
import seaborn
SA = np.array(A).T # A matrix passed to solver
gamma = SA.T.dot(SA)
U, S, V = np.linalg.svd(gamma)
w = int(np.sqrt(D))
h = int(np.ceil(D // w))
plt.figure(figsize=(16, 16))
for i in range(len(U)):
# the columns of U are the left-singular vectors
vs = spatial_dot(U[:, i], X, Y, Z, xs, ys)
plt.subplot(w, h, i+1)
spatial_plot(vs, colorbar=False, vmin=None, vmax=None, cmap=seaborn.diverging_palette(150, 275, s=80, l=55, as_cmap=True))
plt.title(r"$\sigma_{%d}(A^T A) = %d$" % (i+1, S[i]))
plt.xticks([])
plt.yticks([])
plt.show()
Explanation: Looks like it works fine with norms that aren't too large.
End of explanation |
12,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DV360 Report To BigQuery
Move existing DV360 reports into a BigQuery table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter DV360 Report To BigQuery Recipe Parameters
Specify either report name or report id to move a report.
A schema is recommended, if not provided it will be guessed.
The most recent valid file will be moved to the table.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute DV360 Report To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: DV360 Report To BigQuery
Move existing DV360 reports into a BigQuery table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Authorization used for writing data.
'dbm_report_id':'', # DV360 report ID given in UI, not needed if name used.
'dbm_report_name':'', # Name of report, not needed if ID used.
'dbm_dataset':'', # Existing BigQuery dataset.
'dbm_table':'', # Table to create from this report.
'dbm_schema':'', # Schema provided in JSON list format or empty value to auto detect.
'is_incremental_load':False, # Clear data in destination table during this report's time period, then append report data to destination table.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter DV360 Report To BigQuery Recipe Parameters
Specify either report name or report id to move a report.
A schema is recommended, if not provided it will be guessed.
The most recent valid file will be moved to the table.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'report_id':{'field':{'name':'dbm_report_id','kind':'integer','order':2,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},
'name':{'field':{'name':'dbm_report_name','kind':'string','order':3,'default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'dataset':{'field':{'name':'dbm_dataset','kind':'string','order':4,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'dbm_table','kind':'string','order':5,'default':'','description':'Table to create from this report.'}},
'schema':{'field':{'name':'dbm_schema','kind':'json','order':6,'description':'Schema provided in JSON list format or empty value to auto detect.'}},
'header':True,
'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':7,'default':False,'description':"Clear data in destination table during this report's time period, then append report data to destination table."}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute DV360 Report To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
12,748 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
i am trying to do hyperparemeter search with using scikit-learn's GridSearchCV on XGBoost. During gridsearch i'd like it to early stop, since it reduce search time drastically and (expecting to) have better results on my prediction/regression task. I am using XGBoost via its Scikit-Learn API. | Problem:
import numpy as np
import pandas as pd
import xgboost.sklearn as xgb
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import TimeSeriesSplit
gridsearch, testX, testY, trainX, trainY = load_data()
assert type(gridsearch) == GridSearchCV  # use the class imported above; the bare sklearn name is not imported here
assert type(trainX) == list
assert type(trainY) == list
assert type(testX) == list
assert type(testY) == list
fit_params = {"early_stopping_rounds": 42,
"eval_metric": "mae",
"eval_set": [[testX, testY]]}
gridsearch.fit(trainX, trainY, **fit_params) |
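Once the fit above has finished, the usual GridSearchCV attributes can be inspected, for example:
# Standard GridSearchCV results (available after fit)
print(gridsearch.best_params_)
print(gridsearch.best_score_)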
12,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterables
Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has as execution plugin for Workflow, called iterables.
<img src="../static/images/iterables.png" width="240">
If you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out the synchronizeand intersource section in the JoinNode notebook.
Realistic example
Let's assume we have a workflow with two nodes, node (A) does simple skull stripping, and is followed by a node (B) that does isometric smoothing. Now, let's say, that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm, and 16mm.
Step1: Create a smoothing Node with IsotropicSmooth
Step2: Now, to use iterables and therefore smooth with different fwhm is as simple as that
Step3: And to wrap it up. We need to create a workflow, connect the nodes and finally, can run the workflow in parallel.
Step4: Note, that iterables is set on a specific node (isosmooth in this case), but Workflow is needed to expend the graph to three subgraphs with three different versions of the isosmooth node.
If we visualize the graph with exec, we can see where the parallelization actually takes place.
Step5: If you look at the structure in the workflow directory, you can also see, that for each smoothing, a specific folder was created, i.e. _fwhm_16.
Step6: Now, let's visualize the results!
Step7: IdentityInterface (special use case of iterables)
We often want to start our worflow from creating subgraphs, e.g. for running preprocessing for all subjects. We can easily do it with setting iterables on the IdentityInterface. The IdentityInterface interface allows you to create Nodes that does simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, you want to start your workflow by collecting anatomical files for 5 subjects.
Step8: Now, we can create the IdentityInterface Node
Step9: That's it. Now, we can connect the output fields of this infosource node to SelectFiles and DataSink nodes.
Step10: Now we can check that five anatomicl images are in anat_files directory
Step11: This was just a simple example of using IdentityInterface, but a complete example of preprocessing workflow you can find in Preprocessing Example).
Exercise 1
Create a workflow to calculate various powers of 2 using two nodes, one for IdentityInterface with iterables, and one for Function interface to calculate the power of 2. | Python Code:
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth
# Initiate a skull stripping Node with BET
skullstrip = Node(BET(mask=True,
in_file='/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'),
name="skullstrip")
Explanation: Iterables
Some steps in a neuroimaging analysis are repetitive: running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has an execution plugin for Workflow, called iterables.
<img src="../static/images/iterables.png" width="240">
If you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out the synchronize and itersource sections in the JoinNode notebook.
Realistic example
Let's assume we have a workflow with two nodes: node (A) does simple skull stripping and is followed by a node (B) that does isotropic smoothing. Now, let's say that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm, and 16mm.
End of explanation
isosmooth = Node(IsotropicSmooth(), name='iso_smooth')
Explanation: Create a smoothing Node with IsotropicSmooth
End of explanation
isosmooth.iterables = ("fwhm", [4, 8, 16])
Explanation: Now, to use iterables and therefore smooth with different fwhm is as simple as that:
End of explanation
# Create the workflow
wf = Workflow(name="smoothflow")
wf.base_dir = "/output"
wf.connect(skullstrip, 'out_file', isosmooth, 'in_file')
# Run it in parallel (one core for each smoothing kernel)
wf.run('MultiProc', plugin_args={'n_procs': 3})
Explanation: And to wrap it up. We need to create a workflow, connect the nodes and finally, can run the workflow in parallel.
End of explanation
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/output/smoothflow/graph_detailed.png')
Explanation: Note that iterables is set on a specific node (isosmooth in this case), but the Workflow is needed to expand the graph into three subgraphs with three different versions of the isosmooth node.
If we visualize the graph with exec, we can see where the parallelization actually takes place.
End of explanation
!tree /output/smoothflow -I '*txt|*pklz|report*|*.json|*js|*.dot|*.html'
Explanation: If you look at the structure in the workflow directory, you can also see that for each smoothing a specific folder was created, e.g. _fwhm_16.
End of explanation
from nilearn import plotting
%matplotlib inline
plotting.plot_anat(
'/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz', title='original',
display_mode='z', dim=-1, cut_coords=(-50, -35, -20, -5), annotate=False);
plotting.plot_anat(
'/output/smoothflow/skullstrip/sub-01_ses-test_T1w_brain.nii.gz', title='skullstripped',
display_mode='z', dim=-1, cut_coords=(-50, -35, -20, -5), annotate=False);
plotting.plot_anat(
'/output/smoothflow/_fwhm_4/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=4',
display_mode='z', dim=-0.5, cut_coords=(-50, -35, -20, -5), annotate=False);
plotting.plot_anat(
'/output/smoothflow/_fwhm_8/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=8',
display_mode='z', dim=-0.5, cut_coords=(-50, -35, -20, -5), annotate=False);
plotting.plot_anat(
'/output/smoothflow/_fwhm_16/iso_smooth/sub-01_ses-test_T1w_brain_smooth.nii.gz', title='FWHM=16',
display_mode='z', dim=-0.5, cut_coords=(-50, -35, -20, -5), annotate=False);
Explanation: Now, let's visualize the results!
End of explanation
# First, let's specify the list of subjects
subject_list = ['01', '02', '03', '04', '05']
Explanation: IdentityInterface (special use case of iterables)
We often want to start our workflow by creating subgraphs, e.g. for running preprocessing for all subjects. We can easily do it by setting iterables on the IdentityInterface. The IdentityInterface interface allows you to create Nodes that do simple identity mapping, i.e. Nodes that only work on parameters/strings.
For example, you want to start your workflow by collecting anatomical files for 5 subjects.
End of explanation
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list)]
Explanation: Now, we can create the IdentityInterface Node
End of explanation
from os.path import join as opj
from nipype.interfaces.io import SelectFiles, DataSink
anat_file = opj('sub-{subject_id}', 'ses-test', 'anat', 'sub-{subject_id}_ses-test_T1w.nii.gz')
templates = {'anat': anat_file}
selectfiles = Node(SelectFiles(templates,
base_directory='/data/ds000114'),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory="/output",
container="datasink"),
name="datasink")
wf_sub = Workflow(name="choosing_subjects")
wf_sub.connect(infosource, "subject_id", selectfiles, "subject_id")
wf_sub.connect(selectfiles, "anat", datasink, "anat_files")
wf_sub.run()
Explanation: That's it. Now, we can connect the output fields of this infosource node to SelectFiles and DataSink nodes.
End of explanation
! ls -lh /output/datasink/anat_files/
Explanation: Now we can check that five anatomical images are in the anat_files directory:
End of explanation
# write your solution here
# lets start from the Identity node
from nipype import Function, Node, Workflow
from nipype.interfaces.utility import IdentityInterface
iden = Node(IdentityInterface(fields=['number']), name="identity")
iden.iterables = [("number", range(8))]
# the second node should use the Function interface
def power_of_two(n):
return 2**n
# Create Node
power = Node(Function(input_names=["n"],
output_names=["pow"],
function=power_of_two),
name='power')
#and now the workflow
wf_ex1 = Workflow(name="exercise1")
wf_ex1.connect(iden, "number", power, "n")
res_ex1 = wf_ex1.run()
# we can print the results
for i in range(8):
print(list(res_ex1.nodes())[i].result.outputs)
Explanation: This was just a simple example of using IdentityInterface; a complete example of a preprocessing workflow can be found in the Preprocessing Example.
Exercise 1
Create a workflow to calculate various powers of 2 using two nodes, one for IdentityInterface with iterables, and one for Function interface to calculate the power of 2.
End of explanation |
12,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About
This notebook demonstrates classifiers, which are provided by the Reproducible experiment platform (REP) package. <br /> REP contains the following classifiers
* scikit-learn
* TMVA
* XGBoost
Also classifiers from hep_ml (as any other sklearn-compatible classifiers may be used)
In this notebook we show the most simple way to
train classifier
build predictions
measure quality
combine metaclassifiers
Loading data
download particle identification Data Set from UCI
Step1: First rows of our data
Step2: Splitting into train and test
Step3: Classifiers
All classifiers inherit from sklearn.BaseEstimator and have the following methods
Step4: Sklearn
wrapper for scikit-learn classifiers. In this example we use GradientBoosting with default settings
Step5: Predicting probabilities, measuring the quality
Step6: Predictions of classes
Step7: TMVA
Step8: Predict probabilities and estimate quality
Step9: XGBoost
Step10: Predict probabilities and estimate quality
Step11: Predict labels
Step12: Advantages of common interface
As one can see above, all the classifiers implement the same interface,
this simplifies work, simplifies comparison of different classifiers,
but this is not the only profit.
Sklearn provides different tools to combine different classifiers and transformers.
One of these tools is AdaBoost, which is an abstract metaclassifier built on top of some other classifier (usually a decision tree)
Let's show that now you can run AdaBoost over classifiers from other libraries! <br />
(isn't boosting over neural network what you were dreaming of all your life?)
AdaBoost over TMVA classifier
Step13: AdaBoost over XGBoost | Python Code:
!cd toy_datasets; wget -O MiniBooNE_PID.txt -nc MiniBooNE_PID.txt https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score
data = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep='\s*', skiprows=[0], header=None, engine='python')
labels = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None)
labels = [1] * labels[1].values[0] + [0] * labels[2].values[0]
data.columns = ['feature_{}'.format(key) for key in data.columns]
Explanation: About
This notebook demonstrates classifiers, which are provided by the Reproducible experiment platform (REP) package. <br /> REP contains the following classifiers
* scikit-learn
* TMVA
* XGBoost
Also classifiers from hep_ml (as any other sklearn-compatible classifiers may be used)
In this notebook we show the most simple way to
train classifier
build predictions
measure quality
combine metaclassifiers
Loading data
download particle identification Data Set from UCI
End of explanation
data[:5]
Explanation: First rows of our data
End of explanation
# Get train and test data
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5)
Explanation: Splitting into train and test
End of explanation
variables = list(data.columns[:26])
Explanation: Classifiers
All classifiers inherit from sklearn.BaseEstimator and have the following methods:
classifier.fit(X, y, sample_weight=None) - train classifier
classifier.predict_proba(X) - return probabilities vector for all classes
classifier.predict(X) - return predicted labels
classifier.staged_predict_proba(X) - return probabilities after each iteration (not supported by TMVA)
classifier.get_feature_importances()
Here we use X to denote matrix with data of shape [n_samples, n_features], y is vector with labels (0 or 1) of shape [n_samples], <br /> sample_weight is vector with weights.
Difference from default scikit-learn interface
X should be* pandas.DataFrame, not numpy.array. <br />
Provided this, you'll be able to choose features used in training by setting e.g. features=['FlightTime', 'p'] in constructor.
* it works fine with numpy.array as well, but in this case all the features will be used.
Variables used in training
End of explanation
from rep.estimators import SklearnClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Using gradient boosting with default settings
sk = SklearnClassifier(GradientBoostingClassifier(), features=variables)
# Training classifier
sk.fit(train_data, train_labels)
print('training complete')
Explanation: Sklearn
wrapper for scikit-learn classifiers. In this example we use GradientBoosting with default settings
End of explanation
# predict probabilities for each class
prob = sk.predict_proba(test_data)
print prob
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
Explanation: Predicting probabilities, measuring the quality
End of explanation
sk.predict(test_data)
sk.get_feature_importances()
Explanation: Predictions of classes
End of explanation
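The staged_predict_proba method listed in the common interface above can be used the same way; a quick sketch, assuming the wrapper exposes it for this sklearn model:
# ROC AUC after each boosting iteration (not supported by TMVA)
staged_aucs = [roc_auc_score(test_labels, p[:, 1]) for p in sk.staged_predict_proba(test_data)]
print 'AUC after first and last iteration:', staged_aucs[0], staged_aucs[-1]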
from rep.estimators import TMVAClassifier
print TMVAClassifier.__doc__
tmva = TMVAClassifier(method='kBDT', NTrees=50, Shrinkage=0.05, features=variables)
tmva.fit(train_data, train_labels)
print('training complete')
Explanation: TMVA
End of explanation
# predict probabilities for each class
prob = tmva.predict_proba(test_data)
print prob
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
# predict labels
tmva.predict(test_data)
Explanation: Predict probabilities and estimate quality
End of explanation
from rep.estimators import XGBoostClassifier
print XGBoostClassifier.__doc__
# XGBoost with default parameters
xgb = XGBoostClassifier(features=variables)
xgb.fit(train_data, train_labels, sample_weight=numpy.ones(len(train_labels)))
print('training complete')
Explanation: XGBoost
End of explanation
prob = xgb.predict_proba(test_data)
print 'ROC AUC:', roc_auc_score(test_labels, prob[:, 1])
Explanation: Predict probabilities and estimate quality
End of explanation
xgb.predict(test_data)
xgb.get_feature_importances()
Explanation: Predict labels
End of explanation
from sklearn.ensemble import AdaBoostClassifier
# Construct AdaBoost with TMVA as base estimator
base_tmva = TMVAClassifier(method='kBDT', NTrees=15, Shrinkage=0.05)
ada_tmva = SklearnClassifier(AdaBoostClassifier(base_estimator=base_tmva, n_estimators=5), features=variables)
ada_tmva.fit(train_data, train_labels)
print('training complete')
prob = ada_tmva.predict_proba(test_data)
print 'AUC', roc_auc_score(test_labels, prob[:, 1])
Explanation: Advantages of common interface
As one can see above, all the classifiers implement the same interface,
this simplifies work, simplifies comparison of different classifiers,
but this is not the only profit.
Sklearn provides different tools to combine different classifiers and transformers.
One of these tools is AdaBoost, which is an abstract metaclassifier built on top of some other classifier (usually a decision tree)
Let's show that now you can run AdaBoost over classifiers from other libraries! <br />
(isn't boosting over neural network what you were dreaming of all your life?)
AdaBoost over TMVA classifier
End of explanation
# Construct AdaBoost with xgboost base estimator
base_xgb = XGBoostClassifier(n_estimators=50)
# ada_xgb = SklearnClassifier(AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1), features=variables)
ada_xgb = AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1)
ada_xgb.fit(train_data[variables], train_labels)
print('training complete!')
# predict probabilities for each class
prob = ada_xgb.predict_proba(test_data[variables])
print 'AUC', roc_auc_score(test_labels, prob[:, 1])
# predict probabilities for each class
prob = ada_xgb.predict_proba(train_data[variables])
print 'AUC', roc_auc_score(train_labels, prob[:, 1])
Explanation: AdaBoost over XGBoost
End of explanation |
12,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression
Timothy Helton
<a id='toc'></a>
Table of Contents
Imports
Framework
Correlation Grid Function
Correlation Heatmap Function
Plot Regression Function
Plot Residuals Function
Predict Function
Load Data
Load Auto Dataset
Load Boston Dataset
Load Carseats Dataset
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Exercise 6
Exercise 7
Exercise 8
<a id='imports'></a>
Imports
Table of Contents
Step2: <a id='framework'></a>
Framework
<a id='correlation_grid'></a>
Table of Contents
Step4: <a id='correlation_heatmap'></a>
Table of Contents
Step6: <a id='plot_regression'></a>
Table of Contents
Step8: <a id='plot_residuals'></a>
Table of Contents
Step10: <a id='predict'></a>
Table of Contents
Step11: <a id='load_data'></a>
Load Data
Table of Contents
Step12: <a id='load_auto'></a>
Auto Dataset
Table of Contents
Step13: <a id='load_boston'></a>
Boston Dataset
Table of Contents
Step14: <a id='load_carseats'></a>
Carseats Dataset
Table of Contents
Step15: <a id='exercise_1'></a>
Exercise 1 - Use simple linear regression on the Auto data set.
Use statsmodels or scikit-learn to perform a simple linear regression with
mpg as the response and horsepower as the predictor. Print the results. Comment on the output.
For example
Step16: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and Horsepower.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.6, which implies poor accuracy.
In the next cell the relationship model appears to have the following form. $$y = 1/x$$
The T-statistic has a mixed response for this model.
The intercept coefficient appears to be well suited, but the Horsepower coefficient is too imprecise to determine the effect on the response.
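For reference, a minimal sketch of this kind of fit, assuming the Auto data has been loaded into a pandas DataFrame named auto with mpg and horsepower columns:
import statsmodels.formula.api as smf
simple_model = smf.ols('mpg ~ horsepower', data=auto).fit()
print(simple_model.summary())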
2) Plot the response and the predictor. Plot the least squares regression line.
Step17: 3) Produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.
Step18: <a id='exercise_2'></a>
Exercise 2 - Use multiple linear regression on the Auto data set.
Produce a scatterplot matrix which includes all of the variables
in the data set.
Compute the matrix of correlations between the variables using
the function corr(). Plot a matrix correlation heatmap as well.
Perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Print the results. Comment on the output. For instance
Step19: 2) Compute the matrix of correlations between the variables using the function corr(). Plot a matrix correlation heatmap as well.
Step20: 3) Perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Print the results. Comment on the output.
Step21: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and at least one of the features.
This is confirmed by the P-values. Features with a relationship to MPG are the following
Step22: Findings
The residuals are fair for this model, but not great.
Some outliers exist in this dataset.
Observation 13 has a large influence on the data.
Displacement
Step23: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and at least one of the features.
This is confirmed by the P-values. Features with a relationship to MPG are the following
Step24: 6) Try a few different transformations of the variables, such as $log(X)$, $\sqrt{X}$, $X^2$. Comment on your findings.
Step25: Findings
The log(horsepower) has the most linear fit of the models tested.
Step26: Findings
From the correlation scatter plot I would like to transform displacement, horsepower and weight to be inverse relationships, but it appears StatsModels patsy does not support this transformation.
Converting displacement to a logarithmic relationship was the only positive effect on the residuals I was able to identify.
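As an illustration of the transformation referred to here, patsy formulas accept numpy functions directly (auto is again an assumed DataFrame and numpy is assumed imported as np):
import numpy as np
import statsmodels.formula.api as smf
log_model = smf.ols('mpg ~ np.log(displacement) + horsepower + weight', data=auto).fit()
print(log_model.rsquared)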
Step27: <a id='exercise_3'></a>
Exercise 3 - Use multiple regression using the Carseats data set.
Fit a multiple regression model to predict Sales using Price,
Urban, and US.
Provide an interpretation of each coefficient in the model. Be
careful—some of the variables in the model are qualitative!
Write out the model in equation form, being careful to handle
the qualitative variables properly.
For which of the predictors can you reject the null hypothesis
H
Step28: 2) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!
Price $$\frac{1\ Price}{-5.45\ Sales}$$
Urban $$\frac{2.2\ Rural}{1\ Urban}$$
US $$\frac{1.2\ US}{1\ Non-US}$$
3) Write out the model in equation form, being careful to handle the qualitative variables properly.
$$Sales = 13.0435 - 0.0545\ Price - 0.0219\ Urban + 1.2006\ US$$
4) For which of the predictors can you reject the null hypothesis H
Step29: 6) How well do the models in (1) and (5) fit the data ?
Neither model is very good with $R^2$ values of 0.239.
7) Using the model from (5), obtain 95% confidence intervals for the coefficient(s).
Step30: 8) Is there evidence of outliers or high leverage observations in the model from (5) ?
Step31: Findings
There are Studentized Residuals outside the range {-2, 2}, indicating the presence of outliers.
<a id='exercise_4'></a>
Exercise 4 - Investigate the t-statistic for the null hypothesis.
In this problem we will investigate the t-statistic for the null hypothesis
H
Step32: 1) Perform a simple linear regression of y onto x, without an intercept. Report the coefficient estimate β, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis H0. Comment on these results.
Step33: Findings
Coefficient estimate
Step34: This matches the calculated value above
5) Using the results from (4), argue that the t-statistic for the regression of y onto x is the same t-statistic for the regression of x onto y.
The commutative property of multiplication states that two numbers can be multiplied in either order.
Inverting the response and feature will not alter the equations outcome.
6) In Python, show that when regression is performed with an intercept, the t-statistic for H0
Step35: In both cases above the T-statistic is 19.783.
<a id='exercise_5'></a>
Exercise 5 - Explore linear regression without an intercept.
Recall that the coefficient estimate β^ for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?
Generate an example in Python with n = 100 observations in which
the coefficient estimate for the regression of X onto Y is different
from the coefficient estimate for the regression of Y onto X.
Generate an example in Python with n = 100 observations in which
the coefficient estimate for the regression of X onto Y is the
same as the coefficient estimate for the regression of Y onto X.
Table of Contents
1) Recall that the coefficient estimate β^ for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?
The coefficient estimate for the regression of Y onto X is
$$\hat{\beta} = \frac{\sum_ix_iy_i}{\sum_jx_j^2}$$
The coefficient estimate for the regression of X onto Y is
$$\hat{\beta}' = \frac{\sum_ix_iy_i}{\sum_jy_j^2}$$
The coefficients are the same iff $\sum_jx_j^2 = \sum_jy_j^2$
2) Generate an example in Python with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.
Step36: 3) Generate an example in Python with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.
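A sketch of both cases using the closed-form estimates above (illustrative draws only):
import numpy as np
rng = np.random.RandomState(0)

x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)            # sum(x**2) != sum(y**2) -> different estimates
print(np.sum(x * y) / np.sum(x ** 2), np.sum(x * y) / np.sum(y ** 2))

y_same = rng.permutation(x)                 # same sum of squares -> identical estimates
print(np.sum(x * y_same) / np.sum(x ** 2), np.sum(x * y_same) / np.sum(y_same ** 2))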
Step37: <a id='exercise_6'></a>
Exercise 6 - Explore linear regression with simulated data.
In this exercise you will create some simulated data and will fit simple
linear regression models to it. Make sure to set the seed prior to
starting part (1) to ensure consistent results.
Create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.
Create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution i.e. a normal distribution with mean zero and variance 0.25.
Using x and eps, generate a vector y according to the model
Y = −1 + 0.5X + e
What is the length of the vector y? What are the values of β0 and β1 in this linear model?
Create a scatterplot displaying the relationship between x and
y. Comment on what you observe.
Fit a least squares linear model to predict y using x. Comment
on the model obtained. How do β0 and β1 compare to β0 and
β1?
Display the least squares line on the scatterplot obtained in (4).
Draw the population regression line on the plot, in a different
color. Create an appropriate legend.
Now fit a polynomial regression model that predicts y using x
and x^2. Is there evidence that the quadratic term improves the
model fit? Explain your answer.
Repeat (1)–(6) after modifying the data generation process in
such a way that there is less noise in the data. The model (3.39)
should remain the same. You can do this by decreasing the variance
of the normal distribution used to generate the error term
e in (2). Describe your results.
Repeat (1)–(6) after modifying the data generation process in
such a way that there is more noise in the data. The model
(3.39) should remain the same. You can do this by increasing
the variance of the normal distribution used to generate the
error term in (b). Describe your results.
What are the confidence intervals for β0 and β1 based on the
original data set, the noisier data set, and the less noisy data
set? Comment on your results.
Table of Contents
1) Create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.
Step38: 2) Create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution i.e. a normal distribution with mean zero and variance 0.25.
Step39: 3) Using x and eps, generate a vector y according to the model
Y = −1 + 0.5X + eps
What is the length of the vector y? What are the values of β0 and β1 in this linear model?
Step40: β0
Step41: Findings
The relationship has a linear distribution.
5) Fit a least squares linear model to predict y using x. Comment on the model obtained. How do β^0 and β^1 compare to β0 and β1?
Step42: Findings
predicted intercept is -0.9969 (actual = -1)
predicted x-coefficient is 0.4897 (actual = 0.5)
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.749, which is fair.
The T-statistic is large indicating the y coefficient estimate is both large and precise.
6) Display the least squares line on the scatterplot obtained in (4). Draw the population regression line on the plot, in a different color. Use the legend() function to create an appropriate legend.
Step43: 7) Now fit a polynomial regression model that predicts y using x and $x^2$. Is there evidence that the quadratic term improves the model fit ? Explain your answer.
Step44: Findings
The model's $R^2$ value only showed a negligible improvement.
Going to a quadratic model is not justified here.
8) Repeat (1)-(6) after modifying the data generation process in such a way that there is less noise in the data. The initial model should remain the same. Describe your results.
Step45: Findings
predicted intercept is -0.9926 (actual = -1)
predicted x-coefficient is 0.5048 (actual = 0.5)
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.989, which is very good.
Accurate predictions could be made with this model.
The T-statistic is large indicating the y coefficient estimate is both large and precise.
Step46: Findings
The model's $R^2$ value did not improve.
The P-value for the $x^2$ coefficient is large implying this coefficient is not a good estimate.
Going to a quadratic model is not justified here.
9) Repeat (1)-(6) after modifying the data generation process in such a way that there is more noise in the data. The initial model should remain the same. Describe your results.
Step47: Findings
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.244, which is poor.
This model does not represent the data.
The T-statistics are not large indicating the x coefficient estimate is imprecise.
Step48: Findings
The model's $R^2$ value did not improve.
This model does not represent the data.
The P-value for the $x^2$ coefficient is large implying this coefficient is not a good estimate.
Going to a quadratic model is not justified here.
10) What are the confidence intervals for β0 and β1 based on the original data set, the noisier data set, and the less noisy data set ? Comment on your results.
Step49: Findings
As expected, the width of the confidence intervals is directly proportional to the amount of error present in the data.
<a id='exercise_7'></a>
Exercise 7 - Explore the problem of collinearity.
Perform the following commands
Step50: 1) The last line corresponds to creating a linear model in which y is a function of x1 and x2. Write out the form of the linear model. What are the regression coefficients?
y = β0 + β1·x1 + β2·x2 + ε, where β0 = 2, β1 = 2, and β2 = 0.3
2) What is the correlation between x1 and x2? Create a scatterplot displaying the relationship between the variables.
Step51: Findings
The variables x1 and x2 are highly correlated.
3) Using this data, fit a least squares regression to predict y using x1 and x2. Describe the results obtained. What are β0, β1, and β2? How do these relate to the true β0, β1, and β2? Can you reject the null hypothesis Ho: β1 = 0? How about the null hypothesis Ho: β2 = 0?
Step52: Findings
The model does not accurately represent the data.
Coefficients
Step53: Findings
The model improved, but is still poor.
Based on the F-statistic there is a relationship between x and y.
The null hypothesis may be rejected for x1.
5) Now fit a least squares regression to predict y using only x2. Comment on your results. Can you reject the null hypothesis Ho: β1 = 0?
Step54: Findings
The model is better than the x1 + x2 combined model, not as good as the y = x1 model.
All three of them are poor predictors of the data.
Based on the F-statistic there is a relationship between x and y.
The null hypothesis may be rejected for x2.
6) Do the results obtained in (3)–(5) contradict each other? Explain your answer.
The answers do not contradict each other, but display the interdependence of x1 and x2. Only one of these features should be used in the model and x1 produced better results.
7) Now suppose we obtain one additional observation, which was unfortunately mismeasured.
x1=c(x1, 0.1)
x2=c(x2, 0.8)
y=c(y, 6)
Re-fit the linear models from (3) to (5) using this new data. What
effect does this new observation have on the each of the models?
In each model, is this observation an outlier? A high-leverage
point? Both? Explain your answers.
Step55: Findings
$R^2$ value decreased
F-statistic value decreased
x1 may no longer reject the null hypothesis
Observation 100 is an outlier
Observation 100 has high leverage
Step56: Findings
$R^2$ value increased
F-statistic value decreased
x1 may still reject the null hypothesis
Observation 100 is an outlier
Observation 100 has low leverage
Step57: Findings
$R^2$ value increased
F-statistic value increased
x2 may still reject the null hypothesis
Observation 100 is not an outlier
Observation 100 has high leverage
With the erroneous data entry, the feature x2 is now preferred over x1.
<a id='exercise_8'></a>
Exercise 8 - Predict per capita crime rate.
This problem involves the Boston data set. We will now try to predict per capita crime rate
using the other variables in this data set. In other words, per capita
crime rate is the response, and the other variables are the predictors.
For each predictor, fit a simple linear regression model to predict
the response. Describe your results. In which of the models is
there a statistically significant association between the predictor
and the response? Create some plots to back up your assertions.
Fit a multiple regression model to predict the response using
all of the predictors. Describe your results. For which predictors
can we reject the null hypothesis H: β = 0?
Step58: Findings
All features except for chas have P-values below 0.05 and have a relationship to crim.
2) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis H: β = 0?
Step59: Findings
The following features were able to reject the null hypothesis and have a relationship to crim.
zn
nox
dis
rad
black
medv
$R^2$ value of 0.454 implies the model is a poor predictor of the data
3) How do your results from (1) compare to your results from (2)? Create a plot displaying the univariate regression coefficients from (1) on the x-axis, and the multiple regression coefficients from (2) on the y-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate in the multiple linear regression model is shown on the y-axis.
Step60: 3) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor X, fit a model of the form $Y = β0 + β1X + β2X^2 + β3X^3 + E$. | Python Code:
from collections import OrderedDict
import itertools
import os
import os.path as osp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.graphics.regressionplots as smrp
import statsmodels.sandbox.regression as smsr
from k2datascience.utils import save_fig, size
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
Explanation: Regression
Timothy Helton
<a id='toc'></a>
Table of Contents
Imports
Framework
Correlation Grid Function
Correlation Heatmap Function
Plot Regression Function
Plot Residuals Function
Predict Function
Load Data
Load Auto Dataset
Load Boston Dataset
Load Carseats Dataset
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Exercise 6
Exercise 7
Exercise 8
<a id='imports'></a>
Imports
Table of Contents
End of explanation
def correlation_grid(data, bins=30, title=None, save=False):
Plot the correlation grid.
:param pd.DataFrame data: original data
:param int bins: number of bins for diagonal histogram plots
:param bool save: if True the figure will be saved
:param str title: dataset title
plot_title = 'Dataset Correlation'
if title:
title = f'{title} {plot_title}'
else:
title = plot_title
grid = sns.pairplot(data,
diag_kws={'alpha': 0.5, 'bins': bins,
'edgecolor': 'black'},
plot_kws={'alpha': 0.7})
grid.fig.suptitle(title,
fontsize=size['super_title'], y=1.03)
cols = data.columns
for n, col in enumerate(cols):
grid.axes[cols.size - 1, n].set_xlabel(cols[n],
fontsize=size['label'])
grid.axes[n, 0].set_ylabel(cols[n], fontsize=size['label'])
save_fig(title, save)
Explanation: <a id='framework'></a>
Framework
<a id='correlation_grid'></a>
Table of Contents
End of explanation
def correlation_heatmap(data, title=None, save=False):
Plot the correlation values as a heatmap.
:param pd.DataFrame data: data object
:param str title: dataset title
:param bool save: if True the figure will be saved
plot_title = 'Dataset Correlation'
if title:
title = f'{title} {plot_title}'
else:
title = plot_title
fig = plt.figure('Correlation Heatmap', figsize=(10, 8),
facecolor='white', edgecolor='black')
rows, cols = (1, 1)
ax0 = plt.subplot2grid((rows, cols), (0, 0))
sns.heatmap(data.corr(),
annot=True, cbar_kws={'orientation': 'vertical'},
fmt='.2f', linewidths=5, vmin=-1, vmax=1, ax=ax0)
ax0.set_title(title, fontsize=size['title'])
ax0.set_xticklabels(ax0.xaxis.get_majorticklabels(),
fontsize=size['label'], rotation=80)
ax0.set_yticklabels(ax0.yaxis.get_majorticklabels(),
fontsize=size['label'], rotation=0)
save_fig(title, save)
Explanation: <a id='correlation_heatmap'></a>
Table of Contents
End of explanation
def plot_regression(data, lin_reg, features, target,
exact_reg=None, save=False, title=None):
Plot the original data with linear regression line.
:param pd.DataFrame data: data object
:param lin_reg: linear regression model
:type: statsmodels.regression.linear_model.RegressionRes
:param list features: names of feature columns
:param str target: name of target column
:param pd.DataFrame exact_reg: x and y points defining /
exact linear regression
:param bool save: if True the figure will be saved
:param str title: data set title
plot_title = 'Linear Regression'
if title:
title = f'{title} {plot_title}'
else:
title = plot_title
fig = plt.figure(title, figsize=(8, 6),
facecolor='white', edgecolor='black')
rows, cols = (1, 1)
ax = plt.subplot2grid((rows, cols), (0, 0))
x = data.loc[:, features]
y = data[target].values
for n, feature in enumerate(x.columns):
x = np.squeeze(data.loc[:, feature].values)
ax.scatter(x, y, alpha=0.5, color=f'C{n}', label=feature)
# Regression Line
sort_idx = np.argsort(x)
ax.plot(x[sort_idx], predict(x, lin_reg.params)[sort_idx],
color='black', label='Linear Regression', linestyle='--')
# Upper and Lower Confidence Intervals
std, upper, lower = smsr.predstd.wls_prediction_std(lin_reg)
ax.plot(x[sort_idx], lower[sort_idx], alpha=0.5, color='green',
label='Upper Confidence Interval', linestyle='-.')
ax.plot(x[sort_idx], upper[sort_idx], alpha=0.5, color='red',
label='Lower Confidence Interval', linestyle='-.')
if not exact_reg.empty:
exact_reg.plot(x='x', y='y', color='magenta',
label='Exact Linear Regression',
linestyle=':', ax=ax)
features_list = f'({", ".join(features)})'
ax.set_title(f'{target} vs {features_list}',
fontsize=size['title'])
ax.legend()
ax.set_xlabel(features_list, fontsize=size['label'])
ax.set_ylabel(target, fontsize=size['label'])
save_fig(title, save)
Explanation: <a id='plot_regression'></a>
Table of Contents
End of explanation
def plot_residuals(lin_reg, save=False, title=None):
Plot resdual statistics
:param lin_reg: linear regression model
:type: statsmodels.regression.linear_model.RegressionRes
:param bool save: if True the figure will be saved
:param str title: data set title
plot_title = 'Dataset Residuals'
if title:
title = f'{title} {plot_title}'
else:
title = plot_title
fig = plt.figure(title, figsize=(14, 21),
facecolor='white', edgecolor='black')
rows, cols = (3, 2)
ax0 = plt.subplot2grid((rows, cols), (0, 0))
ax1 = plt.subplot2grid((rows, cols), (0, 1))
ax2 = plt.subplot2grid((rows, cols), (1, 0), rowspan=2)
ax3 = plt.subplot2grid((rows, cols), (1, 1), rowspan=2)
# Normalized Residuals Histogram
ax0.hist(lin_reg.resid_pearson, alpha=0.5, edgecolor='black')
ax0.set_title('Normalized Residuals Histogram',
fontsize=size['title'])
ax0.set_xlabel('Normalized Residuals', fontsize=size['label'])
ax0.set_ylabel('Counts', fontsize=size['label'])
# Residuals vs Fitted Values
ax1.scatter(lin_reg.fittedvalues, lin_reg.resid)
ax1.set_title('Raw Residuals vs Fitted Values',
fontsize=size['title'])
ax1.set_xlabel('Fitted Values', fontsize=size['label'])
ax1.set_ylabel('Raw Residuals', fontsize=size['label'])
# Leverage vs Normalized Residuals Squared
leverage = smrp.plot_leverage_resid2(lin_reg, ax=ax2)
ax2.set_title('Leverage vs Normalized $Residuals^2$',
fontsize=size['title'])
ax2.set_xlabel('Normalized $Residuals^2$',
fontsize=size['label'])
ax2.set_ylabel('Leverage', fontsize=size['label'])
# Influence Plot
influence = smrp.influence_plot(lin_reg, ax=ax3)
ax3.set_title('Influence Plot',
fontsize=size['title'])
ax3.set_xlabel('H Leverage',
fontsize=size['label'])
ax3.set_ylabel('Studentized Residuals',
fontsize=size['label'])
plt.tight_layout
plt.suptitle(title, fontsize=size['super_title'], y=0.92)
save_fig(title, save)
Explanation: <a id='plot_residuals'></a>
Table of Contents
End of explanation
def predict(x, parameters):
Return predicted values provided regression parameters.
.. note:: StatsModels provides regression coefficients in increasing
order, while NumPy would like to recive them in decreasing order.
This function is designed to recive the StatsModels format,
reverse the dimensionality, and then allow NumPy to perform the
calculation.
:param np.array x: array of input values
:param pd.Series parameters: linear regression coefficients in
order of increasing dimension
:return: predicted target values
:rtype: np.array
p = np.poly1d(parameters.values[::-1])
return p(x)
Explanation: <a id='predict'></a>
Table of Contents
End of explanation
data_dir = osp.realpath(osp.join(os.getcwd(), '..', 'data',
'linear_regression'))
Explanation: <a id='load_data'></a>
Load Data
Table of Contents
End of explanation
auto = pd.read_csv(osp.join(data_dir, 'auto.csv'))
data = auto
data.info()
data.head()
data.describe()
Explanation: <a id='load_auto'></a>
Auto Dataset
Table of Contents
End of explanation
boston = pd.read_csv(osp.join(data_dir, 'boston.csv'), index_col=0)
data = boston
data.info()
data.head()
data.describe()
Explanation: <a id='load_boston'></a>
Boston Dataset
Table of Contents
End of explanation
carseats = pd.read_csv(osp.join(data_dir, 'carseats.csv'))
data = carseats
data.info()
data.head()
data.describe()
Explanation: <a id='load_carseats'></a>
Carseats Dataset
Table of Contents
End of explanation
lr = smf.ols(formula='mpg ~ horsepower', data=auto).fit()
lr.summary()
print(f'Horsepower: 98\tMPG:{predict(98, lr.params):.2f}')
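# Optional part: 95% confidence and prediction intervals at horsepower = 98.
# Sketch only, relying on statsmodels' get_prediction/summary_frame API (recent versions);
# the mean_ci_* columns give the confidence interval, the obs_ci_* columns the prediction interval.
pred_98 = lr.get_prediction(pd.DataFrame({'horsepower': [98]}))
pred_98.summary_frame(alpha=0.05)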
Explanation: <a id='exercise_1'></a>
Exercise 1 - Use simple linear regression on the Auto data set.
Use statsmodels or scikit-learn to perform a simple linear regression with
mpg as the response and horsepower as the predictor. Print the results. Comment on the output.
For example:
Is there a relationship between the predictor and the response?
How strong is the relationship between the predictor and the response?
Is the relationship between the predictor and the response positive or negative?
What is the predicted mpg associated with a horsepower of 98? Optional: What are the associated 95% confidence and prediction intervals?
Plot the response and the predictor. Plot the least squares regression line.
Produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.
Table of Contents
1) Use statsmodels or scikit-learn to perform a simple linear regression with mpg as the response and horsepower as the predictor. Print the results. Comment on the output.
End of explanation
plot_regression(data=auto, lin_reg=lr, features=['horsepower'],
target='mpg')
Explanation: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and Horsepower.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.6, which implies poor accuracy.
In the next cell the relationship model appears to have the following form. $$y = 1/x$$
The T-statistic has a mixed response for this model.
The intercept coefficient appears to be well suited, but the Horsepower coefficient is too imprecise to determine the effect on the response.
2) Plot the response and the predictor. Plot the least squares regression line.
End of explanation
plot_residuals(lr)
Explanation: 3) Produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.
End of explanation
numeric = auto.select_dtypes(include=[np.int, np.float])
correlation_grid(numeric, bins=10)
Explanation: <a id='exercise_2'></a>
Exercise 2 - Use multiple linear regression on the Auto data set.
Produce a scatterplot matrix which includes all of the variables
in the data set.
Compute the matrix of correlations between the variables using
the function corr(). Plot a matrix correlation heatmap as well.
Perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Print the results. Comment on the output. For instance:
Is there a relationship between the predictors and the response?
Which predictors appear to have a statistically significant relationship to the response?
What does the coefficient for the year variable suggest?
Produce diagnostic plots of the linear
regression fit. Comment on any problems you see with the fit.
Do the residual plots suggest any unusually large outliers? Does
the leverage plot identify any observations with unusually high
leverage?
Use the - and + symbols to fit linear regression models with
interaction effects. Do any interactions appear to be statistically
significant?
Try a few different transformations of the variables, such as
$log(X)$, $\sqrt{X}$, $X^2$. Comment on your findings.
Table of Contents
1) Produce a scatterplot matrix which includes all of the variables in the data set.
End of explanation
auto.corr()
correlation_heatmap(auto)
Explanation: 2) Compute the matrix of correlations between the variables using the function corr(). Plot a matrix correlation heatmap as well.
End of explanation
features = numeric.drop('mpg', axis=1)
features = ' + '.join(features.columns)
lr = smf.ols(formula=f'mpg ~ {features}', data=auto).fit()
lr.summary()
Explanation: 3) Perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Print the results. Comment on the output.
End of explanation
plot_residuals(lr)
auto.loc[10:15, :]
numeric.loc[13, :] / numeric.max()
Explanation: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and at least one of the features.
This is confirmed by the P-values. Features with a relationship to MPG are the following:
Displacement
Weight
Year
Origin
The model has an $R^2$ value of 0.821
This will have to be improved for the model to be used in a predictive manner.
The Year coefficient states the relationship to MPG is $$\frac{0.75\ MPG}{1 \ Year}$$
4) Produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?
End of explanation
features = numeric.drop('mpg', axis=1)
interactions = [f'{x[0]} * {x[1]}' for x
in itertools.combinations(features.columns, 2)]
interactions = ' + '.join(interactions)
inter_formula = f'mpg ~ {" + ".join(features)} + {interactions}'
new_lr = smf.ols(formula=inter_formula, data=auto).fit()
new_lr.summary()
Explanation: Findings
The residules are fair for this model, but not great.
Some outliers exist in this dataset.
Observation 13 has a large influence on the data.
Displacement: 100 percentile
Horsepower: 97.8 percentile
Year: 85.3 percentile
5) Use the - and + symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?
End of explanation
plot_residuals(new_lr)
Explanation: Findings
The F-statistic is not equal to zero, so a relationship exists between MPG and at least one of the features.
This is confirmed by the P-values. Features with a relationship to MPG are the following:
Acceleration
Displacement
Origin
Acceleration + Origin
Acceleration + Year
Displacement + Year
Acceleration appears multiple times and also masks the weight feature.
The model has an $R^2$ value of 0.881
This will have to be improved for the model to be used in a predictive manner.
End of explanation
transformations = OrderedDict()
for trans in ('', 'np.log', 'np.sqrt', 'np.square'):
model = smf.ols(formula=f'mpg ~ {trans}(horsepower)',
data=auto).fit()
transformations[trans] = model.rsquared
fig = sm.qqplot(model.resid)
if not trans:
trans = 'Native'
plt.title(trans, fontsize=size['title']);
plt.show();
transformations
Explanation: 6) Try a few different transformations of the variables, such as $log(X)$, $\sqrt{X}$, $X^2$. Comment on your findings.
End of explanation
trans_lr = smf.ols(formula=(
'mpg ~ cylinders + np.log(displacement) + horsepower'
'+ weight + acceleration - year + origin - 1'), data=auto).fit()
trans_lr.summary()
Explanation: Findings
The log(horsepower) has the most linear fit of the models tested.
End of explanation
plot_residuals(trans_lr)
Explanation: Findings
From the correlation scatter plot I would like to transform displacement, horsepower and weight to be inverse relationships, but it appears StatsModels patsy does not support this transformation.
Converting displacement to be a logarithmic relationship was the only positive effect on the residuals I was able to identify.
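For what it is worth, patsy's I() operator may offer a workaround for inverse terms, since it evaluates plain Python arithmetic inside a formula (a sketch only, not verified against this dataset):
lr_inv = smf.ols(formula='mpg ~ I(1 / horsepower)', data=auto).fit()
lr_inv.rsquared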
End of explanation
features = ' + '.join(['Price', 'Urban', 'US'])
lr = smf.ols(formula=f'Sales ~ {features}', data=carseats).fit()
lr.summary()
Explanation: <a id='exercise_3'></a>
Exercise 3 - Use multiple regression using the Carseats data set.
Fit a multiple regression model to predict Sales using Price,
Urban, and US.
Provide an interpretation of each coefficient in the model. Be
careful—some of the variables in the model are qualitative!
Write out the model in equation form, being careful to handle
the qualitative variables properly.
For which of the predictors can you reject the null hypothesis
H: β = 0?
On the basis of your response to the previous question, fit a
smaller model that only uses the predictors for which there is
evidence of association with the outcome.
How well do the models in (1) and (5) fit the data?
Using the model from (5), obtain 95% confidence intervals for
the coefficient(s).
Is there evidence of outliers or high leverage observations in the
model from (5)?
Table of Contents
1) Fit a multiple regression model to predict Sales using Price, Urban, and US.
End of explanation
lr = smf.ols(formula='Sales ~ Price + US', data=carseats).fit()
lr.summary()
Explanation: 2) Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative!
Price: each one-unit increase in Price is associated with a decrease of about 0.0545 in Sales, holding the other predictors fixed.
Urban: urban stores average about 0.0219 lower Sales than non-urban stores, although this effect is not statistically significant.
US: stores located in the US average about 1.2 higher Sales than stores outside the US.
3) Write out the model in equation form, being careful to handle the qualitative variables properly.
$$Sales = 13.0435 - 0.0545\ Price - 0.0219\ Urban + 1.2006\ US$$
where Urban = 1 if the store is in an urban location (0 otherwise) and US = 1 if the store is in the US (0 otherwise).
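As a quick numerical illustration (a hypothetical store with Price = 120, urban location, in the US):
# illustrative only: plug the hypothetical values into the fitted equation
13.0435 - 0.0545 * 120 - 0.0219 * 1 + 1.2006 * 1  # roughly 7.68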
4) For which of the predictors can you reject the null hypothesis H: β = 0?
$H_0$: There is no relationship between X and Y.
Features with P-values low enough to reject $H_0$:
Price
US
5) On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.
End of explanation
(lr.conf_int(0.05)
.rename(columns={0: 'Lower', 1: 'Upper'}))
Explanation: 6) How well do the models in (1) and (5) fit the data ?
Neither model is very good with $R^2$ values of 0.239.
7) Using the model from (5), obtain 95% confidence intervals for the coefficient(s).
End of explanation
plot_residuals(lr)
Explanation: 8) Is there evidence of outliers or high leverage observations in the model from (5) ?
End of explanation
np.random.seed(1)
x = np.random.randn(100)
y = 2 * x + np.random.randn(100)
data = pd.DataFrame(np.c_[x, y], columns=['x', 'y'])
data.info()
data.head()
correlation_grid(data, title='Random')
correlation_heatmap(data, title='Random')
Explanation: Findings
There are Studentized Residuals outside the range {-2, 2}, indicating the presence of outliers.
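A quick way to list those points, using the model from (5) above (a sketch; it relies on statsmodels' get_influence):
infl = lr.get_influence()
student_resid = pd.Series(infl.resid_studentized_external, index=carseats.index)
student_resid[student_resid.abs() > 2]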
<a id='exercise_4'></a>
Exercise 4 - Investigate the t-statistic for the null hypothesis.
In this problem we will investigate the t-statistic for the null hypothesis
H: β = 0 in simple linear regression without an intercept. To
begin, we generate a predictor x and a response y as follows.
import numpy as np
np.random.seed(1)
x = np.random.randn(100)
y = 2 * x + np.random.randn(100)
Perform a simple linear regression of y onto x, without an intercept.
Report the coefficient estimate β, the standard error of
this coefficient estimate, and the t-statistic and p-value associated
with the null hypothesis H: β = 0. Comment on these
results. (You can perform regression without an intercept)
Now perform a simple linear regression of x onto y without an
intercept, and report the coefficient estimate, its standard error,
and the corresponding t-statistic and p-values associated with
the null hypothesis H: β = 0. Comment on these results.
What is the relationship between the results obtained in (1) and
(2)?
For the regression of Y onto X without an intercept, the t-statistic for H0:β=0 takes the form β^/SE(β^), where β^ is given by (3.38), and where
$$SE(\hat{\beta}) = \sqrt{\frac{\sum_{i=1}^n(y_i - x_i\hat{\beta})^2}{(n - 1)\sum_{i=1}^nx_i^2}}$$
Confirm numerically in Python, that the t-statistic can be written as
$$\frac{\sqrt{n - 1}\sum_{i=1}^nx_iy_i}{\sqrt{(\sum_{i=1}^nx_i^2)(\sum_{i=1}^ny_i^2) - (\sum_{i=1}^nx_iy_i)^2}}$$
5 . Using the results from (4), argue that the t-statistic for the regression of y onto x is the same t-statistic for the regression of x onto y.
6 . In Python, show that when regression is performed with an intercept, the t-statistic for H0:β1=0 is the same for the regression of y onto x as it is the regression of x onto y.
Table of Contents
End of explanation
lr = smf.ols(formula='y ~ x - 1', data=data).fit()
lr.summary()
lr = smf.OLS(y, x).fit()
lr.summary()
Explanation: 1) Perform a simple linear regression of y onto x, without an intercept. Report the coefficient estimate β, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis H0. Comment on these results.
End of explanation
n = x.size
((np.sqrt(n - 1) * np.sum(x * y))
/ np.sqrt(np.sum(x**2) * np.sum(y**2) - np.sum(x * y)**2))
Explanation: Findings
Coefficient estimate: 2.1067
Coefficient estimate standard error: 0.106
The F-statistic is not equal to zero, so a relationship exists between y and x.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.798, which is fair.
The T-statistic is large indicating the x coefficient estimate is both large and precise.
2) Now perform a simple linear regression of x onto y, without an intercept. Report the coefficient estimate β, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis H0. Comment on these results.
Findings
Coefficient estimate: 0.3789
Coefficient estimate standard error: 0.019
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.798, which is fair.
The T-statistic is large indicating the y coefficient estimate is both large and precise.
3) What is the relationship between the results obtained in (1) and (2)?
Both regressions are just rearrangements of the same fit.
4) For the regression of Y onto X without an intercept, the t-statistic for H0:β=0 takes the form β^/SE(β^), where β^ is given by (3.38), and where
$$SE(\hat{\beta}) = \sqrt{\frac{\sum_{i=1}^n(y_i - x_i\hat{\beta})^2}{(n - 1)\sum_{i=1}^nx_i^2}}$$
Show algebraically, and confirm numerically in Python, that the t-statistic can be written as
$$\frac{\sqrt{n - 1}\sum_{i=1}^nx_iy_i}{\sqrt{(\sum_{i=1}^nx_i^2)(\sum_{i=1}^ny_i^2) - (\sum_{i=1}^nx_iy_i)^2}}$$
We have
$$t = \frac{\sum_ix_iy_i/\sum_jx_j^2}{\sqrt{\sum_i(y_i - x_i\hat{\beta})^2/\big((n - 1)\sum_jx_j^2\big)}} = \frac{\sqrt{n - 1}\sum_ix_iy_i}{\sqrt{\sum_jx_j^2\sum_i(y_i - x_i\sum_jx_jy_j/\sum_jx_j^2)^2}} = \frac{\sqrt{n - 1}\sum_ix_iy_i}{\sqrt{(\sum_jx_j^2)(\sum_jy_j^2) - (\sum_jx_jy_j)^2}}$$
Now let’s verify this result numerically.
End of explanation
lr = smf.ols(formula='y ~ x', data=data).fit()
lr.summary()
lr = smf.ols(formula='x ~ y', data=data).fit()
lr.summary()
Explanation: This matches the calculated value above
5) Using the results from (4), argue that the t-statistic for the regression of y onto x is the same t-statistic for the regression of x onto y.
The commutative property of multiplication states that two numbers can be multiplied in either order.
Inverting the response and feature will not alter the equation's outcome.
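Concretely, the expression derived in (4),
$$t = \frac{\sqrt{n - 1}\sum_{i=1}^nx_iy_i}{\sqrt{(\sum_{i=1}^nx_i^2)(\sum_{i=1}^ny_i^2) - (\sum_{i=1}^nx_iy_i)^2}},$$
is unchanged when every $x_i$ is swapped with the corresponding $y_i$, so regressing y onto x and x onto y must yield the same t-statistic.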
6) In Python, show that when regression is performed with an intercept, the t-statistic for H0:β1=0 is the same for the regression of y onto x as it is for the regression of x onto y.
End of explanation
np.random.seed(1)
x = np.random.randn(100)
y = 2 * x + np.random.randn(100)
assert np.sum(x**2) != np.sum(y**2)
Explanation: In both cases above the T-statistic is 19.783.
<a id='exercise_5'></a>
Exercise 5 - Explore linear regression without an intercept.
Recall that the coefficient estimate β^ for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?
Generate an example in Python with n = 100 observations in which
the coefficient estimate for the regression of X onto Y is different
from the coefficient estimate for the regression of Y onto X.
Generate an example in Python with n = 100 observations in which
the coefficient estimate for the regression of X onto Y is the
same as the coefficient estimate for the regression of Y onto X.
Table of Contents
1) Recall that the coefficient estimate β^ for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X?
The coefficient estimate for the regression of Y onto X is
$$\hat{\beta} = \frac{\sum_ix_iy_i}{\sum_jx_j^2}$$
The coefficient estimate for the regression of X onto Y is
$$\hat{\beta}' = \frac{\sum_ix_iy_i}{\sum_jy_j^2}$$
The coefficients are the same iff $\sum_jx_j^2 = \sum_jy_j^2$
2) Generate an example in Python with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.
End of explanation
np.random.seed(1)
x = np.arange(100)
y = x.copy()
np.random.shuffle(x)
np.random.shuffle(y)
data = pd.DataFrame(np.c_[x, y], columns=['x', 'y'])
x[:10]
y[:10]
if np.sum(np.square(x)) == np.sum(np.square(y)):
print('The sums of squares are equal.')
lr = smf.ols(formula='y ~ x', data=data).fit()
lr.summary()
lr = smf.ols(formula='x ~ y', data=data).fit()
lr.summary()
Explanation: 3) Generate an example in Python with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.
End of explanation
np.random.seed(1)
x = 1 * np.random.randn(100) + 0
Explanation: <a id='exercise_6'></a>
Exercise 6 - Explore linear regression with simulated data.
In this exercise you will create some simulated data and will fit simple
linear regression models to it. Make sure to set the seed prior to
starting part (1) to ensure consistent results.
Create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.
Create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution i.e. a normal distribution with mean zero and variance 0.25.
Using x and eps, generate a vector y according to the model
Y = −1 + 0.5X + e
What is the length of the vector y? What are the values of β0 and β1 in this linear model?
Create a scatterplot displaying the relationship between x and
y. Comment on what you observe.
Fit a least squares linear model to predict y using x. Comment
on the model obtained. How do β0 and β1 compare to β0 and
β1?
Display the least squares line on the scatterplot obtained in (4).
Draw the population regression line on the plot, in a different
color. Create an appropriate legend.
Now fit a polynomial regression model that predicts y using x
and x^2. Is there evidence that the quadratic term improves the
model fit? Explain your answer.
Repeat (1)–(6) after modifying the data generation process in
such a way that there is less noise in the data. The model (3.39)
should remain the same. You can do this by decreasing the variance
of the normal distribution used to generate the error term
e in (2). Describe your results.
Repeat (1)–(6) after modifying the data generation process in
such a way that there is more noise in the data. The model
(3.39) should remain the same. You can do this by increasing
the variance of the normal distribution used to generate the
error term in (b). Describe your results.
What are the confidence intervals for β0 and β1 based on the
original data set, the noisier data set, and the less noisy data
set? Comment on your results.
Table of Contents
1) Create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.
End of explanation
eps = 0.25 * np.random.randn(100) + 0
Explanation: 2) Create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution i.e. a normal distribution with mean zero and variance 0.25.
End of explanation
y = -1 + 0.5 * x + eps
data = pd.DataFrame(np.c_[x, y], columns=['x', 'y'])
f'Length of Vector y: {y.shape[0]}'
Explanation: 3) Using x and eps, generate a vector y according to the model
Y = −1 + 0.5X + eps
What is the length of the vector y? What are the values of β0 and β1 in this linear model?
End of explanation
ax = data.plot(kind='scatter', x='x', y='y')
ax.set_title('y vs x', fontsize=size['title'])
ax.set_xlabel('x', fontsize=size['label'])
ax.set_ylabel('y', fontsize=size['label'])
plt.show();
Explanation: β0: y-intercept (-1)
β1: slope of regression line (0.5)
4) Create a scatterplot displaying the relationship between x and y. Comment on what you observe.
End of explanation
lr = smf.ols(formula='y ~ x', data=data).fit()
lr.summary()
Explanation: Findings
The relationship has a linear distribution.
5) Fit a least squares linear model to predict y using x. Comment on the model obtained. How do β^0 and β^1 compare to β0 and β1?
End of explanation
exact_x = np.linspace(-2, 2, 100)
exact_y = -1 + 0.5 * exact_x
exact_data = pd.DataFrame(np.c_[exact_x, exact_y],
columns=['x', 'y'])
plot_regression(data, lr, ['x'], 'y', exact_reg=exact_data)
original_confidence = (lr.conf_int(0.05)
.rename(columns={0: 'Lower', 1: 'Upper'}))
Explanation: Findings
predicted intercept is -0.9969 (actual = -1)
predicted x-coefficient is 0.4897 (actual = 0.5)
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.749, which is fair.
The T-statistic is large indicating the y coefficient estimate is both large and precise.
6) Display the least squares line on the scatterplot obtained in (4). Draw the population regression line on the plot, in a different color. Use the legend() function to create an appropriate legend.
End of explanation
lr = smf.ols(formula='y ~ x + np.square(x)', data=data).fit()
lr.summary()
Explanation: 7) Now fit a polynomial regression model that predicts y using x and $x^2$. Is there evidence that the quadratic term improves the model fit ? Explain your answer.
End of explanation
np.random.seed(1)
x = 1 * np.random.randn(100) + 0
eps = 0.05 * np.random.randn(100) + 0
y = -1 + 0.5 * x + eps
data = pd.DataFrame(np.c_[x, y], columns=['x', 'y'])
lr = smf.ols(formula='y ~ x', data=data).fit()
lr.summary()
plot_regression(data, lr, ['x'], 'y', exact_reg=exact_data)
low_error_confidence = (lr.conf_int(0.05)
.rename(columns={0: 'Lower', 1: 'Upper'}))
Explanation: Findings
The model's $R^2$ value only showed a negligible improvement.
Going to a quadratic model is not justified here.
8) Repeat (1)-(6) after modifying the data generation process in such a way that there is less noise in the data. The initial model should remain the same. Describe your results.
End of explanation
lr = smf.ols(formula='y ~ x + np.square(x)', data=data).fit()
lr.summary()
Explanation: Findings
predicted intercept is -0.9926 (actual = -1)
predicted x-coefficient is 0.5048 (actual = 0.5)
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.989, which is very good.
Accurate predictions could be made with this model.
The T-statistic is large indicating the y coefficient estimate is both large and precise.
End of explanation
np.random.seed(1)
x = 1 * np.random.randn(100) + 0
eps = 1 * np.random.randn(100) + 0
y = -1 + 0.5 * x + eps
data = pd.DataFrame(np.c_[x, y], columns=['x', 'y'])
lr = smf.ols(formula='y ~ x', data=data).fit()
lr.summary()
Explanation: Findings
The model's $R^2$ value did not improve.
The P-value for the $x^2$ coefficient is large implying this coefficient is not a good estimate.
Going to a quadratic model is not justified here.
9) Repeat (1)-(6) after modifying the data generation process in such a way that there is more noise in the data. The initial model should remain the same. Describe your results.
End of explanation
plot_regression(data, lr, ['x'], 'y', exact_reg=exact_data)
high_error_confidence = (lr.conf_int(0.05)
.rename(columns={0: 'Lower', 1: 'Upper'}))
lr = smf.ols(formula='y ~ x + np.square(x)', data=data).fit()
lr.summary()
Explanation: Findings
The F-statistic is not equal to zero, so a relationship exists between x and y.
This is confirmed by the P-value of zero.
The model has an $R^2$ value of 0.244, which is poor.
This model does not represent the data.
The T-statistics are not large indicating the x coefficient estimate is imprecise.
End of explanation
print('Original Confidence Intervals')
original_confidence
print('Low Error Confidence Intervals')
low_error_confidence
print('High Error Confidence Intervals')
high_error_confidence
Explanation: Findings
The model's $R^2$ value did not improve.
This model does not represent the data.
The P-value for the $x^2$ coefficient is large implying this coefficient is not a good estimate.
Going to a quadratic model is not justified here.
10) What are the confidence intervals for β0 and β1 based on the original data set, the noisier data set, and the less noisy data set ? Comment on your results.
End of explanation
np.random.seed(8)
x1 = np.random.rand(100)
x2 = 0.5 * x1 + np.random.rand(100) / 10
y = 2 + 2 * x1 + 0.3 * x2 + np.random.randn(100)
data = pd.DataFrame(np.c_[x1, x2, y], columns=['x1', 'x2', 'y'])
Explanation: Findings
As expected, the width of the confidence intervals is directly proportional to the amount of error present in the data.
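To make the comparison concrete, the widths of the three sets of intervals computed above can be compared directly:
for label, ci in [('low error', low_error_confidence),
                  ('original', original_confidence),
                  ('high error', high_error_confidence)]:
    print(label, (ci['Upper'] - ci['Lower']).round(3).tolist())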
<a id='exercise_7'></a>
Exercise 7 - Explore the problem of collinearity.
Perform the following commands:
np.random.seed(8)
x1 = np.random.rand(100)
x2 = .5 * x1 + np.random.rand(100) / 10
y = 2 + 2 * x1 + .3 * x2 + np.random.randn(100)
The last line corresponds to creating a linear model in which y is
a function of x1 and x2. Write out the form of the linear model.
What are the regression coefficients?
What is the correlation between x1 and x2? Create a scatterplot
displaying the relationship between the variables.
Using this data, fit a least squares regression to predict y using
x1 and x2. Describe the results obtained. What are β0, β1, and β2? How do these relate to the true β0, β1, and β2? Can you reject the null hypothesis Ho:β1 = 0? How about the null
hypothesis Ho:β2 = 0?
Now fit a least squares regression to predict y using only x1.
Comment on your results. Can you reject the null hypothesis
Ho: β1 = 0?
Now fit a least squares regression to predict y using only x2.
Comment on your results. Can you reject the null hypothesis
Ho: β1 = 0?
Do the results obtained in (3)–(5) contradict each other? Explain
your answer.
Now suppose we obtain one additional observation, which was unfortunately mismeasured.
x1=c(x1 , 0.1)
x2=c(x2 , 0.8)
y=c(y,6)
Re-fit the linear models from (3) to (5) using this new data. What
effect does this new observation have on the each of the models?
In each model, is this observation an outlier? A high-leverage
point? Both? Explain your answers.
Table of Contents
End of explanation
data.corr()
correlation_grid(data)
Explanation: 1) The last line corresponds to creating a linear model in which y is a function of x1 and x2. Write out the form of the linear model. What are the regression coefficients?
y = β0 + β1·x1 + β2·x2 + ε, where β0 = 2, β1 = 2, and β2 = 0.3
2) What is the correlation between x1 and x2? Create a scatterplot displaying the relationship between the variables.
End of explanation
lr = smf.ols(formula='y ~ x1 + x2', data=data).fit()
lr.summary()
Explanation: Findings
The variables x1 and x2 are highly correlated.
3) Using this data, fit a least squares regression to predict y using x1 and x2. Describe the results obtained. What are β0, β1, and β2? How do these relate to the true β0, β1, and β2? Can you reject the null hypothesis Ho:β1 = 0? How about the null hypothesis Ho:β2 = 0?
End of explanation
lr = smf.ols(formula='y ~ x1', data=data).fit()
lr.summary()
Explanation: Findings
The model does not accurately represent the data.
Coefficients:
β0: 1.9615 (actual: 2)
β1: 5.9895 (actual: 2)
β2: -6.3538 (actual: 0.3)
The x1 variable has a P-value of 0 and is statistically significant.
The null hypothesis my be rejected.
The x2 variable has a P-value greater than 0.05 implying it is not statistically significant.
The null hypothesis may be not rejected.
4) Now fit a least squares regression to predict y using only x1. Comment on your results. Can you reject the null hypothesis H0:β1=0 ?
End of explanation
lr = smf.ols(formula='y ~ x2', data=data).fit()
lr.summary()
Explanation: Findings
The model improved, but is still poor.
Based on the F-statistic there is a relationship between x and y.
The null hypothesis may be rejected for x1.
5) Now fit a least squares regression to predict y using only x2. Comment on your results. Can you reject the null hypothesis Ho: β1 = 0?
End of explanation
data.loc[100] = [0.1, 0.8, 6]
lr = smf.ols(formula='y ~ x1 + x2', data=data).fit()
lr.summary()
plot_residuals(lr)
Explanation: Findings
The model is better than the x1 + x2 combined model, not as good as the y = x1 model.
All three of them are poor predictors of the data.
Based on the F-statistic there is a relationship between x and y.
The null hypothesis may be rejected for x2.
6) Do the results obtained in (3)–(5) contradict each other? Explain your answer.
The answers do not contradict each other, but display the interdependence of x1 and x2. Only one of these features should be used in the model and x1 produced better results.
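One way to quantify that interdependence is the variance inflation factor (a sketch; variance_inflation_factor lives in statsmodels.stats.outliers_influence):
from statsmodels.stats.outliers_influence import variance_inflation_factor
X = sm.add_constant(data[['x1', 'x2']])
[variance_inflation_factor(X.values, i) for i in [1, 2]]  # VIF for x1 and x2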
7) Now suppose we obtain one additional observation, which was unfortunately mismeasured.
x1=c(x1, 0.1)
x2=c(x2, 0.8)
y=c(y, 6)
Re-fit the linear models from (3) to (5) using this new data. What
effect does this new observation have on the each of the models?
In each model, is this observation an outlier? A high-leverage
point? Both? Explain your answers.
End of explanation
lr = smf.ols(formula='y ~ x1', data=data).fit()
lr.summary()
plot_residuals(lr)
Explanation: Findings
$R^2$ value decreased
F-statistic value decreased
x1 may no longer reject the null hypothesis
Observation 100 is an outlier
Observation 100 has high leverage
End of explanation
lr = smf.ols(formula='y ~ x2', data=data).fit()
lr.summary()
plot_residuals(lr)
Explanation: Findings
$R^2$ value increased
F-statistic value decreased
x1 may still reject the null hypothesis
Observation 100 is an outlier
Observation 100 has low leverage
End of explanation
correlation_heatmap(boston, 'Boston')
correlation_grid(boston, title='Boston')
single_params = pd.Series()
features = [x for x in boston.columns
if x not in ('crim')]
for feature in features:
print('{}{}\n{}'.format('\n' * 2, '*' * 80, feature))
lr = smf.ols(formula=f'crim ~ {feature}', data=boston).fit()
lr.summary()
single_params.loc[feature] = lr.params.loc[feature]
boston.plot(kind='scatter', x='crim', y=feature)
Explanation: Findings
$R^2$ value increased
F-statistic value increased
x2 may still reject the null hypothesis
Observation 100 is not an outlier
Observation 100 has high leverage
With the erroneous data entry, the feature x2 is now preferred over x1.
<a id='exercise_8'></a>
Exercise 8 - Predict per capita crime rate.
This problem involves the Boston data set. We will now try to predict per capita crime rate
using the other variables in this data set. In other words, per capita
crime rate is the response, and the other variables are the predictors.
For each predictor, fit a simple linear regression model to predict
the response. Describe your results. In which of the models is
there a statistically significant association between the predictor
and the response? Create some plots to back up your assertions.
Fit a multiple regression model to predict the response using
all of the predictors. Describe your results. For which predictors
can we reject the null hypothesis H: β = 0?
How do your results from (1) compare to your results from (2)?
Create a plot displaying the univariate regression coefficients
from (1) on the x-axis, and the multiple regression coefficients
from (2) on the y-axis. That is, each predictor is displayed as a
single point in the plot. Its coefficient in a simple linear regression
model is shown on the x-axis, and its coefficient estimate
in the multiple linear regression model is shown on the y-axis.
Is there evidence of non-linear association between any of the
predictors and the response? To answer this question, for each
predictor X, fit a model of the form
Y = β0 + β1X + β2X^2 + β3X^3 + E.
Table of Contents
1) For each predictor, fit a simple linear regression model to predict the response. Describe your results. In which of the models is there a statistically significant association between the predictor and the response? Create some plots to back up your assertions.
End of explanation
features = ' + '.join([x for x in boston.columns
if x not in ('crim')])
lr = smf.ols(formula=f'crim ~ {features}', data=boston).fit()
lr.summary()
Explanation: Findings
All features except for chas have P-values below 0.05 and have a relationship to crim.
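For reference, the slope p-values from the single-feature fits can be collected into one table (a sketch reusing the feature list defined above):
single_pvals = pd.Series({f: smf.ols(formula=f'crim ~ {f}', data=boston).fit().pvalues[f]
                          for f in features})
single_pvals.sort_values()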
2) Fit a multiple regression model to predict the response using all of the predictors. Describe your results. For which predictors can we reject the null hypothesis H: β = 0?
End of explanation
mult_params = lr.params.iloc[1:]
models = pd.DataFrame({'multiple': mult_params, 'single': single_params})
models
ax = models.plot(kind='scatter', x='multiple', y='single')
ax.set_title('Multiple Regression vs Single Regression',
fontsize=size['title'])
ax.set_xlabel('Multiple', fontsize=size['label'])
ax.set_ylabel('Single', fontsize=size['label'])
plt.show();
Explanation: Findings
The following features were able to reject the null hypothesis and have a relationship to crim.
zn
nox
dis
rad
black
medv
$R^2$ value of 0.454 implies the model is a poor predictor of the data
3) How do your results from (1) compare to your results from (2)? Create a plot displaying the univariate regression coefficients from (1) on the x-axis, and the multiple regression coefficients from (2) on the y-axis. That is, each predictor is displayed as a single point in the plot. Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate in the multiple linear regression model is shown on the y-axis.
End of explanation
features = [x for x in boston.columns
if x not in ('crim')]
for feature in features:
print('{}{}\n{}'.format('\n' * 2, '*' * 80, feature))
model = f'{feature} + np.square({feature}) + np.power({feature}, 3)'
lr = smf.ols(formula=f'crim ~ {model}', data=boston).fit()
lr.summary()
Explanation: 4) Is there evidence of non-linear association between any of the predictors and the response? To answer this question, for each predictor X, fit a model of the form $Y = β0 + β1X + β2X^2 + β3X^3 + E$.
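A compact way to summarize the fits above is to flag, for each predictor, whether its squared or cubed term has p < 0.05 (a sketch; the term names are assumed to match the formula strings):
nonlinear = {}
for feature in features:
    fit = smf.ols(formula=f'crim ~ {feature} + np.square({feature}) + np.power({feature}, 3)',
                  data=boston).fit()
    terms = [f'np.square({feature})', f'np.power({feature}, 3)']
    nonlinear[feature] = bool((fit.pvalues[terms] < 0.05).any())
pd.Series(nonlinear)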
End of explanation |
12,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ASE Analysis
Step1: General
This is Table S8 from the 2015 GTEx paper.
| | Total sites ≥30 reads | Sites ≥30 reads ASE p < 0.005 | Sites ≥30 reads ASE p < 0.005 (%) |
| --- | --- | --- | --- |
| Minimum | 221 | 8 | 1.59% |
| Median | 6383.5 | 389.5 | 5.99% |
| Maximum | 16422 | 1349 | 15.0% |
In the paper they say that "the fraction of significant ASE sites varied widely
across tissues, with a range of 1.7 to 3.7% (median 2.3%)."
Step2: It seems that the fraction of genes we see ASE for agrees with GTEx. We may have a bit
more power from MBASED although our coverage is probably not quite as high.
ASE/eQTL Enrichment | Python Code:
import cPickle
import glob
import gzip
import os
import random
import shutil
import subprocess
import sys
import cdpybio as cpb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
pd.options.mode.chained_assignment = None
import pybedtools as pbt
from scipy.stats import fisher_exact
import scipy.stats as stats
import seaborn as sns
import ciepy
import cardipspy as cpy
%matplotlib inline
%load_ext rpy2.ipython
dy_name = 'ase_analysis'
import socket
if socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':
dy = os.path.join(ciepy.root, 'sandbox', dy_name)
cpy.makedir(dy)
pbt.set_tempdir(dy)
outdir = os.path.join(ciepy.root, 'output', dy_name)
cpy.makedir(outdir)
private_outdir = os.path.join(ciepy.root, 'private_output', dy_name)
cpy.makedir(private_outdir)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rnaseq_metadata.tsv')
meta = pd.read_table(fn, index_col=0)
tg = pd.read_table(cpy.gencode_transcript_gene, index_col=0,
header=None, squeeze=True)
gene_info = pd.read_table(cpy.gencode_gene_info, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'eqtl_input',
'tpm_log_filtered_phe_std_norm_peer_resid.tsv')
exp = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rsem_tpm.tsv')
tpm = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'eqtl_processing', 'eqtls01', 'qvalues.tsv')
qvalues = pd.read_table(fn, index_col=0)
qvalues.columns = ['{}_gene'.format(x) for x in qvalues.columns]
fn = os.path.join(ciepy.root, 'output', 'eqtl_processing', 'eqtls01', 'lead_variants.tsv')
most_sig = pd.read_table(fn, index_col=0)
genes = pbt.BedTool(cpy.gencode_gene_bed)
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_major_allele_freq.tsv')
maj_af = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_p_val_ase.tsv')
ase_pval = pd.read_table(fn, index_col=0)
locus_p = pd.Panel({'major_allele_freq':maj_af, 'p_val_ase':ase_pval})
locus_p = locus_p.swapaxes(0, 2)
snv_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'mbased_snv',
'*_snv.tsv'))
count_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'allele_counts',
'*mbased_input.tsv'))
snv_res = {}
for fn in snv_fns:
snv_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
count_res = {}
for fn in count_fns:
count_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
snv_p = pd.Panel(snv_res)
Explanation: ASE Analysis
End of explanation
frac = []
for k in locus_p.keys():
frac.append(sum(locus_p.ix[k, :, 'p_val_ase'].dropna() < 0.005) /
float(locus_p.ix[k, :, 'p_val_ase'].dropna().shape[0]))
plt.hist(frac)
plt.title('Fraction of genes with ASE: median = {:.2f}'.format(np.median(frac)))
plt.ylabel('Number of samples')
plt.xlabel('Fraction of genes with ASE ($p$ < 0.005)');
frac = []
for k in locus_p.keys():
d = dict(zip(count_res[k]['feature'], count_res[k]['totalFeatureCount']))
t = locus_p[k, :, ['major_allele_freq', 'p_val_ase']].dropna()
t['totalFeatureCount'] = [d[i] for i in t.index]
t = t[t.totalFeatureCount >= 30]
frac.append(sum(t['p_val_ase'] < 0.005) / float(t.shape[0]))
plt.hist(frac)
plt.title('Fraction of genes with ASE (total counts $\geq$ 30): median = {:.2f}'.format(np.median(frac)))
plt.ylabel('Number of samples')
plt.xlabel('Fraction of genes with ASE ($p$ < 0.005)');
Explanation: General
This is Table S8 from the 2015 GTEx paper.
| | Total sites ≥30 reads | Sites ≥30 reads ASE p < 0.005 | Sites ≥30 reads ASE p < 0.005 (%) |
| --- | --- | --- | --- |
| Minimum | 221 | 8 | 1.59% |
| Median | 6383.5 | 389.5 | 5.99% |
| Maximum | 16422 | 1349 | 15.0% |
In the paper they say that "the fraction of significant ASE sites varied widely
across tissues, with a range of 1.7 to 3.7% (median 2.3%)."
End of explanation
df = locus_p.ix[:, :, 'p_val_ase']
df = df[meta[meta.in_eqtl].index]
df = df.ix[set(df.index) & set(qvalues.index)]
s = set(df.index) & set(qvalues[qvalues.perm_sig_gene].index)
ns = set(df.index) & set(qvalues[qvalues.perm_sig_gene == False].index)
t = df.ix[s]
s_s = (t[t.isnull() == False] < 0.005).sum().sum()
s_ns = (t[t.isnull() == False] >= 0.005).sum().sum()
t = df.ix[ns]
ns_s = (t[t.isnull() == False] < 0.005).sum().sum()
ns_ns = (t[t.isnull() == False] >= 0.005).sum().sum()
odds, pval = fisher_exact([[s_s, s_ns], [ns_s, ns_ns]])
print('eQTL genes enriched for ASE with p = {}, odds = {:.2f}'.format(pval, odds))
a = float(s_s) / (s_s + s_ns)
b = float(ns_s) / (ns_s + ns_ns)
print('{:.2f}% of gene expression measurements for eGenes have ASE.'.format(a * 100))
print('{:.2f}% of gene expression measurements for non-eGenes have ASE.'.format(b * 100))
Explanation: It seems that the fraction of genes we see ASE for agrees with GTEx. We may have a bit
more power from MBASED although our coverage is probably not quite as high.
ASE/eQTL Enrichment
End of explanation |
12,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Population rate model of generalized integrate-and-fire neurons
This script simulates a finite network of generalized integrate-and-fire (GIF) neurons directly on the mesoscopic population level using the effective stochastic population rate dynamics derived in the paper [Schwalger et al. PLoS Comput Biol. 2017]. The stochastic population dynamics is implemented in the NEST model gif_pop_psc_exp. We demonstrate this model using the example of a Brunel network of two coupled populations, one excitatory and one inhibitory population.
Note that the population model represents the mesoscopic level description of the corresponding microscopic network based on the NEST model gif_psc_exp.
At first, we load the necessary modules
Step1: Next, we set the parameters of the microscopic model
Step2: Simulation on the mesoscopic level
To directly simulate the mesoscopic population activities (i.e. generating the activity of a finite-size population without simulating single neurons), we can build the populations using the NEST model gif_pop_psc_exp
Step3: To record the instantaneous population rate $\bar A(t)$ we use a multimeter, and to get the population activity $A_N(t)$ we use a spike detector
Step4: All neurons in a given population will be stimulated with a step input current
Step5: We can now start the simulation
Step6: and plot the activity
Step7: Microscopic ("direct") simulation
As mentioned above, the population model gif_pop_psc_exp directly simulates the mesoscopic population activities, i.e. without the need to simulate single neurons. On the other hand, if we want to know single neuron activities, we must simulate on the microscopic level. This is possible by building a corresponding network of gif_psc_exp neuron models
Step8: We want to record all spikes of each population in order to compute the mesoscopic population activities $A_N(t)$ from the microscopic simulation. We also record the membrane potentials of five example neurons
Step9: As before, all neurons in a given population will be stimulated with a step input current. The following code block is identical to the one for the mesoscopic simulation above
Step10: We can now start the microscopic simulation
Step11: Let's retrieve the data of the spike detector and plot the activity of the excitatory population (in Hz)
Step12: This looks similar to the population activity obtained from the mesoscopic simulation based on the NEST model gif_pop_psc_exp (cf. previous figure). Now we retrieve the data of the multimeter, which allows us to look at the membrane potentials of single neurons. Here we plot the voltage traces (in mV) of five example neurons | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import nest
Explanation: Population rate model of generalized integrate-and-fire neurons
This script simulates a finite network of generalized integrate-and-fire (GIF) neurons directly on the mesoscopic population level using the effective stochastic population rate dynamics derived in the paper [Schwalger et al. PLoS Comput Biol. 2017]. The stochastic population dynamics is implemented in the NEST model gif_pop_psc_exp. We demonstrate this model using the example of a Brunel network of two coupled populations, one excitatory and one inhibitory population.
Note that the population model represents the mesoscopic level description of the corresponding microscopic network based on the NEST model gif_psc_exp.
At first, we load the necessary modules:
End of explanation
#all times given in milliseconds
dt=0.5
dt_rec=1.
#Simulation time
t_end=2000.
#Parameters
size = 200
N = np.array([ 4, 1 ]) * size
M = len(N) #number of populations
#neuronal parameters
t_ref = 4. * np.ones(M) #absolute refractory period
tau_m = 20 * np.ones(M) #membrane time constant
mu = 24. * np.ones(M) #constant base current mu=R*(I0+Vrest)
c = 10. * np.ones(M) #base rate of exponential link function
Delta_u = 2.5 * np.ones(M) #softness of exponential link function
V_reset = 0. * np.ones(M) #Reset potential
V_th = 15. * np.ones(M) #baseline threshold (non-accumulating part)
tau_sfa_exc = [100., 1000.] #adaptation time constants of excitatory neurons
tau_sfa_inh = [100., 1000.] #adaptation time constants of inhibitory neurons
J_sfa_exc = [1000.,1000.] #size of feedback kernel theta (= area under exponential) in mV*ms
J_sfa_inh = [1000.,1000.] #in mV*ms
tau_theta = np.array([tau_sfa_exc, tau_sfa_inh])
J_theta = np.array([J_sfa_exc, J_sfa_inh ])
#connectivity
J = 0.3 #excitatory synaptic weight in mV if number of input connections is C0 (see below)
g = 5. #inhibition-to-excitation ratio
pconn = 0.2 * np.ones((M, M))
delay = 1. * np.ones((M, M))
C0 = np.array([[ 800, 200 ], [800, 200]]) * 0.2 #constant reference matrix
C = np.vstack((N,N)) * pconn #numbers of input connections
J_syn = np.array([[ J, -g * J], [J, -g * J]]) * C0 / C #final synaptic weights scaling as 1/C
taus1_ = [3., 6.] #time constants of exc. and inh. post-synaptic currents (PSC's)
taus1 = np.array([taus1_ for k in range(M)])
#step current input
step=[[20.],[20.]] #jump size of mu in mV
tstep=np.array([[1500.],[1500.]]) #times of jumps
#synaptic time constants of excitatory and inhibitory connections
tau_ex = 3. # in ms
tau_in = 6. # in ms
Explanation: Next, we set the parameters of the microscopic model
End of explanation
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
nest.SetKernelStatus({'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0=nest.GetKernelStatus('time')
nest_pops = nest.Create('gif_pop_psc_exp', M)
C_m = 250. # irrelevant value for membrane capacitance, cancels out in simulation
g_L = C_m / tau_m
for i, nest_i in enumerate( nest_pops ):
nest.SetStatus([nest_i], {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'tau_m': tau_m[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
'len_kernel': -1, # -1 triggers automatic history size
'N': N[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.
})
# connect the populations
g_syn = np.ones_like(J_syn) #synaptic conductance
g_syn[:,0] = C_m / tau_ex
g_syn[:,1] = C_m / tau_in
for i, nest_i in enumerate( nest_pops ):
for j, nest_j in enumerate( nest_pops ):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i,j] * g_syn[i,j] * pconn[i,j],
'delay': delay[i,j]} )
nest.Connect( [nest_j], [nest_i], 'all_to_all')
Explanation: Simulation on the mesoscopic level
To directly simulate the mesoscopic population activities (i.e. generating the activity of a finite-size population without simulating single neurons), we can build the populations using the NEST model gif_pop_psc_exp:
End of explanation
# monitor the output using a multimeter, this only records with dt_rec!
nest_mm = nest.Create('multimeter')
nest.SetStatus( nest_mm, {'record_from':['n_events', 'mean'],
'withgid': True,
'withtime': False,
'interval': dt_rec})
nest.Connect(nest_mm, nest_pops, 'all_to_all')
# monitor the output using a spike detector
nest_sd = []
for i, nest_i in enumerate( nest_pops ):
nest_sd.append( nest.Create('spike_detector') )
nest.SetStatus( nest_sd[i], {'withgid': False,
'withtime': True,
'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1.,
'delay': dt} )
nest.Connect( [nest_pops[i]], nest_sd[i], 'all_to_all')
Explanation: To record the instantaneous population rate $\bar A(t)$ we use a multimeter, and to get the population activity $A_N(t)$ we use a spike detector:
End of explanation
#set initial value (at t0+dt) of step current generator to zero
tstep = np.hstack((dt * np.ones((M,1)), tstep))
step = np.hstack((np.zeros((M,1)), step))
# create the step current devices
nest_stepcurrent = nest.Create('step_current_generator', M )
# set the parameters for the step currents
for i in range(M):
nest.SetStatus( [nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] *g_L[i], 'origin': t0, 'stop': t_end})
pop_ = nest_pops[i]
if type(nest_pops[i])==int:
pop_ = [pop_]
nest.Connect( [nest_stepcurrent[i]], pop_, syn_spec={'weight':1.} )
Explanation: All neurons in a given population will be stimulated with a step input current:
End of explanation
local_num_threads = 1
seed=1
msd =local_num_threads * seed + 1 #master seed
nest.SetKernelStatus({'rng_seeds': range(msd, msd + local_num_threads)})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones( (t.size, M) ) * np.nan
Abar = np.ones_like( A_N ) * np.nan
#simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
data_mm = nest.GetStatus( nest_mm )[0]['events']
for i, nest_i in enumerate( nest_pops ):
a_i = data_mm['mean'][ data_mm['senders']==nest_i ]
a = a_i / N[i] / dt
min_len = np.min([len(a), len(Abar)])
Abar[:min_len,i] = a[:min_len]
data_sd = nest.GetStatus(nest_sd[i], keys=['events'])[0][0]['times'] * dt - t0
bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
A_N[:,i]=A
Explanation: We can now start the simulation:
End of explanation
plt.clf()
plt.subplot(2,1,1)
plt.plot(t,A_N*1000) #plot population activities (in Hz)
plt.ylabel(r'$A_N$')
plt.subplot(2,1,2)
plt.plot(t,Abar*1000) #plot instantaneous population rates (in Hz)
plt.ylabel(r'$\bar A$')
Explanation: and plot the activity:
End of explanation
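# (Optional addition, not in the original script) Keep copies of the mesoscopic results so they
# can be compared with the microscopic simulation below, which reuses the same variable names.
t_meso = t.copy()
A_N_meso = A_N.copy()
Abar_meso = Abar.copy()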
nest.ResetKernel()
nest.SetKernelStatus({'resolution': dt, 'print_time': True, 'local_num_threads': 1})
t0=nest.GetKernelStatus('time')
# build a microscopic network of individual gif_psc_exp neurons (one list entry per population)
nest_pops = []
for k in range(M):
nest_pops.append( nest.Create('gif_psc_exp', N[k]) )
# set single neuron properties
for i, nest_i in enumerate( nest_pops ):
nest.SetStatus(nest_i, {
'C_m': C_m,
'I_e': mu[i] * g_L[i],
'lambda_0': c[i], # in Hz!
'Delta_V': Delta_u[i],
'g_L': g_L[i],
'tau_sfa': tau_theta[i],
'q_sfa': J_theta[i] / tau_theta[i], # [J_theta]= mV*ms -> [q_sfa]=mV
'V_T_star': V_th[i],
'V_reset': V_reset[i],
't_ref': t_ref[i],
'tau_syn_ex': max([tau_ex, dt]),
'tau_syn_in': max([tau_in, dt]),
'E_L': 0.,
'V_m': 0.
})
# connect the populations
for i, nest_i in enumerate( nest_pops ):
for j, nest_j in enumerate( nest_pops ):
nest.SetDefaults('static_synapse', {
'weight': J_syn[i,j] * g_syn[i,j],
'delay': delay[i,j]} )
if np.allclose( pconn[i,j], 1. ):
conn_spec = {'rule': 'all_to_all'}
else:
conn_spec = {'rule': 'fixed_indegree', 'indegree': int(pconn[i,j] * N[j])}
nest.Connect( nest_j, nest_i, conn_spec )
Explanation: Microscopic ("direct") simulation
As mentioned above, the population model gif_pop_psc_exp directly simulates the mesoscopic population activities, i.e. without the need to simulate single neurons. On the other hand, if we want to know single neuron activities, we must simulate on the microscopic level. This is possible by building a corresponding network of gif_psc_exp neuron models:
End of explanation
# monitor the output using a multimeter and a spike detector
nest_sd = []
for i, nest_i in enumerate(nest_pops ):
nest_sd.append( nest.Create('spike_detector') )
nest.SetStatus(nest_sd[i], {'withgid': False,
'withtime': True, 'time_in_steps': True})
nest.SetDefaults('static_synapse', {'weight': 1., 'delay': dt} )
#record all spikes from population to compute population activity
nest.Connect(nest_pops[i], nest_sd[i], 'all_to_all')
Nrecord=[5,0] #for each population i the first Nrecord[i] neurons are recorded
nest_mm_Vm = []
for i, nest_i in enumerate( nest_pops ):
nest_mm_Vm.append( nest.Create('multimeter') )
nest.SetStatus(nest_mm_Vm[i], {'record_from':['V_m'], \
'withgid': True, 'withtime': True, \
'interval': dt_rec})
nest.Connect(nest_mm_Vm[i], list( np.array(nest_pops[i])[:Nrecord[i]]), 'all_to_all')
Explanation: We want to record all spikes of each population in order to compute the mesoscopic population activities $A_N(t)$ from the microscopic simulation. We also record the membrane potentials of five example neurons:
End of explanation
# create the step current devices if they do not exist already
nest_stepcurrent = nest.Create('step_current_generator', M )
# set the parameters for the step currents
for i in range(M):
nest.SetStatus( [nest_stepcurrent[i]], {
'amplitude_times': tstep[i] + t0,
'amplitude_values': step[i] *g_L[i], 'origin': t0, 'stop': t_end #, 'stop': sim_T + t0
})
pop_ = nest_pops[i]
if type(nest_pops[i])==int:
pop_ = [pop_]
nest.Connect( [nest_stepcurrent[i]], pop_, syn_spec={'weight':1.} )
Explanation: As before, all neurons in a given population will be stimulated with a step input current. The following code block is identical to the one for the mesoscopic simulation above:
End of explanation
local_num_threads = 1
seed=1
msd =local_num_threads * seed + 1 #master seed
nest.SetKernelStatus({'rng_seeds': range(msd, msd + local_num_threads)})
t = np.arange(0., t_end, dt_rec)
A_N = np.ones( (t.size, M) ) * np.nan
#simulate 1 step longer to make sure all t are simulated
nest.Simulate(t_end + dt)
Explanation: We can now start the microscopic simulation:
End of explanation
for i, nest_i in enumerate( nest_pops ):
data_sd = nest.GetStatus(nest_sd[i], keys=['events'])[0][0]['times'] * dt - t0
bins = np.concatenate((t, np.array([t[-1] + dt_rec])))
A = np.histogram(data_sd, bins=bins)[0] / float(N[i]) / dt_rec
A_N[:,i]=A * 1000 #in Hz
t = np.arange(dt,t_end+dt,dt_rec)
plt.plot(t, A_N[:,0])
plt.xlabel('time [ms]')
plt.ylabel('population activity [Hz]')
Explanation: Let's retrieve the data of the spike detector and plot the activity of the excitatory population (in Hz):
End of explanation
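# (Optional comparison sketch) Overlay the excitatory population activity from both levels of
# description. Assumes the copies t_meso/A_N_meso suggested after the mesoscopic run were kept;
# the mesoscopic A_N was stored in kHz (1/ms), hence the factor 1000.
plt.figure()
plt.plot(t, A_N[:, 0], label='microscopic (gif_psc_exp)')
plt.plot(t_meso, A_N_meso[:, 0] * 1000, label='mesoscopic (gif_pop_psc_exp)')
plt.xlabel('time [ms]')
plt.ylabel('population activity [Hz]')
plt.legend()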
voltage=[]
for i in range(M):
if Nrecord[i]>0:
senders = nest.GetStatus(nest_mm_Vm[i])[0]['events']['senders']
v = nest.GetStatus(nest_mm_Vm[i])[0]['events']['V_m']
voltage.append( np.array([v[np.where(senders==j)] for j in set(senders)]) )
else:
voltage.append(np.array([]))
f, axarr = plt.subplots(Nrecord[0], sharex=True)
for i in range(Nrecord[0]):
axarr[i].plot(voltage[0][i])
axarr[i].set_yticks((0,15,30))
axarr[i].set_xlabel('time [ms]')
Explanation: This looks similar to the population activity obtained from the mesoscopic simulation based on the NEST model gif_pop_psc_exp (cf. previous figure). Now we retrieve the data of the multimeter, which allows us to look at the membrane potentials of single neurons. Here we plot the voltage traces (in mV) of five example neurons:
End of explanation |
12,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First approach
Write a function that reads an xyz trajectory file in. We are going to need to be able to separate numbers from atomic symbols; an XYZ trajectory file looks like
Step1: CODING TIME
Step2: DataFrames
People spend a lot of time reading code, especially their own code.
Let's do two things using DataFrames
Step3: Second approach
Step4: One possible solution (run this only if you have already finished the above!)
Step5: Testing your functions is key
A couple of quick tests should suffice...though these barely make the cut...
Step6: Let's attach a meaningful index
This is easy since we know the number of atoms and number of frames...
Step7: CODING TIME
Step8: Saving your work!
We did all of this work parsing our data, but this Python kernel won't be alive eternally so let's save
our data so that we can load it later (i.e. in the next notebook!).
We are going to create an HDF5 store to save our DataFrame(s) to disk.
HDF is a high performance, portable, binary data storage format designed with scientific data exchange in mind. Use it!
Also note that pandas has extensive IO functionality. | Python Code:
def skeleton_naive_xyz_parser(path):
'''
Simple xyz parser.
'''
# Read in file
lines = None
with open(path) as f:
lines = f.readlines()
# Process lines
# ...
# Return processed lines
# ...
return lines
lines = skeleton_naive_xyz_parser(xyz_path)
lines
Explanation: First approach
Write a function that reads an xyz trajectory file in. We are going to need to be able to separate numbers from atomic symbols; an XYZ trajectory file looks like:
nat [unit]
[first frame]
symbol1 x11 y11 z11
symbol2 x21 y21 z21
nat [unit]
[second frame]
symbol1 x12 y12 z12
symbol2 x22 y22 z22
Stuff in [ ] are optional (if units are absent, angstroms are assumed; a blank is included if no comments are present).
Here is an example file parser. All it does is read line by line and return a list of these lines.
End of explanation
%load -s naive_xyz_parser, snippets/parsing.py
data = naive_xyz_parser(xyz_path)
data
Explanation: CODING TIME: Try to expand the skeleton above to convert the line strings
into a list of xyz data rows (i.e. convert the strings to floats).
If you can't figure out any approach, run the cell below which will print one possible (of many) ways of
approaching this problem.
Note that you may have to run "%load" cells twice, once to load the code and once to instantiate the function.
End of explanation
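# (Hedged sketch) One possible way to finish the skeleton yourself; the version loaded from
# snippets/parsing.py may well differ. It splits each line and converts the coordinates to floats,
# keeping only rows that look like "symbol x y z" (nat and comment lines are skipped).
def my_naive_xyz_parser(path):
    symbols, xs, ys, zs = [], [], [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 4:
                try:
                    x, y, z = float(parts[1]), float(parts[2]), float(parts[3])
                except ValueError:
                    continue    # a 4-word comment line, not a coordinate row
                symbols.append(parts[0])
                xs.append(x)
                ys.append(y)
                zs.append(z)
    return symbols, xs, ys, zs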
np.random.seed(1)
df = pd.DataFrame(np.random.randint(0, 10, size=(6, 4)), columns=['A', 'B', 'C', 'D'])
df
df += 1
df
df.loc[:, 'A'] = [0, 0, 1, 1, 2, 2]
df
df.groupby('A')[['B', 'C', 'D']].apply(lambda f: f.sum())
Explanation: DataFrames
People spend a lot of time reading code, especially their own code.
Let's do two things using DataFrames: make our code more readable
and not reinvent the wheel (i.e. parsers). We have pride in the
code we write!
First an example of using DataFrames...
End of explanation
def skeleton_pandas_xyz_parser(path):
'''
Parses xyz files using pandas read_csv function.
'''
# Read from disk
df = pd.read_csv(path, delim_whitespace=True, names=['symbol', 'x', 'y', 'z'])
# Remove nats and comments
# ...
# ...
return df
df = skeleton_pandas_xyz_parser(xyz_path)
df.head()
Explanation: Second approach: pandas.read_csv
Like 99% (my estimate) of all widely established Python packages, pandas is very well
documented.
Let's use this function of pandas to read in our well structured xyz data.
names: specifies column names (and implicitly number of columns)
delim_whitespace: tab or space separated files
CODING TIME: Figure out what options we need to correctly parse in the XYZ trajectory data using pandas.read_csv
End of explanation
%load -s pandas_xyz_parser, snippets/parsing.py
df = pandas_xyz_parser(xyz_path)
df.head()
Explanation: One possible solution (run this only if you have already finished the above!):
End of explanation
print(len(df) == nframe * nat) # Make sure that we have the correct number of rows
print(df.dtypes) # Make sure that each column's type is correct
Explanation: Testing your functions is key
A couple of quick tests should suffice...though these barely make the cut...
End of explanation
df = pandas_xyz_parser(xyz_path)
df.index = pd.MultiIndex.from_product((range(nframe), range(nat)), names=['frame', 'atom'])
df
Explanation: Let's attach a meaningful index
This is easy since we know the number of atoms and number of frames...
End of explanation
%load -s parse, snippets/parsing.py
Explanation: CODING TIME: Put parsing and indexing together into a single function.
End of explanation
xyz = parse(xyz_path, nframe, nat)
store = pd.HDFStore('xyz.hdf5', mode='w')
store.put('xyz', xyz)
store.close()
Explanation: Saving your work!
We did all of this work parsing our data, but this Python kernel won't be alive eternally so let's save
our data so that we can load it later (i.e. in the next notebook!).
We are going to create an HDF5 store to save our DataFrame(s) to disk.
HDF is a high performance, portable, binary data storage format designed with scientific data exchange in mind. Use it!
Also note that pandas has extensive IO functionality.
End of explanation |
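# Reading the data back in a later session (sketch): reopen the store, or use the read_hdf shortcut.
store = pd.HDFStore('xyz.hdf5', mode='r')
xyz_back = store['xyz']
store.close()
# equivalently: xyz_back = pd.read_hdf('xyz.hdf5', 'xyz')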
12,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 14</font>
Download
Step1: Web Scraping | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 14</font>
Download: http://github.com/dsacademybr
End of explanation
# Library used to request a page from a web site
import urllib.request
# Define the url
# Check the permissions at https://www.python.org/robots.txt
with urllib.request.urlopen("https://www.python.org") as url:
page = url.read()
# Print the content
print(page)
from bs4 import BeautifulSoup
# Parse the html stored in the 'page' variable and keep it in Beautiful Soup format
soup = BeautifulSoup(page, "html.parser")
soup.title
soup.title.string
soup.a
soup.find_all("a")
tables = soup.find('table')
print(tables)
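# Small follow-up sketch: pull the link text and href out of the anchor tags found above.
for link in soup.find_all("a")[:10]: # first 10 links only
    print(link.get("href"), "->", link.get_text(strip=True))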
Explanation: Web Scraping
End of explanation |
12,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cython, not CPython
No, the title is not a mistake: today we are going to talk about Cython.
What is Cython?
Cython is two things
Step1: We create a reasonably large square matrix (4 million elements).
Step2: The data is now ready, so let's get to work.
Let's write a Python function that finds the minima as we have defined them.
Step3: Let's see how long this function takes on my machine
Step4: Ouch, three seconds and change on an i7... If I have to find the minima for 500 of these cases it will take almost half an hour.
Just in case, let's try numba and see whether it can solve the problem without much effort; it is very simple Python code in which we do not use anything 'exotic' from the language.
Step5: Ooooops! It looks like the numba magic does not work here.
Let's specify the input and output types (and modify the output) to see whether anything improves
Step6: Apparently not, the result is in the same ballpark. Using the nopython option throws a rather ugly error,...
We will have to keep waiting for numba to mature a bit. In my limited experience I have not yet obtained the effect I was looking for, and in most cases I get very cryptic errors. It is not that I do not trust the people behind it, I am only saying that it is not ready for 'production' yet. This is not meant to be a Cython/numba war; I only used numba to see whether, out of the box, it could improve things a bit. Since it did not, we forget about numba for now.
Cythonizing, gerund and all (take 1).
The simplest and most obvious thing is to use the cython compiler directly and see whether the Python code, as is, gets a bit faster. To do that we will use the magic functions that Cython puts at our disposal in the notebook. For now we will only talk about the %%cython magic, although there are others.
Step7: The %%cython command lets us write Cython code in a cell. Once we run the cell, IPython takes care of grabbing the code, creating a Cython source file with the .pyx extension, compiling it to C and, if everything is correct, importing that file so that everything is available inside the notebook.
[ASIDE] we can pass a number of arguments to the %%cython magic. We will see some of them in this analysis, but for now let's define one that lets us name the function that is created and compiled on the fly, -n or --name.
Step8: The file is created inside the cython folder located in the directory returned by the get_ipython_cache_dir function. Let's see where the file lives on my machine
Step9: I am not showing it here because the result is more than 2400(!!) lines of C code.
Now let's see how long it takes.
Step10: Well, it looks like without much effort we have gained around 5% - 25% in performance (it will depend on the case). Not a big deal, but Cython is capable of much more...
Cythonizing, gerund and all (take 2).
In this part we introduce one of the keywords Cython adds to extend Python, cdef. The cdef keyword is used to statically 'type' variables in Cython (later we will see that it is also used to define functions). For example
Step11: What a disappointment... We have not gained much, we have slightly longer code and we are worse off than in take 1.
In reality we are still using Python objects such as lists (not a pure C/C++ type, although Cython declares it as a pointer to some Python struct) or numpy arrays, and we have not defined the input and output variables.
[ASIDE] When a Python type and a C type share the same name (for example, int), the C one wins (because that is what we want, right?).
Cythonizing, gerund and all (take 3).
In Cython there are three kinds of functions: those defined in the Python space with def, those defined in the C space with cdef (yes, the same keyword we use to declare types) and those defined in both spaces with cpdef.
def
Step12: Well, we are still not very happy with these results.
We still have not defined the type of the input value.
The %%cython magic offers a number of features, among them -a or --annotate (in addition to the -n or --name we have already seen). If we pass this parameter we get a colour-coded view of the code marking the slowest parts (darker yellow) and the most optimised ones (lighter), up to C speed (white). Let's use it to find out where our bottlenecks are (applied to the latest version of the code)
Step13: The if looks like the slowest part. We are using the input value, which has no Cython type defined.
The loops seem to be optimised (the loop variables have been declared as unsigned int).
But everything the numpy array passes through does not look very optimised...
Cythonizing, gerund and all (take 4).
Right now, by doing import numpy as np we have access to the Python functionality of numpy. To access the C functionality of numpy we have to cimport numpy.
The cimport is used to import special information about the numpy module at compile time. This information lives in the numpy.pxd file, which is part of the Cython distribution. cimport is also used to be able to import from the C stdlib.
Let's use this to declare the type of the numpy array.
Step14: Wow!!! We have just obtained a speed-up of around 25x to 30x.
Let's check that the result is the same as the original function
Step15: It looks like it is
Step16: We can see that many of the dark parts are now lighter!!! But it looks like there is still room for improvement.
Cythonizing, gerund and all (take 5).
Let's see whether defining the type of the function result as a numpy array instead of a tuple brings any improvement
Step17: Hmm, compared with the previous version we only gain about 2% - 4%.
Cythonizing, gerund and all (take 6).
Let's stop using lists and use empty numpy arrays that we keep 'filling' with numpy.append. Let's see whether using numpy arrays everywhere gives us any kind of improvement
Step18: Actually, in the previous piece of code I am doing something very inefficient. The numpy.append function does not work like a list you keep appending elements to. What we are really doing is making copies of the existing array to turn it into a new array with one extra element. That is not what we intended!!!!
Cythonizing, gerund and all (take 7).
Python has efficient arrays for numeric values (so the documentation says) that can also be used the way I am using the lists in my function (empty arrays we keep appending elements to). Let's use them with Cython.
Step19: It looks like we have gained another 25% - 30% over the most efficient version we had so far. Compared with the initial pure Python implementation we are 30x - 35x faster than the initial speed.
Let's check that we still get the same results.
Step20: What happens if the size of the array grows?
Step21: It seems that as the size of the input data grows the numbers stay consistent and the performance holds. In this particular case we have already reached speed-ups of more than 35x(!!) with respect to the initial implementation.
Cythonizing, gerund and all (take 8).
We can use compiler directives that help the compiler decide what to do. Among them there is an option, boundscheck, which skips checking for possible IndexError, assuming the code is free of such indexing errors. We will use it together with wraparound. The latter option disables indexing relative to the end of the iterable (for example, my_iterable[-1]). In this particular case the second option does not add any performance improvement, but we leave it in since we have tried it.
Step22: It looks like we have managed to scrape out a little more performance.
Cythonizing, gerund and all (take 9).
Instead of numpy arrays we are going to use memoryviews. Memoryviews are fast-access arrays. If we only want to store things and do not need any of the features of a numpy array they can be a good solution. If we need some extra functionality we can always convert them into a numpy array using numpy.asarray.
Step23: It looks like the performance is virtually the same as what we already had, so it seems we are on par.
Bonus track
I am going to try pypy (2.4 (CPython 2.7)) together with numpypy to see what we get.
Step24: The last value of the previous output is the average time after repeating the computation 100 times.
Wow!! It looks like without any modifications the result is 10x - 15x faster than the one obtained with the initial function. And it is only 3.5x slower than what we achieved with Cython.
Summary of results.
Let's look at the complete results in a brief summary. First the timings of the different versions of the busca_min_xxx function
Step25: In the previous chart, the first bar corresponds to the starting function (busca_min). Remember that the pypy version took about 0.38 seconds.
And now let's compare the timings of busca_min (the original version) and the latest cython version we created, busca_min_cython9, using different sizes of the input matrix | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Cython, not CPython
No, the title is not a mistake: today we are going to talk about Cython.
What is Cython?
Cython is two things:
On the one hand, Cython is a programming language (a superset of Python) that joins Python with the static type system of C and C++.
On the other hand, cython is a compiler that translates source code written in Cython into efficient C or C++ code. The resulting code can be used as a Python extension or as an executable.
Wow! How does that sound?
The idea is, basically, to take advantage of the strengths of Python and C, combining a simple syntax with power and speed.
With a few exceptions, Python code (both Python 2 and Python 3) is valid Cython code. In addition, Cython adds a series of keywords so that the C type system can be used with Python and the cython compiler can generate efficient C code.
But, who actually uses Cython?
Well, you may not know it, but you are probably using Cython every day. Sage has almost half a million lines of Cython (no small thing), Scipy and Pandas more than 20000, scikit-learn around 15000,...
Shall we get down to business?
The main idea of this first approach to Cython is to start from a piece of Python code that is our bottleneck and keep creating versions of it that are faster and faster, or at least we will try.
For example, imagine we have to detect local minima within a grid. A minimum is simply a value lower than the values at the 8 nodes of its immediate neighbourhood. In the following picture, the green node is a node holding a minimum, and everything around it has higher values:
<table>
<tr>
<td style="background:red">(2, 0)</td>
<td style="background:red">(2, 1)</td>
<td style="background:red">(2, 2)</td>
</tr>
<tr>
<td style="background:red">(1, 0)</td>
<td style="background:green">(1. 1)</td>
<td style="background:red">(1, 2)</td>
</tr>
<tr>
<td style="background:red">(0, 0)</td>
<td style="background:red">(0, 1)</td>
<td style="background:red">(0, 2)</td>
</tr>
</table>
[ASIDE] The numbers and percentages you see below may vary slightly depending on the machine where this is run. Take the values as approximate.
Setup
As always, we import a few libraries before we start writing code:
End of explanation
np.random.seed(0)
data = np.random.randn(2000, 2000)
Explanation: We create a reasonably large square matrix (4 million elements).
End of explanation
def busca_min(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
Explanation: The data is now ready, so let's get to work.
Let's write a Python function that finds the minima as we have defined them.
End of explanation
%timeit busca_min(data)
Explanation: Let's see how long this function takes on my machine:
End of explanation
from numba import jit
@jit
def busca_min_numba(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_numba(data)
Explanation: Ouch, three seconds and change on an i7... If I have to find the minima for 500 of these cases it will take almost half an hour.
Just in case, let's try numba and see whether it can solve the problem without much effort; it is very simple Python code in which we do not use anything 'exotic' from the language.
End of explanation
from numba import jit
from numba import int32, float64
@jit(int32[:,:](float64[:,:]))
def busca_min_numba(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array([minimosx, minimosy], dtype = np.int32)
%timeit busca_min_numba(data)
Explanation: Ooooops! It looks like the numba magic does not work here.
Let's specify the input and output types (and modify the output) to see whether anything improves:
End of explanation
# antes cythonmagic
%load_ext Cython
Explanation: Apparently not, the result is in the same ballpark. Using the nopython option throws a rather ugly error,...
We will have to keep waiting for numba to mature a bit. In my limited experience I have not yet obtained the effect I was looking for, and in most cases I get very cryptic errors. It is not that I do not trust the people behind it, I am only saying that it is not ready for 'production' yet. This is not meant to be a Cython/numba war; I only used numba to see whether, out of the box, it could improve things a bit. Since it did not, we forget about numba for now.
Cythonizing, gerund and all (take 1).
The simplest and most obvious thing is to use the cython compiler directly and see whether the Python code, as is, gets a bit faster. To do that we will use the magic functions that Cython puts at our disposal in the notebook. For now we will only talk about the %%cython magic, although there are others.
End of explanation
%%cython --name probandocython1
import numpy as np
def busca_min_cython1(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
Explanation: The %%cython command lets us write Cython code in a cell. Once we run the cell, IPython takes care of grabbing the code, creating a Cython source file with the .pyx extension, compiling it to C and, if everything is correct, importing that file so that everything is available inside the notebook.
[ASIDE] we can pass a number of arguments to the %%cython magic. We will see some of them in this analysis, but for now let's define one that lets us name the function that is created and compiled on the fly, -n or --name.
End of explanation
from IPython.utils.path import get_ipython_cache_dir
print(get_ipython_cache_dir() + '/cython/probandocython1.c')
Explanation: The file is created inside the cython folder located in the directory returned by the get_ipython_cache_dir function. Let's see where the file lives on my machine:
End of explanation
%timeit busca_min_cython1(data)
Explanation: I am not showing it here because the result is more than 2400(!!) lines of C code.
Now let's see how long it takes.
End of explanation
%%cython --name probandocython2
import numpy as np
def busca_min_cython2(malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
minimosx = []
minimosy = []
for i in range(1, ii):
for j in range(1, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_cython2(data)
Explanation: Well, it looks like without much effort we have gained around 5% - 25% in performance (it will depend on the case). Not a big deal, but Cython is capable of much more...
Cythonizing, gerund and all (take 2).
In this part we introduce one of the keywords Cython adds to extend Python, cdef. The cdef keyword is used to statically 'type' variables in Cython (later we will see that it is also used to define functions). For example:
Python
cdef int var1, var2
cdef float var3
In the code block above I have created two integer variables, var1 and var2, and one float variable, var3. The types above follow the C nomenclature.
Let's try to use cdef with some of the data types we have inside our function. To start with, it is clear that I have a couple of lists (minimosx and minimosy), we have the loop indices (i and j), and I am going to turn the range parameters into statically typed values (ii and jj):
End of explanation
%%cython --name probandocython3
import numpy as np
cdef tuple cbusca_min_cython3(malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
def busca_min_cython3(malla):
return cbusca_min_cython3(malla)
%timeit busca_min_cython3(data)
Explanation: What a disappointment... We have not gained much, we have slightly longer code and we are worse off than in take 1.
In reality we are still using Python objects such as lists (not a pure C/C++ type, although Cython declares it as a pointer to some Python struct) or numpy arrays, and we have not defined the input and output variables.
[ASIDE] When a Python type and a C type share the same name (for example, int), the C one wins (because that is what we want, right?).
Cythonizing, gerund and all (take 3).
In Cython there are three kinds of functions: those defined in the Python space with def, those defined in the C space with cdef (yes, the same keyword we use to declare types) and those defined in both spaces with cpdef.
def: we have already seen it and it works as expected. Accessible from Python
cdef: It is not accessible from Python, so we will have to wrap it with a Python function to be able to call it.
cpdef: It is accessible from both Python and C, and Cython takes care of creating the 'wrapper' for us. This adds a bit more code and slightly hurts performance.
If we define a function with cdef it should be a function that is used internally within the Cython module we are creating and that does not need to be called from Python.
Let's see an example of the above, defining the function output as a tuple:
End of explanation
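%%cython
# Illustrative sketch (not part of the benchmark): the three kinds of Cython functions side by side.
cdef double _c_only(double x): # C-level only; cannot be called from Python
    return 2.0 * x
cpdef double c_and_python(double x): # callable from both C and Python (wrapper generated)
    return _c_only(x)
def python_only(x): # a normal Python-level function
    return c_and_python(x)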
%%cython --annotate
import numpy as np
cdef tuple cbusca_min_cython3(malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
def busca_min_cython3(malla):
return cbusca_min_cython3(malla)
Explanation: Well, we are still not very happy with these results.
We still have not defined the type of the input value.
The %%cython magic offers a number of features, among them -a or --annotate (in addition to the -n or --name we have already seen). If we pass this parameter we get a colour-coded view of the code marking the slowest parts (darker yellow) and the most optimised ones (lighter), up to C speed (white). Let's use it to find out where our bottlenecks are (applied to the latest version of the code):
End of explanation
%%cython --name probandocython4
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_cython4(data)
Explanation: The if looks like the slowest part. We are using the input value, which has no Cython type defined.
The loops seem to be optimised (the loop variables have been declared as unsigned int).
But everything the numpy array passes through does not look very optimised...
Cythonizing, gerund and all (take 4).
Right now, by doing import numpy as np we have access to the Python functionality of numpy. To access the C functionality of numpy we have to cimport numpy.
The cimport is used to import special information about the numpy module at compile time. This information lives in the numpy.pxd file, which is part of the Cython distribution. cimport is also used to be able to import from the C stdlib.
Let's use this to declare the type of the numpy array.
End of explanation
a, b = busca_min(data)
print(a)
print(b)
aa, bb = busca_min_cython4(data)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb))
Explanation: Wow!!! We have just obtained a speed-up of around 25x to 30x.
Let's check that the result is the same as the original function:
End of explanation
%%cython --annotate
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
Explanation: It looks like it is :-)
Let's see whether we have left most of the previous code white, or at least lighter, using --annotate.
End of explanation
%%cython --name probandocython5
import numpy as np
cimport numpy as np
cpdef np.ndarray[int, ndim = 2] busca_min_cython5(np.ndarray[double, ndim = 2] malla):
cdef list minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = []
minimosy = []
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array([minimosx, minimosy])
%timeit busca_min_cython5(data)
Explanation: We can see that many of the dark parts are now lighter!!! But it looks like there is still room for improvement.
Cythonizing, gerund and all (take 5).
Let's see whether defining the type of the function result as a numpy array instead of a tuple brings any improvement:
End of explanation
%%cython --name probandocython6
import numpy as np
cimport numpy as np
cpdef tuple busca_min_cython6(np.ndarray[double, ndim = 2] malla):
cdef np.ndarray[long, ndim = 1] minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = np.array([], dtype = np.int)
minimosy = np.array([], dtype = np.int)
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
np.append(minimosx, i)
np.append(minimosy, j)
return minimosx, minimosy
%timeit busca_min_cython6(data)
np.append?
Explanation: Hmm, compared with the previous version we only gain about 2% - 4%.
Cythonizing, gerund and all (take 6).
Let's stop using lists and use empty numpy arrays that we keep 'filling' with numpy.append. Let's see whether using numpy arrays everywhere gives us any kind of improvement:
End of explanation
%%cython --name probandocython7
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cpdef tuple busca_min_cython7(np.ndarray[double, ndim = 2] malla):
cdef c_array.array minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = array('L', [])
minimosy = array('L', [])
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_cython7(data)
Explanation: Actually, in the previous piece of code I am doing something very inefficient. The numpy.append function does not work like a list you keep appending elements to. What we are really doing is making copies of the existing array to turn it into a new array with one extra element. That is not what we intended!!!!
Cythonizing, gerund and all (take 7).
Python has efficient arrays for numeric values (so the documentation says) that can also be used the way I am using the lists in my function (empty arrays we keep appending elements to). Let's use them with Cython.
End of explanation
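# Quick illustration (sketch) of why the np.append version above was slow: np.append never grows
# the array in place, it allocates and returns a brand-new copy on every single call.
a = np.array([1, 2, 3])
b = np.append(a, 4)
print(a) # [1 2 3]  -> unchanged
print(b) # [1 2 3 4] -> a freshly allocated copy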
a, b = busca_min(data)
print(a)
print(b)
aa, bb = busca_min_cython7(data)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb))
Explanation: It looks like we have gained another 25% - 30% over the most efficient version we had so far. Compared with the initial pure Python implementation we are 30x - 35x faster than the initial speed.
Let's check that we still get the same results.
End of explanation
data2 = np.random.randn(5000, 5000)
%timeit busca_min(data2)
%timeit busca_min_cython7(data2)
a, b = busca_min(data2)
print(a)
print(b)
aa, bb = busca_min_cython7(data2)
print(aa)
print(bb)
print(np.array_equal(a, aa))
print(np.array_equal(b, bb))
Explanation: What happens if the size of the array grows?
End of explanation
%%cython --name probandocython8
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
cpdef tuple busca_min_cython8(np.ndarray[double, ndim = 2] malla):
cdef c_array.array minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
minimosx = array('L', [])
minimosy = array('L', [])
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_cython8(data)
Explanation: It seems that as the size of the input data grows the numbers stay consistent and the performance holds. In this particular case we have already reached speed-ups of more than 35x(!!) with respect to the initial implementation.
Cythonizing, gerund and all (take 8).
We can use compiler directives that help the compiler decide what to do. Among them there is an option, boundscheck, which skips checking for possible IndexError, assuming the code is free of such indexing errors. We will use it together with wraparound. The latter option disables indexing relative to the end of the iterable (for example, my_iterable[-1]). In this particular case the second option does not add any performance improvement, but we leave it in since we have tried it.
End of explanation
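%%cython
# cython: boundscheck=False, wraparound=False
# Hedged sketch (not part of the original benchmark): the same directives used as decorators above
# can also be switched on module-wide with the header comment on the first line of the module.
def sum_view(double[:] v):
    cdef double total = 0.0
    cdef Py_ssize_t i
    for i in range(v.shape[0]):
        total += v[i]
    return total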
%%cython --name probandocython9
import numpy as np
cimport numpy as np
from cpython cimport array as c_array
from array import array
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
#cpdef tuple busca_min_cython9(np.ndarray[double, ndim = 2] malla):
cpdef tuple busca_min_cython9(double [:,:] malla):
cdef c_array.array minimosx, minimosy
cdef unsigned int i, j
cdef unsigned int ii = malla.shape[1]-1
cdef unsigned int jj = malla.shape[0]-1
cdef unsigned int start = 1
#cdef float [:, :] malla_view = malla
minimosx = array('L', [])
minimosy = array('L', [])
for i in range(start, ii):
for j in range(start, jj):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
%timeit busca_min_cython9(data)
Explanation: It looks like we have managed to scrape out a little more performance.
Cythonizing, gerund and all (take 9).
Instead of numpy arrays we are going to use memoryviews. Memoryviews are fast-access arrays. If we only want to store things and do not need any of the features of a numpy array they can be a good solution. If we need some extra functionality we can always convert them into a numpy array using numpy.asarray.
End of explanation
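%%cython
# Hedged sketch of the conversion mentioned above: operate on a typed memoryview and hand the
# result back to Python with numpy.asarray (which wraps the same buffer, no copy involved).
import numpy as np
def double_in_place(double[:, :] m):
    cdef Py_ssize_t i, j
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            m[i, j] = 2.0 * m[i, j]
    return np.asarray(m)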
%%pypy
import numpy as np
import time
np.random.seed(0)
data = np.random.randn(2000,2000)
def busca_min(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
resx, resy = busca_min(data)
print(data)
print(len(resx), len(resy))
print(resx)
print(resy)
t = []
for i in range(100):
t0 = time.time()
busca_min(data)
t1 = time.time() - t0
t.append(t1)
print(sum(t) / 100.)
Explanation: It looks like the performance is virtually the same as what we already had, so it seems we are on par.
Bonus track
I am going to try pypy (2.4 (CPython 2.7)) together with numpypy to see what we get.
End of explanation
funcs = [busca_min, busca_min_numba, busca_min_cython1,
busca_min_cython2, busca_min_cython3,
busca_min_cython4, busca_min_cython5,
busca_min_cython6, busca_min_cython7,
busca_min_cython8, busca_min_cython9]
t = []
for func in funcs:
res = %timeit -o func(data)
t.append(res.best)
index = np.arange(len(t))
plt.figure(figsize = (12, 6))
plt.bar(index, t)
plt.xticks(index + 0.4, [func.__name__[9:] for func in funcs])
plt.tight_layout()
Explanation: The last value of the previous output is the average time after repeating the computation 100 times.
Wow!! It looks like without any modifications the result is 10x - 15x faster than the one obtained with the initial function. And it is only 3.5x slower than what we achieved with Cython.
Summary of results.
Let's look at the complete results in a brief summary. First the timings of the different versions of the busca_min_xxx function:
End of explanation
tamanyos = [10, 100, 500, 1000, 2000, 5000]
t_p = []
t_c = []
for i in tamanyos:
data = np.random.randn(i, i)
res = %timeit -o busca_min(data)
t_p.append(res.best)
res = %timeit -o busca_min_cython9(data)
t_c.append(res.best)
plt.figure(figsize = (10,6))
plt.plot(tamanyos, t_p, 'bo-')
plt.plot(tamanyos, t_c, 'ro-')
ratio = np.array(t_p) / np.array(t_c)
plt.figure(figsize = (10,6))
plt.plot(tamanyos, ratio, 'bo-')
Explanation: In the previous chart, the first bar corresponds to the starting function (busca_min). Remember that the pypy version took about 0.38 seconds.
And now let's compare the timings of busca_min (the original version) and the latest cython version we created, busca_min_cython9, using different sizes of the input matrix:
End of explanation |
12,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Non-Personalized Recommenders
The recommendation problem
Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals
Step1: The CourseTalk dataset
Step2: Using pd.merge we get it all into one big DataFrame.
Step3: Collaborative filtering
Step4: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number);
To do this, I group the data by course_id and use size() to get a Series of group sizes for each title
Step5: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above
Step6: By computing the mean rating for each course, we will order with the highest rating listed first.
Step7: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order
Step8: Now, let's go further! How about rank the courses with the highest percentage of ratings that are 4 or higher ? % of ratings 4+
Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows
Step9: Let's extract only the rating that are 4 or higher.
Step10: Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame.
Step11: Let's now go easy. Let's count the number of ratings for each course, and order with the most number of ratings.
Step12: Considering this information we can sort by the most rated ones with highest percentage of 4+ ratings.
Step13: Finally using the formula above that we learned, let's find out what the courses that most often occur wit the popular MOOC An introduction to Interactive Programming with Python by using the method "x + y/ x" . For each course, calculate the percentage of Programming with python raters who also rated that course. Order with the highest percentage first, and voilá we have the top 5 moocs.
Step14: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
Step15: Now, for all other courses let's filter out only the ratings from users that rated the Python course.
Step16: By applying the division
Step17: Ordering by the score, highest first excepts the first one which contains the course itself. | Python Code:
from IPython.core.display import Image
Image(filename='./imgs/recsys_arch.png')
Explanation: Introduction to Non-Personalized Recommenders
The recommendation problem
Recommenders have been around since at least 1992. Today we see different flavours of recommenders, deployed across different verticals:
Amazon
Netflix
Facebook
Last.fm.
What exactly do they do?
Definitions from the literature
In a typical recommender system people provide recommendations as inputs, which
the system then aggregates and directs to appropriate recipients. -- Resnick
and Varian, 1997
Collaborative filtering simply means that people collaborate to help one
another perform filtering by recording their reactions to documents they read.
-- Goldberg et al, 1992
In its most common formulation, the recommendation problem is reduced to the
problem of estimating ratings for the items that have not been seen by a
user. Intuitively, this estimation is usually based on the ratings given by this
user to other items and on some other information [...] Once we can estimate
ratings for the yet unrated items, we can recommend to the user the item(s) with
the highest estimated rating(s). -- Adomavicius and Tuzhilin, 2005
Driven by computer algorithms, recommenders help consumers
by selecting products they will probably like and might buy
based on their browsing, searches, purchases, and preferences. -- Konstan and Riedl, 2012
Notation
$U$ is the set of users in our domain. Its size is $|U|$.
$I$ is the set of items in our domain. Its size is $|I|$.
$I(u)$ is the set of items that user $u$ has rated.
$-I(u)$ is the complement of $I(u)$ i.e., the set of items not yet seen by user $u$.
$U(i)$ is the set of users that have rated item $i$.
$-U(i)$ is the complement of $U(i)$.
Goal of a recommendation system
$$
\newcommand{\argmax}{\mathop{\rm argmax}\nolimits}
\forall{u \in U},\; i^* = \argmax_{i \in -I(u)} [S(u,i)]
$$
Problem statement
The recommendation problem in its most basic form is quite simple to define:
|-------------------+-----+-----+-----+-----+-----|
| user_id, movie_id | m_1 | m_2 | m_3 | m_4 | m_5 |
|-------------------+-----+-----+-----+-----+-----|
| u_1 | ? | ? | 4 | ? | 1 |
|-------------------+-----+-----+-----+-----+-----|
| u_2 | 3 | ? | ? | 2 | 2 |
|-------------------+-----+-----+-----+-----+-----|
| u_3 | 3 | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_4 | ? | 1 | 2 | 1 | 1 |
|-------------------+-----+-----+-----+-----+-----|
| u_5 | ? | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_6 | 2 | ? | 2 | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_7 | ? | ? | ? | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_8 | 3 | 1 | 5 | ? | ? |
|-------------------+-----+-----+-----+-----+-----|
| u_9 | ? | ? | ? | ? | 2 |
|-------------------+-----+-----+-----+-----+-----|
Given a partially filled matrix of ratings ($|U|x|I|$), estimate the missing values.
Challenges
Availability of item metadata
Content-based techniques are limited by the amount of metadata that is available
to describe an item. There are domains in which feature extraction methods are
expensive or time consuming, e.g., processing multimedia data such as graphics,
audio/video streams. In the context of grocery items for example, it's often the
case that item information is only partial or completely missing. Examples
include:
Ingredients
Nutrition facts
Brand
Description
Country of origin
New user problem
A user has to have rated a sufficient number of items before a recommender
system can have a good idea of what their preferences are. In a content-based
system, the aggregation function needs ratings to aggregate.
New item problem
Collaborative filters rely on an item being rated by many users to compute
aggregates of those ratings. Think of this as the exact counterpart of the new
user problem for content-based systems.
Data sparsity
When looking at the more general versions of content-based and collaborative
systems, the success of the recommender system depends on the availability of a
critical mass of user/item interactions. We get a first glance at the data
sparsity problem by quantifying the ratio of existing ratings vs $|U|x|I|$. A
highly sparse matrix of interactions makes it difficult to compute similarities
between users and items. As an example, for a user whose tastes are unusual
compared to the rest of the population, there will not be any other users who
are particularly similar, leading to poor recommendations.
Flow chart: the big picture
End of explanation
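As a rough illustration of the goal stated above (a sketch that assumes some scoring function S(u, i) is available; it is not part of the original notebook):
def recommend(user, items, seen_items, S):
    # score every item the user has not seen yet and return the best one
    unseen = [i for i in items if i not in seen_items[user]]
    return max(unseen, key=lambda i: S(user, i))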
import pandas as pd
unames = ['user_id', 'username']
users = pd.read_table('./data/users_set.dat',
sep='|', header=None, names=unames)
rnames = ['user_id', 'course_id', 'rating']
ratings = pd.read_table('./data/ratings.dat',
sep='|', header=None, names=rnames)
mnames = ['course_id', 'title', 'avg_rating', 'workload', 'university', 'difficulty', 'provider']
courses = pd.read_table('./data/cursos.dat',
sep='|', header=None, names=mnames)
# show how one of them looks
ratings.head(10)
# show how one of them looks
users[:5]
courses[:5]
Explanation: The CourseTalk dataset: loading and first look
Loading of the CourseTalk database.
The CourseTalk data is spread across three files. Using the pd.read_table
method we load each file:
End of explanation
coursetalk = pd.merge(pd.merge(ratings, courses), users)
coursetalk
coursetalk.ix[0]
Explanation: Using pd.merge we get it all into one big DataFrame.
End of explanation
mean_ratings = coursetalk.pivot_table('rating', rows='provider', aggfunc='mean')
mean_ratings.order(ascending=False)
Explanation: Collaborative filtering: generalizations of the aggregation function
Non-personalized recommendations
Groupby
The idea of groupby is that of split-apply-combine:
split data in an object according to a given key;
apply a function to each subset;
combine results into a new object.
To get mean course ratings grouped by the provider, we can use the pivot_table method:
End of explanation
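For comparison, the same aggregation written as an explicit groupby (split, apply, combine); this should be equivalent to the pivot_table call used below:
coursetalk.groupby('provider')['rating'].mean().order(ascending=False)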
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title[:10]
active_titles = ratings_by_title.index[ratings_by_title >= 20]
active_titles[:10]
Explanation: Now let's filter down to courses that received at least 20 ratings (a completely arbitrary number);
To do this, I group the data by title and use size() to get a Series of group sizes for each title:
End of explanation
mean_ratings = coursetalk.pivot_table('rating', rows='title', aggfunc='mean')
mean_ratings
Explanation: The index of titles receiving at least 20 ratings can then be used to select rows from mean_ratings above:
End of explanation
mean_ratings.ix[active_titles].order(ascending=False)
Explanation: By computing the mean rating for each course, we will order with the highest rating listed first.
End of explanation
mean_ratings = coursetalk.pivot_table('rating', rows='title',cols='provider', aggfunc='mean')
mean_ratings[:10]
mean_ratings['coursera'][active_titles].order(ascending=False)[:10]
Explanation: To see the top courses among Coursera students, we can sort by the 'Coursera' column in descending order:
End of explanation
# transform the ratings frame into a ratings matrix
ratings_mtx_df = coursetalk.pivot_table(values='rating',
rows='user_id',
cols='title')
ratings_mtx_df.ix[ratings_mtx_df.index[:15], ratings_mtx_df.columns[:15]]
Explanation: Now, let's go further! How about ranking the courses with the highest percentage of ratings that are 4 or higher? % of ratings 4+
Let's start with a simple pivoting example that does not involve any aggregation. We can extract a ratings matrix as follows:
End of explanation
ratings_gte_4 = ratings_mtx_df[ratings_mtx_df>=4.0]
# with an integer axis index only label-based indexing is possible
ratings_gte_4.ix[ratings_gte_4.index[:15], ratings_gte_4.columns[:15]]
Explanation: Let's extract only the ratings that are 4 or higher.
End of explanation
ratings_gte_4_pd = pd.DataFrame({'total': ratings_mtx_df.count(), 'gte_4': ratings_gte_4.count()})
ratings_gte_4_pd.head(10)
ratings_gte_4_pd['gte_4_ratio'] = (ratings_gte_4_pd['gte_4'] * 1.0)/ ratings_gte_4_pd.total
ratings_gte_4_pd.head(10)
ranking = [(title,total,gte_4, score) for title, total, gte_4, score in ratings_gte_4_pd.itertuples()]
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[3], x[2], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
Explanation: Now picking the number of total ratings for each course and the count of ratings 4+ , we can merge them into one DataFrame.
End of explanation
ratings_by_title = coursetalk.groupby('title').size()
ratings_by_title.order(ascending=False)[:10]
Explanation: Let's now go easy. Let's count the number of ratings for each course, and order with the most number of ratings.
End of explanation
for title, total, gte_4, score in sorted(ranking, key=lambda x: (x[2], x[3], x[1]) , reverse=True)[:10]:
print title, total, gte_4, score
Explanation: Considering this information we can sort by the most rated ones with highest percentage of 4+ ratings.
End of explanation
course_users = coursetalk.pivot_table('rating', rows='title', cols='user_id')
course_users.ix[course_users.index[:15], course_users.columns[:15]]
Explanation: Finally, using the formula above that we learned, let's find out which courses most often occur with the popular MOOC An Introduction to Interactive Programming in Python by using the ratio "(x and y) / x". For each course, calculate the percentage of Programming with Python raters who also rated that course. Order with the highest percentage first, and voilà, we have the top 5 MOOCs.
End of explanation
ratings_by_course = coursetalk[coursetalk.title == 'An Introduction to Interactive Programming in Python']
ratings_by_course.set_index('user_id', inplace=True)
Explanation: First, let's get only the users that rated the course An Introduction to Interactive Programming in Python
End of explanation
their_ids = ratings_by_course.index
their_ratings = course_users[their_ids]
course_users[their_ids].ix[course_users[their_ids].index[:15], course_users[their_ids].columns[:15]]
Explanation: Now, for all other courses let's filter out only the ratings from users that rated the Python course.
End of explanation
course_count = their_ratings.ix['An Introduction to Interactive Programming in Python'].count()
sims = their_ratings.apply(lambda profile: profile.count() / float(course_count) , axis=1)
Explanation: By applying the division: (number of users who rated both the Python course and the given course) / (total number of users who rated the Python course), we get our percentage.
End of explanation
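A quick hypothetical illustration of that ratio (made-up numbers, just to show the arithmetic):
# if 200 users rated the Python course and 50 of them also rated course B,
# then the score for course B is
print(50 / 200.)  # 0.25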
sims.order(ascending=False)[1:][:10]
Explanation: Ordering by the score, highest first, except the first one, which contains the course itself.
End of explanation |
12,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
12,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
After Funding
If prediction was difficult before funding started, can we make predictions once funding has started?
Predict, for each of the first 5 days after funding starts, how much must be funded for the project to ultimately succeed
Attributes
Step1: 1. Distribution Test
Step2: Confirm that the post-launch funding-rate distributions of successful and failed projects are clearly different
We expect that the funding rate within the first 5 days can be used to judge whether a project will succeed
2. Classification
Features_1
Step3: B. Grid Search
Step4: C. Cross_Validation
Step5: (0~5)day_funding_rate has the highest feature importance
The daily (cumulative) funding rate carries a lot of information (the project's novelty, feasibility, marketing effect, etc.), so it has a large effect on accuracy
D. Model Selection
Random Forest with 4 features vs GNB with 4 features vs KNN with 4 features
Features
Step6: The GNB and KNN scores are far too low -> run a grid search for each model
KNN gridsearch
Step7: GNB grid search -> no parameter tuning is needed
#The grid search shows that the low scores are caused by feature selection, not parameter selection.
3 Features (without date_duration)
Step8: # For KNN the score increases, GNB is unchanged
-> date_duration introduces distortion in the KNN model
3 Features (without grammar_level)
Step9: # No change in score
3 Features (without target)
Step10: # Scores increase for the KNN and GNB models
Step11: Random Forest with 4 features vs GNB with 1 feature vs KNN with 1 feature
Features_RandomForest
Step12: Let's reconsider the distribution test results
Test for a difference between the target distributions of successful and failed projects
Step13: 성공/실패 date_duration 분포에 대한 차이 검정 | Python Code:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import KFold
from sklearn.cross_validation import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.grid_search import GridSearchCV
import statsmodels.api as sm
from time import time
import patsy
wadiz_df_original = pd.read_csv('wadiz_df_0329_1.csv', index_col=0)
user_comment = pd.read_csv('user_data_all_0329.csv', index_col=0)
provider_comment = pd.read_csv('provider_data_all_0329.csv', index_col=0)
wadiz_df = pd.read_csv('wadiz_provider_analysis_0329.csv', index_col=0)
provider_comment_grammar = pd.read_csv('comment_analysis.csv', index_col=0)
# drop rows where the grammar level is null
wadiz_df = wadiz_df[wadiz_df['provider_grammar_level'].notnull()]
# process the duration column
wadiz_df['date_duration'] = wadiz_df['date_duration'].apply(lambda x: int(x[:-24]))
Explanation: After Funding
If prediction was difficult before funding started, can we make predictions once funding has started?
Predict, for each of the first 5 days after funding starts, how much must be funded for the project to ultimately succeed.
Attributes : Target, Duration, Grammar_level, day_funding_rate, day_comment
Models : RandomForest, KNN, GaussianNB
Assumptions
The commonly assumed average funding amount is expected to carry little meaning.
We can gauge how important the early funding rate is.
End of explanation
# Distribution of the cumulative funding rate over days 0-5
plt.figure(figsize=(10,8))
for i in np.arange(6):
number = i
sns.kdeplot(wadiz_df["{number}day_funding_rate".format(number = i)])
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('funding_rate', fontsize=15)
plt.ylabel('', fontsize=15)
plt.legend(fontsize = 15)
plt.xlim(-0.5, 1)
# funding_rate log scaling -> makes the tiny sub-decimal differences clearly visible
# Daily funding-rate distribution of successful projects
plt.figure(figsize=(10,8))
for i in np.arange(6):
number = i
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 1]["{number}day_log_funding_rate".format(number = i)])
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('funding_rate', fontsize=15)
plt.ylabel('', fontsize=15)
plt.legend(fontsize = 10)
# Daily funding-rate distribution of failed projects
plt.figure(figsize=(10,8))
for i in np.arange(6):
number = i
sns.kdeplot(wadiz_df.loc[wadiz_df['success'] == 0]["{number}day_log_funding_rate".format(number = i)])
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlabel('funding_rate', fontsize=15)
plt.ylabel('', fontsize=15)
plt.legend(fontsize = 10)
f, ax = plt.subplots(2, 3, figsize=(15,10))
for i in range(0, 6):
if i < 3:
sns.kdeplot(wadiz_df.loc[wadiz_df["success"] == 1]["{number}day_log_funding_rate".format(number = i)],
label = 'success', ax=ax[0,i])
sns.kdeplot(wadiz_df.loc[wadiz_df["success"] == 0]["{number}day_log_funding_rate".format(number = i)],
label = 'fail', ax=ax[0,i])
ax[0,i].set_title("{number}day".format(number = i))
else:
sns.kdeplot(wadiz_df.loc[wadiz_df["success"] == 1]["{number}day_log_funding_rate".format(number = i)],
label = 'success', ax=ax[1,i-3])
sns.kdeplot(wadiz_df.loc[wadiz_df["success"] == 0]["{number}day_log_funding_rate".format(number = i)],
label = 'fail', ax=ax[1,i-3])
ax[1,i-3].set_title("{number}day".format(number = i))
# Ks_2sampResult : Kolmogorov-Smirnov test (test for a difference in distributions)
# Ttest_indResult : 2-sample T-test (test for a difference in means)
for i in range(0,6):
success_funding_rate = wadiz_df.loc[wadiz_df["success"] == 1]["{number}day_funding_rate".format(number = i)]
fail_funding_rate = wadiz_df.loc[wadiz_df["success"] == 0]["{number}day_funding_rate".format(number = i)]
print('[{number}day_success vs {number}day_fail]'.format(number = i)),
print(' K-S statistic :', round(sp.stats.ks_2samp(success_funding_rate, fail_funding_rate)[0], 4))
print(' p-value :', round(sp.stats.ks_2samp(success_funding_rate, fail_funding_rate)[1], 4))
# T-test : test for a difference in means
for i in range(0,6):
success_funding_rate = wadiz_df.loc[wadiz_df["success"] == 1]["{number}day_funding_rate".format(number = i)]
fail_funding_rate = wadiz_df.loc[wadiz_df["success"] == 0]["{number}day_funding_rate".format(number = i)]
print('[{number}day_success vs {number}day_fail]'.format(number = i)),
print(' T-test statistic :', round(sp.stats.ttest_ind(success_funding_rate, fail_funding_rate)[0], 4))
print(' p-value :', round(sp.stats.ttest_ind(success_funding_rate, fail_funding_rate)[1], 4))
Explanation: 1. Distribution Test
End of explanation
re = RandomForestClassifier()
x_classification_1day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['1day_funding_rate']]).T
y = wadiz_df['success']
x_re_list_1 = []
y_re_list_1 = []
for i in range(1, 100):
re_1 = RandomForestClassifier(n_estimators=i)
a = cross_val_score(re_1, x_classification_1day, y, cv=10)
b = a.sum() / 10
#print(b)
x_re_list_1.append(i)
y_re_list_1.append(b)
# Accuracy as the number of trees changes
base_success_rate = round((y.value_counts()[1] / len(y)), 2)
figure = plt.figure(figsize=(10,8))
plt.plot(x_re_list_1, y_re_list_1, 'o--', c = 'r', label = 'accuracy')
plt.axhline(base_success_rate, ls = '--', label = 'base_success_rate')
plt.legend(fontsize=15, loc=7)
plt.xlabel('n_estimator', fontsize=15)
plt.ylabel('accuracy', fontsize=15)
#plt.ylim(0.50, 0.68)
print('base_success_rate :', round((y.value_counts()[1] / len(y))*100, 2), '%')
print('max_accuracy :', round(max(y_re_list_1)*100, 2), '%')
Explanation: We confirmed that the post-launch funding-rate distributions of successful and failed projects are clearly different.
We expect that the funding rate within the first 5 days can be used to judge whether a project will succeed.
2. Classification
Features_1 : Target, Duration, Grammar_level, (0~5)Day_funding_rate
Features_2 : (0~5)Day_funding_rate
A. RandomForest
End of explanation
# Helper function to report grid-search results
from operator import itemgetter
def report(grid_scores, n_top=3):
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
#grid search
param_grid = {"max_depth": [5, 10, None],
"max_features": [1, 3, None],
"min_samples_split": [1, 3, 5],
"min_samples_leaf": [1, 3, 5, 10],
"n_estimators" : np.arange(3, 20)}
# run grid search
grid_search = GridSearchCV(re, param_grid=param_grid)
start = time
grid_search.fit(x_classification_1day, y)
#print("GridSearchCV took %.2f seconds for %d candidate parameter settings."
# % ((time() - start), len(grid_search.grid_scores_)))
report(grid_search.grid_scores_)
report(grid_search.grid_scores_)
best_re_day = RandomForestClassifier(max_features= 1, max_depth= None, min_samples_split= 3, n_estimators= 19, min_samples_leaf= 10)
Explanation: B. Grid Search
End of explanation
# RandomForest score for each day
Stkfold = StratifiedKFold(y, n_folds=10)
y_day_list = []
x_day_list = []
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
print('Best_Score_{number}day_mean : '.format(number=i), round(score_cv_mean*100, 2), '%')
print('Best_Score_{number}day_std : '.format(number=i), round(score_cv_std, 4))
print('========================================')
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
best_re_day.fit(x_classification_1day, y);
pd.DataFrame([x_classification_1day.columns, best_re_day.feature_importances_],
index=['feature', 'importance']).T
Explanation: C. Cross_Validation
End of explanation
gnb = GaussianNB()
knn = KNeighborsClassifier()
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_gnb = cross_val_score(gnb, x_classification_day, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_knn = cross_val_score(knn, x_classification_day, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: (0~5)day_funding_rate has the highest feature importance.
The daily (cumulative) funding rate carries a lot of information (the project's novelty, feasibility, marketing effect, etc.), so it has a large effect on accuracy.
D. Model Selection
Random Forest with 4 features vs GNB with 4 features vs KNN with 4 features
Features : target, date_duration, grammar_level, (0~5)day_funding_rate
End of explanation
knn = KNeighborsClassifier()
#grid search
param_grid = [{'weights': ['uniform', 'distance'],
'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
'n_neighbors': [5, 10, 20, 30, 40, 50]}]
# run grid search
grid_search = GridSearchCV(knn, param_grid=param_grid)
start = time
grid_search.fit(x_classification_1day, y)
#print("GridSearchCV took %.2f seconds for %d candidate parameter settings."
# % ((time() - start), len(grid_search.grid_scores_)))
report(grid_search.grid_scores_)
Explanation: The GNB and KNN scores are far too low -> run a grid search for each model
KNN gridsearch
End of explanation
# 3 features (no duration)
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['provider_grammar_level'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'],wadiz_df['provider_grammar_level'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_gnb = cross_val_score(gnb, x_classification_day, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['provider_grammar_level'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_knn = cross_val_score(knn, x_classification_day, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: GNB grid search -> no parameter tuning is needed
#The grid search shows that the low scores are caused by feature selection, not parameter selection.
3 Features (without date_duration)
End of explanation
# 3 features (no grammar_level)
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_gnb = cross_val_score(gnb, x_classification_day, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_knn = cross_val_score(knn, x_classification_day, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: # For KNN the score increases, GNB is unchanged
-> date_duration introduces distortion in the KNN model
3 Features (without grammar_level)
End of explanation
# no target
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_gnb = cross_val_score(gnb, x_classification_day, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_knn = cross_val_score(knn, x_classification_day, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: # No change in score
3 Features (without target)
End of explanation
# no target, no duration
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_gnb = cross_val_score(gnb, x_classification_day, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
day_score_knn = cross_val_score(knn, x_classification_day, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: # Scores increase for the KNN and GNB models
End of explanation
x_day_knn = []
y_day_knn = []
x_day_gnb = []
y_day_gnb = []
y_day_list = []
x_day_list = []
# RandomForest
for i in range(0, 6):
x_classification_day = pd.DataFrame([wadiz_df['target'], wadiz_df['date_duration'],
wadiz_df['provider_grammar_level'], wadiz_df['{number}day_funding_rate'.format(number=i)]]).T
score_cv = cross_val_score(best_re_day, x_classification_day, y, cv=Stkfold)
score_cv_mean = score_cv.mean()
score_cv_std = score_cv.std()
y_day_list.append(score_cv_mean)
x_day_list.append(i)
# Gaussian Naive Bayes
for i in range(0, 6):
x_only_day_gnb = patsy.dmatrix(wadiz_df['{number}day_funding_rate'.format(number = i)])
day_score_gnb = cross_val_score(gnb, x_only_day_gnb, y, cv = Stkfold).mean()
x_day_gnb.append(i)
y_day_gnb.append(day_score_gnb)
# K-Nearest Neighbor
for i in range(0, 6):
x_only_day_knn = patsy.dmatrix(wadiz_df['{number}day_funding_rate'.format(number = i)])
day_score_knn = cross_val_score(knn, x_only_day_knn, y, cv = Stkfold).mean()
x_day_knn.append(i)
y_day_knn.append(day_score_knn)
figure = plt.figure(figsize=(10,8))
plt.plot(x_day_list, y_day_list, 'o--', c = 'r', label = 'RandomForest_Accuracy');
plt.plot(x_day_gnb, y_day_gnb, 'o--', c = 'g', label = 'GNB_Accuracy');
plt.plot(x_day_knn, y_day_knn, 'o--', c = 'y', label = 'KNN_Accuracy');
plt.axhline(base_success_rate, ls = '--', label = 'Base_success_rate');
plt.legend(fontsize=15, loc=7);
plt.xlabel('Day', fontsize=15);
plt.ylabel('Accuracy', fontsize=15);
plt.xlim(0,5.5);
Explanation: Random Forest with 4 features vs GNB with 1 feature vs KNN with 1 feature
Features_RandomForest : target, date_duration, grammar_level, (0~5)day_funding_rate
Features_GNB & KNN : (0~5)day_funding_rate
End of explanation
success_target = wadiz_df.loc[wadiz_df['success'] == 1]['target']
fail_target = wadiz_df.loc[wadiz_df['success'] == 0]['target']
print(sp.stats.ks_2samp(success_target, fail_target))
Explanation: Let's reconsider the distribution test results.
Test for a difference between the target distributions of successful and failed projects
End of explanation
success_duration = wadiz_df.loc[wadiz_df['success'] == 1]['date_duration']
fail_duration = wadiz_df.loc[wadiz_df['success'] == 0]['date_duration']
print('[success_duration vs fail_duration]'),
print(sp.stats.ks_2samp(success_duration, fail_duration))
Explanation: Test for a difference between the date_duration distributions of successful and failed projects
End of explanation |
12,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: We create a three-dimansional vector field with domain that spans between
Step2: Now, we can create a vector field object and initialise it so that
Step3: Please note, that in this case we provided a field value as a field argument, which internally calls a set method explained in other tutorials.
If we plot the field, we get
Step4: This vector field can now be saved in an OOMMF omf file, by using write_oommf_file method and providing a filename.
Step5: We can now see that, the OOMMF file is saved
Step6: Now when we have the OOMMF vector field file, we can read it, which will create a different Field object.
Step7: As expected, two fields must have exactly the same values at all nodes
Step8: Finally we can delete the OOMFF file used in this tutorial. | Python Code:
from oommffield import Field, read_oommf_file
Explanation: Tutorial: Manipulating OOMMF vector field files
In this tutorial, reading and writing of OOMMF vector field files (omf and ohf) are demonstrated. As usual, we need to import the Field class, but this time also the read_oommf_file function.
End of explanation
cmin = (0, 0, 0)
cmax = (100e-9, 100e-9, 5e-9)
d = (5e-9, 5e-9, 5e-9)
dim = 3
Explanation: We create a three-dimensional vector field with a domain that spans between:
minimum coordinate $c_\text{min} = (0, 0, 0)$ and
maximum coordinate $c_\text{max} = (100 \,\text{nm}, 100 \,\text{nm}, 5 \,\text{nm})$,
with discretisation $d = (5 \,\text{nm}, 5 \,\text{nm}, 5 \,\text{nm})$.
End of explanation
def m_init(pos):
x, y, z = pos
return (x+1, x+y+2, z+2)
field = Field(cmin, cmax, d, dim=dim, value=m_init)
Explanation: Now, we can create a vector field object and initialise it so that:
$$f(x, y, z) = (x+1, x+y+2, z+2)$$
End of explanation
#PYTEST_VALIDATE_IGNORE_OUTPUT
%matplotlib inline
fig = field.plot_slice('z', 2.5e-9, xsize=8)
Explanation: Please note, that in this case we provided a field value as a field argument, which internally calls a set method explained in other tutorials.
If we plot the field, we get:
End of explanation
filename = 'vector_field.omf'
field.write_oommf_file(filename)
Explanation: This vector field can now be saved in an OOMMF omf file, by using write_oommf_file method and providing a filename.
End of explanation
!ls *.omf
Explanation: We can now see that the OOMMF file is saved:
End of explanation
field_read = read_oommf_file('vector_field.omf')
Explanation: Now when we have the OOMMF vector field file, we can read it, which will create a different Field object.
End of explanation
(field.f == field_read.f).all()
Explanation: As expected, two fields must have exactly the same values at all nodes:
End of explanation
!rm vector_field.omf
Explanation: Finally, we can delete the OOMMF file used in this tutorial.
End of explanation |
12,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transform QCLCD Data Function
Step1: Load The Data as CSV
This is QCLCD data for PDX. It is what will be used to train the model.
Step2: Run The Model and Fit Predictions | Python Code:
# downloaded weather data from http://www.ncdc.noaa.gov/qclcd/QCLCD
def load_weather_frame(filename):
#load the weather data and make a date
data_raw = pd.read_csv(filename, dtype={'Time': str, 'Date': str})
data_raw['WetBulbCelsius'] = data_raw['WetBulbCelsius'].astype(float)
times = []
for index, row in data_raw.iterrows():
_t = datetime.time(int(row['Time'][:2]), int(row['Time'][:-2]), 0) #2153
_d = datetime.datetime.strptime( row['Date'], "%Y%m%d" ) #20150905
times.append(datetime.datetime.combine(_d, _t))
data_raw['_time'] = pd.Series(times, index=data_raw.index)
df = pd.DataFrame(data_raw, columns=['_time','WetBulbCelsius'])
return df.set_index('_time')
Explanation: Transform QCLCD Data Function
End of explanation
# scale values to reasonable values and convert to float
data_weather = load_weather_frame("data/QCLCD_PDX_20150901.csv")
X, y = load_csvdata(data_weather, TIMESTEPS, seperate=False)
Explanation: Load The Data as CSV
This is QCLCD data for PDX. It is what will be used to train the model.
End of explanation
regressor = learn.SKCompat(learn.Estimator(
model_fn=lstm_model(
TIMESTEPS,
RNN_LAYERS,
DENSE_LAYERS
),
model_dir=LOG_DIR
))
# create a lstm instance and validation monitor
validation_monitor = learn.monitors.ValidationMonitor(X['val'], y['val'],
every_n_steps=PRINT_STEPS,
early_stopping_rounds=1000)
regressor.fit(X['train'], y['train'],
monitors=[validation_monitor],
batch_size=BATCH_SIZE,
steps=TRAINING_STEPS)
predicted = regressor.predict(X['test'])
#not used in this example but used for seeing deviations
rmse = np.sqrt(((predicted - y['test']) ** 2).mean(axis=0))
score = mean_squared_error(predicted, y['test'])
print ("MSE: %f" % score)
# plot the data
all_dates = data_weather.index.get_values()
fig, ax = plt.subplots(1)
fig.autofmt_xdate()
predicted_values = predicted.flatten() #already subset
predicted_dates = all_dates[len(all_dates)-len(predicted_values):len(all_dates)]
predicted_series = pd.Series(predicted_values, index=predicted_dates)
plot_predicted, = ax.plot(predicted_series, label='predicted (c)')
test_values = y['test'].flatten()
test_dates = all_dates[len(all_dates)-len(test_values):len(all_dates)]
test_series = pd.Series(test_values, index=test_dates)
plot_test, = ax.plot(test_series, label='2015 (c)')
xfmt = mdates.DateFormatter('%b %d %H')
ax.xaxis.set_major_formatter(xfmt)
# ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d %H')
plt.title('PDX Weather Predictions for 2016 vs 2015')
plt.legend(handles=[plot_predicted, plot_test])
plt.show()
Explanation: Run The Model and Fit Predictions
End of explanation |
12,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
Introduction
Matplotlib is a library for producing publication-quality figures. mpl (for short) was designed from the bottom-up to serve dual-purposes. First, to allow for interactive, cross-platform control of figures and plots, and second, to make it very easy to produce static raster or vector graphics files without the need for any GUIs. Furthermore, mpl -- much like Python itself -- gives the developer complete control over the appearance of their plots, while still being very usable through a powerful defaults system.
Online Documentation
The matplotlib.org project website is the primary online resource for the library's documentation. It contains examples, FAQs, API documentation, and, most importantly, the gallery.
Gallery
Many users of matplotlib are often faced with the question, "I want to make a plot that has X with Y in the same figure, but it needs to look like Z". Good luck getting an answer from a web search with that query. This is why the gallery is so useful, because it showcases the variety of ways one can make plots. Browse through the gallery, click on any figure that has pieces of what you want to see the code that generated it. Soon enough, you will be like a chef, mixing and matching components to produce your masterpiece!
As always, if you have a new and interesting plot that demonstrates a feature of matplotlib, feel free to submit a well-commented version of the example code for inclusion.
Mailing Lists and StackOverflow
When you are just simply stuck, and can not figure out how to get something to work, or just need some hints on how to get started, you will find much of the community at the matplotlib-users mailing list. This mailing list is an excellent resource of information with many friendly members who just love to help out newcomers. The number one rule to remember with this list is to be persistant. While many questions do get answered fairly quickly, some do fall through the cracks, or the one person who knows the answer isn't available. Therefore, try again with your questions rephrased, or with a plot showing your attempts so far. We love plots, so an image showing what is wrong often gets the quickest responses.
Another community resource is StackOverflow, so if you need to build up karma points, submit your questions here, and help others out too! We are also on Gitter.
Github repository
Location
Matplotlib is hosted by GitHub.
Bug Reports and feature requests
So, you think you found a bug? Or maybe you think some feature is just too difficult to use? Or missing altogether? Submit your bug reports here at matplotlib's issue tracker. We even have a process for submitting and discussing Matplotlib Enhancement Proposals (MEPs).
Quick note on "backends" and Jupyter notebooks
Matplotlib has multiple backends. The backends allow mpl to be used on a variety of platforms with a variety of GUI toolkits (GTK, Qt, Wx, etc.), all of them written so that most of the time, you will not need to care which backend you are using.
Step1: Normally we wouldn't need to think about this too much, but IPython/Jupyter notebooks behave a touch differently than "normal" python.
Inside of IPython, it's often easiest to use the Jupyter nbagg or notebook backend. This allows plots to be displayed and interacted with inline in the browser in a Jupyter notebook. Otherwise, figures will pop up in a separate GUI window.
We can do this in two ways
Step2: On with the show!
Matplotlib is a large project and can seem daunting at first. However, by learning the components, it should begin to feel much smaller and more approachable.
Anatomy of a "Plot"
People use "plot" to mean many different things. Here, we'll be using a consistent terminology (mirrored by the names of the underlying classes, etc)
Step3: Figures
Now let's create a figure...
Step4: Awww, nothing happened! This is because by default mpl will not show anything until told to do so, as we mentioned earlier in the "backend" discussion.
Instead, we'll need to call plt.show()
Step5: Great, a blank figure! Not terribly useful yet.
However, while we're on the topic, you can control the size of the figure through the figsize argument, which expects a tuple of (width, height) in inches.
A really useful utility function is figaspect
Step6: Axes
All plotting is done with respect to an Axes. An Axes is made up of Axis objects and many other things. An Axes object must belong to a Figure (and only one Figure). Most commands you will ever issue will be with respect to this Axes object.
Typically, you'll set up a Figure, and then add an Axes to it.
You can use fig.add_axes, but in most cases, you'll find that adding a subplot will fit your needs perfectly. (Again a "subplot" is just an axes on a grid system.)
Step7: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
To demonstrate this (and as an example of IPython's tab-completion), try typing ax.set_ in a code cell, then hit the <Tab> key. You'll see a long list of Axes methods that start with set.
For example, we could have written the third line above as
Step8: Clearly this can get repitive quickly. Therefore, Matplotlib's set method can be very handy. It takes each kwarg you pass it and tries to call the corresponding "setter". For example, foo.set(bar='blah') would call foo.set_bar('blah').
Note that the set method doesn't just apply to Axes; it applies to more-or-less all matplotlib objects.
However, there are cases where you'll want to use things like ax.set_xlabel('Some Label', size=25) to control other options for a particular function.
Basic Plotting
Most plotting happens on an Axes. Therefore, if you're plotting something on an axes, then you'll use one of its methods.
We'll talk about different plotting methods in more depth in the next section. For now, let's focus on two methods
Step9: Axes methods vs. pyplot
Interestingly, just about all methods of an Axes object exist as a function in the pyplot module (and vice-versa). For example, when calling plt.xlim(1, 10), pyplot calls ax.set_xlim(1, 10) on whichever Axes is "current". Here is an equivalent version of the above example using just pyplot.
Step10: Much cleaner, and much clearer! So, why will most of my examples not follow the pyplot approach? Because PEP20 "The Zen of Python" says
Step11: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Therefore, when we want to work with one of these axes, we can index the axes array and use that item's methods.
For example
Step12: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot.
Any time you see something like
fig = plt.figure()
ax = fig.add_subplot(111)
You can replace it with | Python Code:
import matplotlib
print(matplotlib.__version__)
print(matplotlib.get_backend())
Explanation: Matplotlib
Introduction
Matplotlib is a library for producing publication-quality figures. mpl (for short) was designed from the bottom-up to serve dual-purposes. First, to allow for interactive, cross-platform control of figures and plots, and second, to make it very easy to produce static raster or vector graphics files without the need for any GUIs. Furthermore, mpl -- much like Python itself -- gives the developer complete control over the appearance of their plots, while still being very usable through a powerful defaults system.
Online Documentation
The matplotlib.org project website is the primary online resource for the library's documentation. It contains examples, FAQs, API documentation, and, most importantly, the gallery.
Gallery
Many users of matplotlib are often faced with the question, "I want to make a plot that has X with Y in the same figure, but it needs to look like Z". Good luck getting an answer from a web search with that query. This is why the gallery is so useful, because it showcases the variety of ways one can make plots. Browse through the gallery, click on any figure that has pieces of what you want to see the code that generated it. Soon enough, you will be like a chef, mixing and matching components to produce your masterpiece!
As always, if you have a new and interesting plot that demonstrates a feature of matplotlib, feel free to submit a well-commented version of the example code for inclusion.
Mailing Lists and StackOverflow
When you are just simply stuck, and can not figure out how to get something to work, or just need some hints on how to get started, you will find much of the community at the matplotlib-users mailing list. This mailing list is an excellent resource of information with many friendly members who just love to help out newcomers. The number one rule to remember with this list is to be persistant. While many questions do get answered fairly quickly, some do fall through the cracks, or the one person who knows the answer isn't available. Therefore, try again with your questions rephrased, or with a plot showing your attempts so far. We love plots, so an image showing what is wrong often gets the quickest responses.
Another community resource is StackOverflow, so if you need to build up karma points, submit your questions here, and help others out too! We are also on Gitter.
Github repository
Location
Matplotlib is hosted by GitHub.
Bug Reports and feature requests
So, you think you found a bug? Or maybe you think some feature is just too difficult to use? Or missing altogether? Submit your bug reports here at matplotlib's issue tracker. We even have a process for submitting and discussing Matplotlib Enhancement Proposals (MEPs).
Quick note on "backends" and Jupyter notebooks
Matplotlib has multiple backends. The backends allow mpl to be used on a variety of platforms with a variety of GUI toolkits (GTK, Qt, Wx, etc.), all of them written so that most of the time, you will not need to care which backend you are using.
End of explanation
matplotlib.use('nbagg')
Explanation: Normally we wouldn't need to think about this too much, but IPython/Jupyter notebooks behave a touch differently than "normal" python.
Inside of IPython, it's often easiest to use the Jupyter nbagg or notebook backend. This allows plots to be displayed and interacted with inline in the browser in a Jupyter notebook. Otherwise, figures will pop up in a separate GUI window.
We can do this in two ways:
The IPython %matplotlib backend_name "magic" command (or plt.ion(), which behaves similarly)
Figures will be shown automatically by IPython, even if you don't call plt.show().
matplotlib.use("backend_name")
Figures will only be shown when you call plt.show().
Here, we'll use the second method for one simple reason: it allows our code to behave the same way regardless of whether we run it inside of an IPython notebook or from the command line. Feel free to use the %matplotlib magic command if you'd prefer.
One final note: You need to do this before you import matplotlib.pyplot.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
Explanation: On with the show!
Matplotlib is a large project and can seem daunting at first. However, by learning the components, it should begin to feel much smaller and more approachable.
Anatomy of a "Plot"
People use "plot" to mean many different things. Here, we'll be using a consistent terminology (mirrored by the names of the underlying classes, etc):
<img src="images/figure_axes_axis_labeled.png">
The Figure is the top-level container in this hierarchy. It is the overall window/page that everything is drawn on. You can have multiple independent figures and Figures can contain multiple Axes.
Most plotting ocurs on an Axes. The axes is effectively the area that we plot data on and any ticks/labels/etc associated with it. Usually we'll set up an Axes with a call to subplot (which places Axes on a regular grid), so in most cases, Axes and Subplot are synonymous.
Each Axes has an XAxis and a YAxis. These contain the ticks, tick locations, labels, etc. In this tutorial, we'll mostly control ticks, tick labels, and data limits through other mechanisms, so we won't touch the individual Axis part of things all that much. However, it's worth mentioning here to explain where the term Axes comes from.
Getting Started
In this tutorial, we'll use the following import statements. These abbreviations are semi-standardized, and most tutorials, other scientific python code, etc that you'll find elsewhere will use them as well.
End of explanation
fig = plt.figure()
Explanation: Figures
Now let's create a figure...
End of explanation
plt.show()
Explanation: Awww, nothing happened! This is because by default mpl will not show anything until told to do so, as we mentioned earlier in the "backend" discussion.
Instead, we'll need to call plt.show()
End of explanation
# Twice as tall as it is wide:
fig = plt.figure(figsize=plt.figaspect(2.0))
plt.show()
Explanation: Great, a blank figure! Not terribly useful yet.
However, while we're on the topic, you can control the size of the figure through the figsize argument, which expects a tuple of (width, height) in inches.
A really useful utility function is figaspect
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111) # We'll explain the "111" later. Basically, 1 row and 1 column.
ax.set(xlim=[0.5, 4.5], ylim=[-2, 8], title='An Example Axes', ylabel='Y-Axis', xlabel='X-Axis')
plt.show()
Explanation: Axes
All plotting is done with respect to an Axes. An Axes is made up of Axis objects and many other things. An Axes object must belong to a Figure (and only one Figure). Most commands you will ever issue will be with respect to this Axes object.
Typically, you'll set up a Figure, and then add an Axes to it.
You can use fig.add_axes, but in most cases, you'll find that adding a subplot will fit your needs perfectly. (Again a "subplot" is just an axes on a grid system.)
End of explanation
ax.set_xlim([0.5, 4.5])
ax.set_ylim([-2, 8])
ax.set_title('An Example Axes')
ax.set_ylabel('Y-Axis')
ax.set_xlabel('X-Axis')
Explanation: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
To demonstrate this (and as an example of IPython's tab-completion), try typing ax.set_ in a code cell, then hit the <Tab> key. You'll see a long list of Axes methods that start with set.
For example, we could have written the third line above as:
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
ax.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
ax.set_xlim(0.5, 4.5)
plt.show()
Explanation: Clearly this can get repitive quickly. Therefore, Matplotlib's set method can be very handy. It takes each kwarg you pass it and tries to call the corresponding "setter". For example, foo.set(bar='blah') would call foo.set_bar('blah').
Note that the set method doesn't just apply to Axes; it applies to more-or-less all matplotlib objects.
However, there are cases where you'll want to use things like ax.set_xlabel('Some Label', size=25) to control other options for a particular function.
Basic Plotting
Most plotting happens on an Axes. Therefore, if you're plotting something on an axes, then you'll use one of its methods.
We'll talk about different plotting methods in more depth in the next section. For now, let's focus on two methods: plot and scatter.
plot draws points with lines connecting them. scatter draws unconnected points, optionally scaled or colored by additional variables.
As a basic example:
End of explanation
plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
plt.xlim(0.5, 4.5)
plt.show()
Explanation: Axes methods vs. pyplot
Interestingly, just about all methods of an Axes object exist as a function in the pyplot module (and vice-versa). For example, when calling plt.xlim(1, 10), pyplot calls ax.set_xlim(1, 10) on whichever Axes is "current". Here is an equivalent version of the above example using just pyplot.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=2)
plt.show()
Explanation: Much cleaner, and much clearer! So, why will most of my examples not follow the pyplot approach? Because PEP20 "The Zen of Python" says:
"Explicit is better than implicit"
While very simple plots, with short scripts would benefit from the conciseness of the pyplot implicit approach, when doing more complicated plots, or working within larger scripts, you will want to explicitly pass around the Axes and/or Figure object to operate upon.
The advantage of keeping which axes we're working with very clear in our code will become more obvious when we start to have multiple axes in one figure.
Multiple Axes
We've mentioned before that a figure can have more than one Axes on it. If you want your axes to be on a regular grid system, then it's easiest to use plt.subplots(...) to create a figure and add the axes to it automatically.
For example:
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0,0].set(title='Upper Left')
axes[0,1].set(title='Upper Right')
axes[1,0].set(title='Lower Left')
axes[1,1].set(title='Lower Right')
# To iterate over all items in a multidimensional numpy array, use the `flat` attribute
for ax in axes.flat:
# Remove all xticks and yticks...
ax.set(xticks=[], yticks=[])
plt.show()
Explanation: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Therefore, when we want to work with one of these axes, we can index the axes array and use that item's methods.
For example:
End of explanation
%load exercises/1.1-subplots_and_basic_plotting.py
import numpy as np
import matplotlib.pyplot as plt
# Try to reproduce the figure shown in images/exercise_1-1.png
# Our data...
x = np.linspace(0, 10, 100)
y1, y2, y3 = np.cos(x), np.cos(x + 1), np.cos(x + 2)
names = ['Signal 1', 'Signal 2', 'Signal 3']
# Can you figure out what to do next to plot x vs y1, y2, and y3 on one figure?
Explanation: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot.
Any time you see something like
fig = plt.figure()
ax = fig.add_subplot(111)
You can replace it with:
fig, ax = plt.subplots()
We'll be using that approach for the rest of the examples. It's much cleaner.
However, keep in mind that we're still creating a figure and adding axes to it. When we start making plot layouts that can't be described by subplots, we'll go back to creating the figure first and then adding axes to it one-by-one.
Quick Exercise: Exercise 1.1
Let's use some of what we've been talking about. Can you reproduce this figure?
<img src="images/exercise_1-1.png">
Here's the data and some code to get you started.
End of explanation |
12,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 5
Step1: 5-1-3. モデルの評価
性能を測るといっても,その目的によって指標を変える必要がある.
どのような問題で,どのような指標を用いることが一般的か?という問いに対しては,先行研究を確認することを勧める.
また,指標それぞれの特性(数学的な意味)を知っていることもその役に立つだろう.
参考文献
Step2: 5-2. 問題に合わせたコーディング
5-2-1. Irisデータの可視化
Irisデータは4次元だったので,直接可視化することはできない.
4次元のデータをPCAによって圧縮して,2次元にし可視化する.
Step3: 5-2-2. テキストに対する処理
テキストから特徴量を設計
テキストのカウントベクトルを作成し,TF-IDFを用いて特徴ベクトルを作る.
いくつかの設計ができるが,例題としてこの手法を用いる.
ここでは,20newsgroupsというデータセットを利用する.
Step4: Naive Bayseによる学習
Step5: このように文に対して,categoriesのうちのどれに対応するかを出力する学習器を作ることができた.
この技術を応用することで,ある文がポジティブかネガティブか,スパムか否かなど自然言語の文に対する分類問題を解くことができる.
Pipelineによる結合 | Python Code:
# 1. データセットを用意する
from sklearn import datasets
iris = datasets.load_iris() # ここではIrisデータセットを読み込む
print(iris.data[0], iris.target[0]) # 1番目のサンプルのデータとラベル
# 2.学習用データとテスト用データに分割する
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
# 3. 線形SVMという手法を用いて分類する
from sklearn.svm import SVC, LinearSVC
clf = LinearSVC()
clf.fit(X_train, y_train) # 学習
# 4. 分類器の性能を測る
y_pred = clf.predict(X_test) # 予測
from sklearn import metrics
print(metrics.classification_report(y_true=y_test, y_pred=y_pred)) # 予測結果の評価
Explanation: Chapter 5: 機械学習 実践編
機械学習のアルゴリズムをまとめたモジュールscikit-learnを学ぶ.
簡単なアルゴリズムを用いた実験と評価について,コードを読みつつ知る.
scikit-learnについて
モジュールの概要
データセットのダウンロードと実験
モデルの評価
問題に合わせたコーディング
Irisの可視化
テキストデータの処理
Pipelineの作成
5-1. scikit-learnについて
5-1-1. モジュールの概要
scikit-learnのホームページに詳しい情報がある.
特徴
scikit-learn(sklearn)には,多くの機械学習アルゴリズムが入っており,統一した形式で書かれているため利用しやすい.
各手法をコードで理解するだけでなく,その元となる論文も紹介されている.
チュートリアルやどのように利用するのかをまとめたページもあり,似た手法が列挙されている.
5-1-2. データセットのダウンロードと実験
例題としてIrisデータセットを用いる.
Irisはアヤメのデータであり,3種類の花があり,花の大きさなどの4つの特徴が与えられている.
花の大きさなどの4つの特徴から,どの花なのかを予測する問題を解いてみよう.
実験手順
データセットを用意する
データを学習用,テスト用に分割する
学習用データを用いて,分類器を学習させる
テスト用データを用いて,分類器の性能を測る
End of explanation
print('accuracy: ', metrics.accuracy_score(y_test, y_pred))
print('precision:', metrics.precision_score(y_test, y_pred, average='macro'))
print('recall: ', metrics.recall_score(y_test, y_pred, average='macro'))
print('F1 score: ', metrics.f1_score(y_test, y_pred, average='macro'))
Explanation: 5-1-3. モデルの評価
性能を測るといっても,その目的によって指標を変える必要がある.
どのような問題で,どのような指標を用いることが一般的か?という問いに対しては,先行研究を確認することを勧める.
また,指標それぞれの特性(数学的な意味)を知っていることもその役に立つだろう.
参考文献:scikit-learn Model Evaluation
例えば,今回の分類問題に対する指標について考えてみよう.一般的な指標だけでも以下の4つがある.
1. 正解率(accuracy)
2. 精度(precision)
3. 再現率(recall)
4. F値(F1-score)
(精度,再現率,F値にはmacro, micro, weightedなどがある)
今回の実験でのそれぞれの値を見てみよう.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X, y = iris.data, iris.target
X_pca = pca.fit_transform(X) # 次元圧縮
print(X_pca.shape)
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y);
from sklearn.svm import SVC
# 次元圧縮したデータを用いて分類してみる
X_train, X_test, y_train, y_test = train_test_split(X_pca, iris.target)
clf = LinearSVC()
clf.fit(X_train, y_train)
y_pred2 = clf.predict(X_test)
from sklearn import metrics
print(metrics.classification_report(y_true=y_test, y_pred=y_pred2)) # 予測結果の評価
Explanation: 5-2. 問題に合わせたコーディング
5-2-1. Irisデータの可視化
Irisデータは4次元だったので,直接可視化することはできない.
4次元のデータをPCAによって圧縮して,2次元にし可視化する.
End of explanation
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
news_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
count_vec = CountVectorizer()
X_train_counts = count_vec.fit_transform(news_train.data)
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
Explanation: 5-2-2. テキストに対する処理
テキストから特徴量を設計
テキストのカウントベクトルを作成し,TF-IDFを用いて特徴ベクトルを作る.
いくつかの設計ができるが,例題としてこの手法を用いる.
ここでは,20newsgroupsというデータセットを利用する.
End of explanation
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tf, news_train.target)
docs = ["God is love.", "I study about Computer Science."]
X_test_counts = count_vec.transform(docs)
X_test_tf = tf_transformer.transform(X_test_counts)
preds = clf.predict(X_test_tf)
for d, label_id in zip(docs, preds):
print("{} -> {}".format(d, news_train.target_names[label_id]))
Explanation: Naive Bayseによる学習
End of explanation
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('countvec', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
text_clf.fit(news_train.data, news_train.target)
for d, label_id in zip(docs, text_clf.predict(docs)):
print("{} -> {}".format(d, news_train.target_names[label_id]))
Explanation: このように文に対して,categoriesのうちのどれに対応するかを出力する学習器を作ることができた.
この技術を応用することで,ある文がポジティブかネガティブか,スパムか否かなど自然言語の文に対する分類問題を解くことができる.
Pipelineによる結合
End of explanation |
12,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we will use Long Short Term Memory RNN to develop a time series forecasting model.
The dataset used for the examples of this notebook is on air pollution measured by concentration of
particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables
such as air pressure, air temparature, dewpoint and so on.
Two time series models are developed - one on air pressure and the other on pm2.5.
The dataset has been downloaded from UCI Machine Learning Repository.
https
Step1: To make sure that the rows are in the right order of date and time of observations,
a new column datetime is created from the date and time related columns of the DataFrame.
The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order
over this column.
Step2: Gradient descent algorithms perform better (for example converge faster) if the variables are wihtin range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The PRES variable is mixmax scaled to bound the tranformed variable within [0,1].
Step5: Before training the model, the dataset is split in two parts - train set and validation set.
The neural network is trained on the train set. This means computation of the loss function, back propagation
and weights updated by a gradient descent algorithm is done on the train set. The validation set is
used to evaluate the model and to determine the number of epochs in model training. Increasing the number of
epochs will further decrease the loss function on the train set but might not neccesarily have the same effect
for the validation set due to overfitting on the train set.Hence, the number of epochs is controlled by keeping
a tap on the loss function computed for the validation set. We use Keras with Tensorflow backend to define and train
the model. All the steps involved in model training and validation is done by calling appropriate functions
of the Keras API.
Step7: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm log_PRES in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y.
Step8: The input to RNN layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only pm2.5 hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays.
Step9: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
Step11: The input, dense and output layers will now be packed inside a Model, which is wrapper class for training and making
predictions. Mean square error (mse) is used as the loss function.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike, stochastic gradient descent, adam uses
different learning rates for each weight and separately updates the same as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
Step12: Prediction are made for the air pressure from the best saved model. The model's predictions, which are on the scaled air-pressure, are inverse transformed to get predictions on original air pressure. The goodness-of-fit, R-squared is also calculated for the predictions on the original variable. | Python Code:
from __future__ import print_function
import os
import sys
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import datetime
#Read the dataset into a pandas.DataFrame
df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv')
print('Shape of the dataframe:', df.shape)
#Let's see the first five rows of the DataFrame
df.head()
Explanation: In this notebook, we will use Long Short Term Memory RNN to develop a time series forecasting model.
The dataset used for the examples of this notebook is on air pollution measured by concentration of
particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables
such as air pressure, air temparature, dewpoint and so on.
Two time series models are developed - one on air pressure and the other on pm2.5.
The dataset has been downloaded from UCI Machine Learning Repository.
https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],
hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
#Let us draw a box plot to visualize the central tendency and dispersion of PRES
plt.figure(figsize=(5.5, 5.5))
g = sns.boxplot(df['PRES'])
g.set_title('Box plot of Air Pressure')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['PRES'])
g.set_title('Time series of Air Pressure')
g.set_xlabel('Index')
g.set_ylabel('Air Pressure readings in hPa')
Explanation: To make sure that the rows are in the right order of date and time of observations,
a new column datetime is created from the date and time related columns of the DataFrame.
The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order
over this column.
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_PRES'] = scaler.fit_transform(np.array(df['PRES']).reshape(-1, 1))
Explanation: Gradient descent algorithms perform better (for example converge faster) if the variables are wihtin range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The PRES variable is mixmax scaled to bound the tranformed variable within [0,1].
End of explanation
Let's start by splitting the dataset into train and validation. The dataset's time period if from
Jan 1st, 2010 to Dec 31st, 2014. The first fours years - 2010 to 2013 is used as train and
2014 is kept for validation.
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
#First five rows of train
df_train.head()
#First five rows of validation
df_val.head()
#Reset the indices of the validation set
df_val.reset_index(drop=True, inplace=True)
The train and validation time series of standardized PRES is also plotted.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_train['scaled_PRES'], color='b')
g.set_title('Time series of scaled Air Pressure in train set')
g.set_xlabel('Index')
g.set_ylabel('Scaled Air Pressure readings')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_val['scaled_PRES'], color='r')
g.set_title('Time series of scaled Air Pressure in validation set')
g.set_xlabel('Index')
g.set_ylabel('Scaled Air Pressure readings')
Explanation: Before training the model, the dataset is split in two parts - train set and validation set.
The neural network is trained on the train set. This means computation of the loss function, back propagation
and weights updated by a gradient descent algorithm is done on the train set. The validation set is
used to evaluate the model and to determine the number of epochs in model training. Increasing the number of
epochs will further decrease the loss function on the train set but might not neccesarily have the same effect
for the validation set due to overfitting on the train set.Hence, the number of epochs is controlled by keeping
a tap on the loss function computed for the validation set. We use Keras with Tensorflow backend to define and train
the model. All the steps involved in model training and validation is done by calling appropriate functions
of the Keras API.
End of explanation
def makeXy(ts, nb_timesteps):
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_PRES'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_PRES'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
Explanation: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm log_PRES in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y.
End of explanation
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\
X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of 3D arrays:', X_train.shape, X_val.shape)
Explanation: The input to RNN layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only pm2.5 hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays.
End of explanation
from keras.layers import Dense, Input, Dropout
from keras.layers.recurrent import LSTM
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances
input_layer = Input(shape=(7,1), dtype='float32')
#LSTM layer is defined for seven timesteps
lstm_layer = LSTM(64, input_shape=(7,1), return_sequences=False)(input_layer)
dropout_layer = Dropout(0.2)(lstm_layer)
#Finally the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
Explanation: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
End of explanation
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mae', optimizer='adam')
ts_model.summary()
The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of train set to be
used for a instance of back propagation.The validation dataset is also passed to evaluate the model after every epoch
completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch,
at which the loss function has been minimum.
save_weights_at = os.path.join('keras_models', 'PRSA_data_Air_Pressure_LSTM_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
Explanation: The input, dense and output layers will now be packed inside a Model, which is wrapper class for training and making
predictions. Mean square error (mse) is used as the loss function.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike, stochastic gradient descent, adam uses
different learning rates for each weight and separately updates the same as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
End of explanation
best_model = load_model(os.path.join('keras_models', 'PRSA_data_Air_Pressure_LSTM_weights.06-0.0087.hdf5'))
preds = best_model.predict(X_val)
pred_PRES = scaler.inverse_transform(preds)
pred_PRES = np.squeeze(pred_PRES)
from sklearn.metrics import r2_score
r2 = r2_score(df_val['PRES'].loc[7:], pred_PRES)
print('R-squared on validation set of the original air pressure:', r2)
#Let's plot the first 50 actual and predicted values of air pressure.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['PRES'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_PRES[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted Air Pressure')
plt.ylabel('Air Pressure')
plt.xlabel('Index')
plt.savefig('plots/ch5/B07887_05_11.png', format='png', dpi=300)
Explanation: Prediction are made for the air pressure from the best saved model. The model's predictions, which are on the scaled air-pressure, are inverse transformed to get predictions on original air pressure. The goodness-of-fit, R-squared is also calculated for the predictions on the original variable.
End of explanation |
12,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Notes
We tried different model, like SVM regression, MLP, Random Forest and KNN as recommanded by the winning team of the Kaggle on taxi trajectories. So far Random Forest seems to be the best, slightly better than the SVM.
The new features we exctracted only made a very small impact on predictions.
Step1: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Few words about the dataset
Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months)
The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable
The test set is not a extracted from the same data as the training set however, which make the evaluation trickier
Load the dataset
Step2: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark
Therefor, it seems reasonable to drop values too large, considered as outliers
Step3: We regroup the data by ID
Step4: On fully filled dataset
Step5: Predicitons
As a first try, we make predictions on the complete data, and return the 50th percentile and uncomplete and fully empty data
Step6:
Step7: max prof 24
nb trees 84
min sample per leaf 17
min sample to split 51
Step8:
Step9:
Step10:
Step11:
Step12:
Step13: Here for legacy
Step14:
Step15: | Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheatsfrom sklearn.ensemble import ExtraTreesRegressor
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import Imputer # get rid of nan
from sklearn.neighbors import KNeighborsRegressor
from sklearn import grid_search
import os
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Notes
We tried different model, like SVM regression, MLP, Random Forest and KNN as recommanded by the winning team of the Kaggle on taxi trajectories. So far Random Forest seems to be the best, slightly better than the SVM.
The new features we exctracted only made a very small impact on predictions.
End of explanation
%%time
#filename = "data/train.csv"
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_train_1000000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw.columns
raw['Expected'].describe()
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Few words about the dataset
Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months)
The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable
The test set is not a extracted from the same data as the training set however, which make the evaluation trickier
Load the dataset
End of explanation
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 300] #1000
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
raw.head(5)
raw.describe()
Explanation: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark
Therefor, it seems reasonable to drop values too large, considered as outliers
End of explanation
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
Explanation: We regroup the data by ID
End of explanation
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
print(len(raw))
print(len(noAnyNan))
print(len(noFullNan))
print(len(fullNan))
Explanation: On fully filled dataset
End of explanation
%%time
#X,y=getXy(noAnyNan)
X,y=getXy(noFullNan)
%%time
#XX = [np.array(t).mean(0) for t in X]
XX = [np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) ) for t in X]
t = np.array([[10,1,10],
[20,np.nan,12],
[30,20,30]])
np.nanpercentile(t,90,axis=0)
# used to fill fully empty datas
global_means = np.nanmean(noFullNan,0)
# reduce the sequence structure of the data and produce
# new hopefully informatives features
def addFeatures(X):
# used to fill fully empty datas
#global_means = np.nanmean(X,0)
XX=[]
nbFeatures=float(len(X[0][0]))
for t in X:
# compute means, ignoring nan when possible, marking it when fully filled with nan
nm = np.nanmean(t,0)
tt=[]
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
tt.append(1)
else:
tt.append(0)
tmp = np.append(nm,np.append(tt,tt.count(0)/nbFeatures))
# faster if working on fully filled data:
#tmp = np.append(np.nanmean(np.array(t),0),(np.array(t)[1:] - np.array(t)[:-1]).sum(0) )
# add the percentiles
tmp = np.append(tmp,np.nanpercentile(t,10,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,50,axis=0))
tmp = np.append(tmp,np.nanpercentile(t,90,axis=0))
for idx,i in enumerate(tmp):
if np.isnan(i):
tmp[idx]=0
# adding the dbz as a feature
test = t
try:
taa=test[:,0]
except TypeError:
taa=[test[0][0]]
valid_time = np.zeros_like(taa)
valid_time[0] = taa[0]
for n in xrange(1,len(taa)):
valid_time[n] = taa[n] - taa[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
sum=0
try:
column_ref=test[:,2]
except TypeError:
column_ref=[test[0][2]]
for dbz, hours in zip(column_ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
XX.append(np.append(np.array(sum),tmp))
#XX.append(np.array([sum]))
#XX.append(tmp)
return XX
%%time
XX=addFeatures(X)
XX[2]
def splitTrainTest(X, y, split=0.2):
tmp1, tmp2 = [], []
ps = int(len(X) * (1-split))
index_shuf = range(len(X))
random.shuffle(index_shuf)
for i in index_shuf:
tmp1.append(X[i])
tmp2.append(y[i])
return tmp1[:ps], tmp2[:ps], tmp1[ps:], tmp2[ps:]
X_train,y_train, X_test, y_test = splitTrainTest(XX,y)
Explanation: Predicitons
As a first try, we make predictions on the complete data, and return the 50th percentile and uncomplete and fully empty data
End of explanation
def manualScorer(estimator, X, y):
err = (estimator.predict(X_test)-y_test)**2
return -err.sum()/len(err)
Explanation:
End of explanation
from sklearn import svm
svr = svm.SVR(C=100000)
%%time
srv = svr.fit(X_train,y_train)
err = (svr.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (svr.predict(X_test)-y_test)**2
err.sum()/len(err)
%%time
svr_score = cross_val_score(svr, XX, y, cv=5)
print("Score: %s\nMean: %.03f"%(svr_score,svr_score.mean()))
Explanation: max prof 24
nb trees 84
min sample per leaf 17
min sample to split 51
End of explanation
knn = KNeighborsRegressor(n_neighbors=6,weights='distance',algorithm='ball_tree')
#parameters = {'weights':('distance','uniform'),'algorithm':('auto', 'ball_tree', 'kd_tree', 'brute')}
parameters = {'n_neighbors':range(1,10,1)}
grid_knn = grid_search.GridSearchCV(knn, parameters,scoring=manualScorer)
%%time
grid_knn.fit(X_train,y_train)
print(grid_knn.grid_scores_)
print("Best: ",grid_knn.best_params_)
knn = grid_knn.best_estimator_
knn= knn.fit(X_train,y_train)
print(knn.score(X_train,y_train))
print(knn.score(X_test,y_test))
err = (knn.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (knn.predict(X_test)-y_test)**2
err.sum()/len(err)
Explanation:
End of explanation
etreg = ExtraTreesRegressor(n_estimators=200, max_depth=None, min_samples_split=1, random_state=0)
parameters = {'n_estimators':range(100,200,20)}
grid_rf = grid_search.GridSearchCV(etreg, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_rf.fit(X_train,y_train)
print(grid_rf.grid_scores_)
print("Best: ",grid_rf.best_params_)
grid_rf.best_params_
es = etreg
#es = grid_rf.best_estimator_
%%time
es = es.fit(X_train,y_train)
print(es.score(X_train,y_train))
print(es.score(X_test,y_test))
err = (es.predict(X_train)-y_train)**2
err.sum()/len(err)
err = (es.predict(X_test)-y_test)**2
err.sum()/len(err)
Explanation:
End of explanation
import xgboost as xgb
# the dbz feature does not influence xgbr so much
xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True,
objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1,
max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5,
seed=0, missing=None)
%%time
xgbr = xgbr.fit(X_train,y_train)
print(xgbr.score(X_train,y_train))
print(xgbr.score(X_test,y_test))
Explanation:
End of explanation
gbr = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=900,
subsample=1.0, min_samples_split=2, min_samples_leaf=1, max_depth=4, init=None,
random_state=None, max_features=None, alpha=0.5,
verbose=0, max_leaf_nodes=None, warm_start=False)
%%time
gbr = gbr.fit(X_train,y_train)
#os.system('say "終わりだ"') #its over!
#parameters = {'max_depth':range(2,5,1),'alpha':[0.5,0.6,0.7,0.8,0.9]}
#parameters = {'subsample':[0.2,0.4,0.5,0.6,0.8,1]}
#parameters = {'subsample':[0.2,0.5,0.6,0.8,1],'n_estimators':[800,1000,1200]}
#parameters = {'max_depth':range(2,4,1)}
parameters = {'n_estimators':[400,800,1100]}
#parameters = {'loss':['ls', 'lad', 'huber', 'quantile'],'alpha':[0.3,0.5,0.8,0.9]}
#parameters = {'learning_rate':[0.1,0.5,0.9]}
grid_gbr = grid_search.GridSearchCV(gbr, parameters,n_jobs=2,scoring=manualScorer)
%%time
grid_gbr = grid_gbr.fit(X_train,y_train)
print(grid_gbr.grid_scores_)
print("Best: ",grid_gbr.best_params_)
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))
err = (gbr.predict(X_train)-y_train)**2
print(err.sum()/len(err))
err = (gbr.predict(X_test)-y_test)**2
print(err.sum()/len(err))
Explanation:
End of explanation
t = []
for i in XX:
t.append(np.count_nonzero(~np.isnan(i)) / float(i.size))
pd.DataFrame(np.array(t)).describe()
Explanation:
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD,RMSprop
in_dim = len(XX[0])
out_dim = 1
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(1, init='uniform'))
model.add(Activation('linear'))
#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='mean_squared_error', optimizer=sgd)
rms = RMSprop()
model.compile(loss='mean_squared_error', optimizer=rms)
#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)
prep = []
for i in y_train:
prep.append(min(i,20))
prep=np.array(prep)
mi,ma = prep.min(),prep.max()
fy = (prep-mi) / (ma-mi)
#my = fy.max()
#fy = fy/fy.max()
model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1)
pred = model.predict(np.array(X_test))*ma+mi
err = (pred-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r]))
r = random.randrange(len(X_test))
print("(Test) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r]))
Explanation: Here for legacy
End of explanation
def marshall_palmer(ref, minutes_past):
#print("Estimating rainfall from {0} observations".format(len(minutes_past)))
# how long is each observation valid?
valid_time = np.zeros_like(minutes_past)
valid_time[0] = minutes_past.iloc[0]
for n in xrange(1, len(minutes_past)):
valid_time[n] = minutes_past.iloc[n] - minutes_past.iloc[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
# sum up rainrate * validtime
sum = 0
for dbz, hours in zip(ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
return sum
def simplesum(ref,hour):
hour.sum()
# each unique Id is an hour of data at some gauge
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour['Ref'], hour['minutes_past'])
return est
info = raw.groupby(raw.index)
estimates = raw.groupby(raw.index).apply(myfunc)
estimates.head(20)
%%time
etreg.fit(X_train,y_train)
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print(r)
print(etreg.predict(X_train[r]))
print(y_train[r])
r = random.randrange(len(X_test))
print(r)
print(etreg.predict(X_test[r]))
print(y_test[r])
Explanation:
End of explanation
%%time
#filename = "data/reduced_test_5000.csv"
filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
#%%time
#X=getX(test)
#tmp = []
#for i in X:
# tmp.append(len(i))
#tmp = np.array(tmp)
#sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
#plt.title("Number of ID per number of observations\n(On test dataset)")
#plt.plot()
testFull = test.dropna()
%%time
X=getX(testFull) # 1min
#XX = [np.array(t).mean(0) for t in X] # 10s
XX=addFeatures(X)
pd.DataFrame(gbr.predict(XX)).describe()
predFull = zip(testFull.index.unique(),gbr.predict(XX))
testNan = test.drop(test[features_columns].dropna(how='all').index)
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # 50th percentile of full Nan dataset
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.loc[submission['Expected']<0,'Expected'] = 0.445
submission.to_csv("submit4.csv",index=False)
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)
sol
ss = np.array(sol)
%%time
for a,b in predFull:
ss[a-1][1]=b
ss
sub = pd.DataFrame(pred)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()
sub.to_csv("submit3.csv",index=False)
Explanation:
End of explanation |
12,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 훈련 후 정수 양자화
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: TensorFlow 모델 생성하기
MNIST 데이터세트에서 숫자를 분류하는 간단한 모델을 만들어 보겠습니다.
이 훈련은 약 ~98%의 정확성으로 훈련하는 단 5 epoch 동안 모델을 훈련하기 때문에 오래 걸리지 않을 것입니다.
Step3: TensorFlow Lite 모델로 변환하기
이제 TFLiteConverter API를 사용하여 훈련된 모델을 TensorFlow Lite 형식으로 변환하고 다양한 정도의 양자화를 적용할 수 있습니다.
일부 양자화 버전은 일부 데이터를 부동 형식으로 남겨 둡니다. 따라서 다음 섹션에서는 완전히 int8 또는 uint8 데이터인 모델을 얻을 때까지 양자화 양이 증가하는 각 옵션을 보여줍니다(각 옵션에 대한 모든 양자화 단계를 볼 수 있도록 각 섹션에서 일부 코드를 복제합니다).
먼저, 양자화없이 변환된 모델이 있습니다.
Step4: 이제 TensorFlow Lite 모델이지만 모든 매개변수 데이터에 대해 여전히 32bit 부동 소수점 값을 사용하고 있습니다.
동적 범위 양자화를 사용하여 변환하기
이제 기본 optimizations 플래그를 활성화하여 모든 고정 매개변수(예
Step5: 모델은 이제 양자화된 가중치로 약간 더 작아지지만 다른 변수 데이터는 여전히 부동 형식입니다.
부동 폴백 양자화를 사용하여 변환하기
변수 데이터(예
Step6: 이제 모든 가중치와 변수 데이터가 양자화되고 모델은 원본 TensorFlow Lite 모델에 비해 훨씬 작습니다.
그러나 전통적으로 부동 모델 입력 및 출력 텐서를 사용하는 애플리케이션과의 호환성을 유지하기 위해 TensorFlow Lite 변환기는 모델 입력 및 출력 텐서를 부동 상태로 둡니다.
Step7: 일반적으로 호환성에는 좋지만 에지 TPU와 같이 정수 기반 작업만 수행하는 기기와는 호환되지 않습니다.
또한 TensorFlow Lite에 해당 연산에 대한 양자화된 구현이 포함되어 있지 않은 경우 위의 프로세스는 부동 형식으로 연산을 남길 수 있습니다. 이 전략을 사용하면 변환을 완료할 수 있으므로 더 작고 효율적인 모델을 사용할 수 있지만, 정수 전용 하드웨어와는 호환되지 않습니다(이 MNIST 모델의 모든 연산에는 양자화된 구현이 있습니다).
따라서 엔드 투 엔드 정수 전용 모델을 보장하려면 몇 가지 매개변수가 더 필요합니다.
정수 전용 양자화를 사용하여 변환하기
입력 및 출력 텐서를 양자화하고, 양자화할 수 없는 연산이 발생하는 경우 변환기에서 오류를 발생시키려면 몇 가지 추가 매개변수를 사용하여 모델을 다시 변환합니다.
Step8: 내부 양자화는 위와 동일하게 유지되지만 입력 및 출력 텐서는 이제 정수 형식임을 알 수 있습니다.
Step9: 이제 모델의 입력 및 출력 텐서에 정수 데이터를 사용하는 정수 양자화 모델이 있으므로 에지 TPU와 같은 정수 전용 하드웨어와 호환됩니다.
모델을 파일로 저장하기
다른 기기에 모델을 배포하려면 .tflite 파일이 필요합니다. 따라서 변환된 모델을 파일로 저장한 다음 아래에서 추론을 실행할 때 로드해보겠습니다.
Step10: TensorFlow Lite 모델 실행하기
이제 TensorFlow Lite Interpreter로 추론을 실행하여 모델 정확성을 비교합니다.
먼저 주어진 모델과 이미지로 추론을 실행한 다음 예측을 반환하는 함수가 필요합니다.
Step11: 하나의 이미지에서 모델 테스트하기
이제 부동 모델과 양자화된 모델의 성능을 비교해 보겠습니다.
tflite_model_file은 부동 소수점 데이터가 있는 원본 TensorFlow Lite 모델입니다.
tflite_model_quant_file은 정수 전용 양자화를 사용하여 변환된 마지막 모델입니다(입력 및 출력에 uint8 데이터 사용).
예측값을 출력하는 다른 함수를 만들어 보겠습니다.
Step12: 이제 부동 모델을 테스트합니다.
Step13: 그리고 양자화된 모델을 테스트합니다.
Step14: 모든 이미지에서 모델 평가하기
이제 이 튜토리얼의 시작 부분에서 로드한 모든 테스트 이미지를 사용하여 두 모델을 모두 실행해보겠습니다.
Step15: 부동 모델을 평가합니다.
Step16: 양자화된 모델을 평가합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
Explanation: 훈련 후 정수 양자화
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
개요
정수 양자화는 32bit 부동 소수점 숫자(예: 가중치 및 활성화 출력)를 가장 가까운 8bit 고정 소수점 숫자로 변환하는 최적화 전략입니다. 그 결과 모델이 작아지고 추론 속도가 증가하여 마이크로 컨트롤러와 같은 저전력 장치에 유용합니다. 이 데이터 형식은 에지 TPU와 같은 정수 전용 가속기에도 필요합니다.
이 가이드에서는 MNIST 모델을 처음부터 훈련하고 Tensorflow Lite 파일로 변환하고 훈련 후 양자화로 양자화합니다. 마지막으로 변환된 모델의 정확성을 확인하고 원본 부동 모델과 비교합니다.
실제로 모델을 양자화하려는 정도에 대한 몇 가지 옵션이 있습니다. 이 튜토리얼에서는 모든 가중치와 활성화 출력을 8bit 정수 데이터로 변환하는 '전체 정수 양자화'를 수행합니다. 반면 다른 전략은 일부 양의 데이터를 부동 소수점에 남길 수 있습니다.
다양한 양자화 전략에 대해 자세히 알아 보려면 TensorFlow Lite 모델 최적화에 대해 읽어보세요.
설정
입력 및 출력 텐서를 양자화하려면 TensorFlow r2.3에 추가된 API를 사용해야 합니다.
End of explanation
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
Explanation: TensorFlow 모델 생성하기
MNIST 데이터세트에서 숫자를 분류하는 간단한 모델을 만들어 보겠습니다.
이 훈련은 약 ~98%의 정확성으로 훈련하는 단 5 epoch 동안 모델을 훈련하기 때문에 오래 걸리지 않을 것입니다.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: TensorFlow Lite 모델로 변환하기
이제 TFLiteConverter API를 사용하여 훈련된 모델을 TensorFlow Lite 형식으로 변환하고 다양한 정도의 양자화를 적용할 수 있습니다.
일부 양자화 버전은 일부 데이터를 부동 형식으로 남겨 둡니다. 따라서 다음 섹션에서는 완전히 int8 또는 uint8 데이터인 모델을 얻을 때까지 양자화 양이 증가하는 각 옵션을 보여줍니다(각 옵션에 대한 모든 양자화 단계를 볼 수 있도록 각 섹션에서 일부 코드를 복제합니다).
먼저, 양자화없이 변환된 모델이 있습니다.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
Explanation: 이제 TensorFlow Lite 모델이지만 모든 매개변수 데이터에 대해 여전히 32bit 부동 소수점 값을 사용하고 있습니다.
동적 범위 양자화를 사용하여 변환하기
이제 기본 optimizations 플래그를 활성화하여 모든 고정 매개변수(예: 가중치)를 양자화합니다.
End of explanation
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
Explanation: 모델은 이제 양자화된 가중치로 약간 더 작아지지만 다른 변수 데이터는 여전히 부동 형식입니다.
부동 폴백 양자화를 사용하여 변환하기
변수 데이터(예: 모델 입력/출력 및 레이어 간 중간)를 양자화하려면 RepresentativeDataset을 제공해야 합니다. 이것은 일반적인 값을 나타낼 만큼 충분히 큰 입력 데이터세트를 제공하는 생성기 함수입니다. 해당 함수는 변환기로 모든 가변 데이터에 대한 동적 범위를 추정할 수 있습니다(데이터세트는 훈련 또는 평가 데이터세트와 비교할 때 고유할 필요가 없습니다). 여러 입력을 지원하기 위해 각 대표 데이터 포인트는 목록으로 이루어졌고 목록의 요소는 인덱스에 따라 모델에 제공됩니다.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
Explanation: 이제 모든 가중치와 변수 데이터가 양자화되고 모델은 원본 TensorFlow Lite 모델에 비해 훨씬 작습니다.
그러나 전통적으로 부동 모델 입력 및 출력 텐서를 사용하는 애플리케이션과의 호환성을 유지하기 위해 TensorFlow Lite 변환기는 모델 입력 및 출력 텐서를 부동 상태로 둡니다.
End of explanation
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
Explanation: 일반적으로 호환성에는 좋지만 에지 TPU와 같이 정수 기반 작업만 수행하는 기기와는 호환되지 않습니다.
또한 TensorFlow Lite에 해당 연산에 대한 양자화된 구현이 포함되어 있지 않은 경우 위의 프로세스는 부동 형식으로 연산을 남길 수 있습니다. 이 전략을 사용하면 변환을 완료할 수 있으므로 더 작고 효율적인 모델을 사용할 수 있지만, 정수 전용 하드웨어와는 호환되지 않습니다(이 MNIST 모델의 모든 연산에는 양자화된 구현이 있습니다).
따라서 엔드 투 엔드 정수 전용 모델을 보장하려면 몇 가지 매개변수가 더 필요합니다.
정수 전용 양자화를 사용하여 변환하기
입력 및 출력 텐서를 양자화하고, 양자화할 수 없는 연산이 발생하는 경우 변환기에서 오류를 발생시키려면 몇 가지 추가 매개변수를 사용하여 모델을 다시 변환합니다.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
Explanation: 내부 양자화는 위와 동일하게 유지되지만 입력 및 출력 텐서는 이제 정수 형식임을 알 수 있습니다.
End of explanation
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
Explanation: 이제 모델의 입력 및 출력 텐서에 정수 데이터를 사용하는 정수 양자화 모델이 있으므로 에지 TPU와 같은 정수 전용 하드웨어와 호환됩니다.
모델을 파일로 저장하기
다른 기기에 모델을 배포하려면 .tflite 파일이 필요합니다. 따라서 변환된 모델을 파일로 저장한 다음 아래에서 추론을 실행할 때 로드해보겠습니다.
End of explanation
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
Explanation: Run the TensorFlow Lite models
Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions.
End of explanation
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
Explanation: Test the models on one image
Now we'll compare the performance of the float model and the quantized model:
tflite_model_file is the original TensorFlow Lite model with floating-point data.
tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions.
End of explanation
test_model(tflite_model_file, test_image_index, model_type="Float")
Explanation: Now test the float model.
End of explanation
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
Explanation: And test the quantized model.
End of explanation
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
Explanation: Evaluate the models on all images
Now let's run both models using all the test images we loaded at the beginning of this tutorial.
End of explanation
evaluate_model(tflite_model_file, model_type="Float")
Explanation: Evaluate the float model.
End of explanation
evaluate_model(tflite_model_quant_file, model_type="Quantized")
Explanation: Evaluate the quantized model.
End of explanation |
12,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lesson 6 - Supervised Learning
Machine learning (ML)
The sentiment analysis program we wrote earlier (in session 1) adopts a non-machine learning algorithm. That is, it tries to define what words have good and bad sentiments and assumes all the necessary words of good and bad sentiments exist in the word_sentiment.csv file.
Machine Learning (ML) is a class of algorithms, which are data-driven, i.e. unlike "normal" algorithms, it is the data that "tells" what the "good answer" is. A machine learning algorithm would not have such coded definition of what a good and bad sentiment is, but would "learn-by-examples". That is, you will show several words which have been labeled as good sentiment and bad sentiment and a good ML algorithm will eventually learn and be able to predict whether or not an unseen word has a good or bad sentiment. This particular example of sentiment analysis is "supervised", which means that your example words must be labeled, or explicitly say which words are good and which are bad.
On the other hand, in the case of unsupervised learning, the word examples are not labeled. Of course, in such a case the algorithm itself cannot "invent" what a good sentiment is, but it can try to cluster the data into different groups, e.g. it can figure out that words that are close to certain other words are different from words closer to some other words (eg. words close to the word "mother" are most likely good).
There are "intermediate" forms of supervision, i.e. semi-supervised and active learning. Technically, these are supervised methods in which there is some "smart" way to avoid a large number of labeled examples.
In active learning, the algorithm itself decides which thing you should label (e.g. it can be pretty sure about a sentence that has the word fantastic, but it might ask you to confirm if the sentence may have a negative like “not”).
In semi-supervised learning, there are two different algorithms, which start with the labeled examples, and then "tell" each other the way they think about some large number of unlabeled data. From this "discussion" they learn.
Figure
Step4: Step 2
Step7: Step 3
Step10: Step 4
Step13: Step 5
Step14: Exercise
Improve the feature extractor (by adding new features) so that the test accuracy can go up to at least 70%.
def feature_extractor(word):
Extract the features for a given word and return a dictionary of the features
start_letter = word[0]
last_letter = word[-1]
return {'start_letter' : start_letter,'last_letter' : last_letter}
def main():
print(feature_extractor('poonacha'))
main()
Explanation: Lesson 6 - Supervised Learning
Machine learning (ML)
The sentiment analysis program we wrote earlier (in session 1) adopts a non-machine learning algorithm. That is, it tries to define what words have good and bad sentiments and assumes all the necessary words of good and bad sentiments exist in the word_sentiment.csv file.
Machine Learning (ML) is a class of algorithms, which are data-driven, i.e. unlike "normal" algorithms, it is the data that "tells" what the "good answer" is. A machine learning algorithm would not have such coded definition of what a good and bad sentiment is, but would "learn-by-examples". That is, you will show several words which have been labeled as good sentiment and bad sentiment and a good ML algorithm will eventually learn and be able to predict whether or not an unseen word has a good or bad sentiment. This particular example of sentiment analysis is "supervised", which means that your example words must be labeled, or explicitly say which words are good and which are bad.
On the other hand, in the case of unsupervised learning, the word examples are not labeled. Of course, in such a case the algorithm itself cannot "invent" what a good sentiment is, but it can try to cluster the data into different groups, e.g. it can figure out that words that are close to certain other words are different from words closer to some other words (eg. words close to the word "mother" are most likely good).
There are "intermediate" forms of supervision, i.e. semi-supervised and active learning. Technically, these are supervised methods in which there is some "smart" way to avoid a large number of labeled examples.
In active learning, the algorithm itself decides which thing you should label (e.g. it can be pretty sure about a sentence that has the word fantastic, but it might ask you to confirm if the sentence may have a negative like “not”).
In semi-supervised learning, there are two different algorithms, which start with the labeled examples, and then "tell" each other the way they think about some large number of unlabeled data. From this "discussion" they learn.
Figure : Supervised learning approach
<center>
<img src="ML_supervised.gif" width="500" title="Supervised learning">
</center>
Training the ML algorithm
NLTK module is built for working with language data. NLTK supports classification, tokenization, stemming, tagging, parsing, and semantic reasoning functionalities. We will use the NLTK module and employ the naive Bayes method to classify words as being either positive or negative sentiment. You can also use other modules specifically meant for ML eg. sklearn module.
Step 1: Feature extraction
Define the features of a word that you want to use in order to classify the data set. We will select two features: the first and the last letter of the word.
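Concretely, each training instance handed to the NLTK classifier later on is just a (feature dictionary, label) pair. As an illustrative sketch (the word "good" here is only an example):
labeled_example = (feature_extractor('good'), 'positive')   # ({'start_letter': 'g', 'last_letter': 'd'}, 'positive')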
End of explanation
import csv
def feature_extractor(word):
Extract the features for a given word and return a dictionary of the features
start_letter = word[0]
last_letter = word[-1]
return {'start_letter' : start_letter,'last_letter' : last_letter}
def ML_train(sentiment_corpus):
Create feature set from the corpus given to it.
feature_set = []
with open(sentiment_corpus,'rt',encoding = 'utf-8') as sentobj:
sentiment_handle = csv.reader(sentobj)
for sentiment in sentiment_handle:
new_row = []
new_row.append(feature_extractor(sentiment[0])) #get the dictionary of features for a word
if int(sentiment[1]) >= 0: # Club the sentiment values (-5 to + 5) to just positive or negative
new_row.append('positive')
else:
new_row.append('negative')
feature_set.append(new_row)
print(feature_set)
def main():
sentiment_csv = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 3/Sent/word_sentiment.csv"
ML_train(sentiment_csv)
main()
Explanation: Step 2: Create the feature set
We will use the corpus of sentiments from the word_sentiment.csv file to create a feature dataset which we will use to train and test the ML model.
End of explanation
import csv
import random
def feature_extractor(word):
Extract the features for a given word and return a dictionary of the features
start_letter = word[0]
last_letter = word[-1]
return {'start_letter' : start_letter,'last_letter' : last_letter}
def ML_train(sentiment_corpus):
Create feature set from the corpus given to it. Split the feature set into training and testing sets
feature_set = []
with open(sentiment_corpus,'rt',encoding = 'utf-8') as sentobj:
sentiment_handle = csv.reader(sentobj)
for sentiment in sentiment_handle:
new_row = []
new_row.append(feature_extractor(sentiment[0])) #get the dictionary of features for a word
if int(sentiment[1]) >= 0: # Club the sentiment values (-5 to + 5) to just positive or negative
new_row.append('positive')
else:
new_row.append('negative')
feature_set.append(new_row)
random.shuffle(feature_set)
# We need to shuffle the features since the word_sentiment.csv had words arranged in alphabetical order
train_set = feature_set[:1500] #the first 1500 words becomes our training set
test_set = feature_set[1500:]
print(len(test_set))
def main():
sentiment_csv = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 3/Sent/word_sentiment.csv"
ML_train(sentiment_csv)
main()
Explanation: Step 3: Split the feature set into training and testing sets
We will split the feature data set into training and test data sets. The training set is used to train our ML model and then the testing set can be used to check how good the model is. It is normal to use 20% of the data set for testing purposes. In our case we will retain 1500 words for training and the rest for testing.
End of explanation
import csv
import random
import nltk
def feature_extractor(word):
Extract the features for a given word and return a dictionary of the features
start_letter = word[0]
last_letter = word[-1]
return {'start_letter' : start_letter,'last_letter' : last_letter}
def ML_train(sentiment_corpus):
Create feature set from the corpus given to it. Split the feature set into training and testing sets.
Train the classifier using the naive Bayes model and return the classifier.
feature_set = []
with open(sentiment_corpus,'rt',encoding = 'utf-8') as sentobj:
sentiment_handle = csv.reader(sentobj)
for sentiment in sentiment_handle:
new_row = []
new_row.append(feature_extractor(sentiment[0])) #get the dictionary of features for a word
if int(sentiment[1]) >= 0: # Club the sentiment values (-5 to + 5) to just positive or negative
new_row.append('positive')
else:
new_row.append('negative')
feature_set.append(new_row)
random.shuffle(feature_set)
# We need to shuffle the features since the word_sentiment.csv had words arranged in alphabetical order
train_set = feature_set[:1500] #the first 1500 words becomes our training set
test_set = feature_set[1500:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
# Note: to create the classifier we need to provide a dictionary of features and the label ONLY
return classifier
def main():
sentiment_csv = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 3/Sent/word_sentiment.csv"
classifier = ML_train(sentiment_csv)
input_word = input('Enter a word ').lower()
sentiment = classifier.classify(feature_extractor(input_word))
print('Sentiment of word "', input_word,'" is : ',sentiment)
main()
Explanation: Step 4: Use ML method (naive Bayes) to create the classifier model
The NLTK module gives us several ML methods to create a classifier model using our training set and based on our selected features.
End of explanation
import csv
import random
import nltk
def feature_extractor(word):
Extract the features for a given word and return a dictionary of the features
start_letter = word[0]
last_letter = word[-1]
return {'start_letter' : start_letter,'last_letter' : last_letter}
def ML_train(sentiment_corpus):
Create feature set from the corpus given to it. Split the feature set into training and testing sets.
Train the classifier using the naive Bayes model and return the classifier.
feature_set = []
with open(sentiment_corpus,'rt',encoding = 'utf-8') as sentobj:
sentiment_handle = csv.reader(sentobj)
for sentiment in sentiment_handle:
new_row = []
new_row.append(feature_extractor(sentiment[0])) #get the dictionary of features for a word
if int(sentiment[1]) >= 0: # Club the sentiment values (-5 to + 5) to just positive or negative
new_row.append('positive')
else:
new_row.append('negative')
feature_set.append(new_row)
random.shuffle(feature_set)
# We need to shuffle the features since the word_sentiment.csv had words arranged in alphabetical order
train_set = feature_set[:1500] #the first 1500 words becomes our training set
test_set = feature_set[1500:]
classifier = nltk.NaiveBayesClassifier.train(train_set)
# Note: to create the classifier we need to provide a dictionary of features and the label ONLY
print('Test accuracy of the classifier = ',nltk.classify.accuracy(classifier, test_set))
print(classifier.show_most_informative_features())
return classifier
def main():
sentiment_csv = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 3/Sent/word_sentiment.csv"
classifier = ML_train(sentiment_csv)
input_word = input('Enter a word ').lower()
sentiment = classifier.classify(feature_extractor(input_word))
print('Sentiment of word "', input_word,'" is : ',sentiment)
main()
Explanation: Step 5: Testing the model
Find how good the model is in identifying the labels. Ensure that the test set is distinct from the training corpus. If we simply re-used the training set as the test set, then a model that simply memorized its input, without learning how to generalize to new examples, would receive misleadingly high scores. The function nltk.classify.accuracy() will calculate the accuracy of a classifier model on a given test set.
End of explanation
#Enter code here
#
Explanation: Exercise
Improve the feature extractor (by adding new features) so that the test accuracy can go up to at least 70%.
End of explanation |
12,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculation of Equilibrium Concentrations in Competitive Binding Experiment
This notebook uses analytical solution of equilibrium expressions for 2 ligands competing for 1 population of protein binding sites.
Calculations are based on
Wang, Z. X. An exact mathematical expression for describing
competitive binding of two different ligands to a protein molecule. FEBS
Lett. 1995, 360, 111−114. doi
Step2: Strictly Competitive Binding Model
$ {K_A} $ and $ {K_B} $ are dissociation constants of A and B, binding to P.
$$PA {\stackrel{K_A}{\rightleftharpoons}} A + P $$
$$PB {\stackrel{K_B}{\rightleftharpoons}} B + P$$
$$ K_A = \frac{[P][A]}{[PA]} $$ $$ K_B = \frac{[P][B]}{[PB]} $$
$[A]_0$, $[B]_0$ and $[P]_0$ are total concentrations of A, B and P. $[A]$, $[B]$ and $[P]$ are the free concentrations. Conservation of mass
Step3: For one set of conditions
Step4: Modelling the competition experiment - Expected Binding Curve
Ideally the protein concentration should be 10-fold lower than the Kd. It must be at least half of the Kd if fluorescence detection requires a higher protein concentration.
For our assay it will be 0.5 uM.
Ideally the ligand concentration should span 100-fold Kd to 0.01-fold Kd in a log dilution series.
The ligand concentration will be in half-log dilutions from 20 uM ligand.
Step5: Without competitive ligand
Step6: Predicting experimental fluorescence signal of saturation binding experiment
Molar fluorescence values based on dansyl amide.
Step7: Fluorescent ligand (L) titration into buffer
Step8: Fluorescent ligand titration into protein (HSA)
Step9: Checking ligand depletion | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from IPython.display import display, Math, Latex #Do we even need this anymore?
%pylab inline
Explanation: Calculation of Equilibrium Concentrations in Competitive Binding Experiment
This notebook uses analytical solution of equilibrium expressions for 2 ligands competing for 1 population of protein binding sites.
Calculations are based on
Wang, Z. X. An exact mathematical expression for describing
competitive binding of two different ligands to a protein molecule. FEBS
Lett. 1995, 360, 111−114. doi:10.1016/0014-5793(95)00062-E
We will model binding of two ligands, fluorescent ligand (A) and non-fluorescent ligand (B), to protein (P).
End of explanation
#Competitive binding function
def three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B):
Parameters
----------
Ptot : float
Total protein concentration
Atot : float
Total tracer(fluorescent) ligand concentration
K_A : float
Dissociation constant of A
Btot : float
Total competitive ligand concentration
K_B : float
Dissociation constant of B
Returns
-------
P : float
Free protein concentration
A : float
Free ligand concentration
B : float
Free ligand concentration
PA : float
Concentration of PA complex
PB : float
Concentration of PB complex
Usage
-----
[P, A, B, PA, PB] = three_component_competitive_binding(Ptot, Atot, K_A, Btot, K_B)
# P^3 + aP^2 + bP + c = 0
a = K_A + K_B + Atot + Btot - Ptot
b = K_A*K_B + K_B*(Atot-Ptot) + K_A*(Btot-Ptot)
c = -K_A*K_B*Ptot
# Substitute P = u - a/3
# u^3 - qu - r = 0 where
q = (a**2)/3.0 - b
r = (-2.0/27.0)*a**3 +(1.0/3.0)*a*b - c
# Discriminant
delta = (r**2)/4.0 -(q**3)/27.0
# 3 roots. Physically meaningful root is u.
theta = np.arccos((-2*(a**3)+9*a*b-27*c)/(2*sqrt((a**2-3*b)**3)))
u = (2.0/3.0)*sqrt(a**2-3*b)*cos(theta/3.0)
# Free protein concentration [P]
P = u - a/3.0
# [PA]
PA = P*Atot/(K_A + P)
# [PB]
PB = P*Btot/(K_B + P)
# Free A concentration [A]
A = Atot - PA
# Free B concentration [B]
B = Btot - PB
# Apparent Kd of A (shift caused by competitive ligand)
# K_A_app = K_A*(1+B/K_B)
return [P, A, B, PA, PB]
Explanation: Strictly Competitive Binding Model
$ {K_A} $ and $ {K_B} $ are dissociation constants of A and B, binding to P.
$$PA {\stackrel{K_A}{\rightleftharpoons}} A + P $$
$$PB {\stackrel{K_B}{\rightleftharpoons}} B + P$$
$$ K_A = \frac{[P][A]}{[PA]} $$ $$ K_B = \frac{[P][B]}{[PB]} $$
$[A]_0$, $[B]_0$ and $[P]_0$ are total concentrations of A, B and P. $[A]$, $[B]$ and $[P]$ are the free concentrations. Conservation of mass:
$$[A]_0 = [A]+[PA]$$
$$[B]_0 = [B]+[PB]$$
$$[P]_0 = [P]+[PA]+[PB]$$
Combining with equilibrium expressions:
$$ K_A = \frac{[P]([A]_0 - [PA])}{[PA]} $$ $$ K_B = \frac{[P]([B]_0 - [PB])}{[PB]} $$
Reorganize to get [PA] and [PB].
$$ [PA] = \frac{[P][A]_0}{K_A+[P]} $$ $$ [PB] = \frac{[P][B]_0}{K_B+[P]} $$
$$ [P]_0 = [P] + \frac{[P][A]_0}{K_A+[P]} + \frac{[P][B]_0}{K_B+[P]} $$
Expanding results:
$ 0 = [P]^3 + K_A[P]^2+ K_B[P]^2 + [A]_0[P]^2 + [B]_0[P]^2 - [P]_0[P]^2 +K_AK_B[P] +[A]_0K_B[P] +[B]_0K_A[P] - K_A[P]_0[P]-K_B[P]_0[P]-K_AK_B[P]_0$
$ 0 = [P]^3 + a[P]^2 + b[P] +c $
$ a = K_A+ K_B + [A]_0 + [B]_0 - [P]_0 $
$ b = K_AK_B +K_B([A]_0-[P]_0) +K_A([B]_0 - [P]_0) $
$ c = - K_AK_B[P]_0$
Substituting $ [P] = u - \frac{a}{3} $, gives us:
$ u^3 -qu - r =0 $
$ q=\frac{a^2}{3}-b $
$ r = \frac{-2}{27}a^3+\frac{1}{3}ab-c$
Discriminant:
$ \Delta = \frac{r^2}{4} - \frac{q^3}{27}$
Since $ \Delta < 0$, there are 3 real roots.
Only physically meaningful root is $ u = \frac{2}{3}\sqrt{a^2-3b}\cos(\frac{\theta}{3})$, where $\theta=\arccos{\frac{-2a^3+9ab-27c}{2\sqrt{(a^2-3b)^3}}}$
Then we can calculate equilibrium concentrations of species.
$$ [P]= u -\frac{a}{3} = -\frac{a}{3} + \frac{2}{3}\sqrt{a^2-3b}\cos(\frac{\theta}{3}) $$
$$ [PA] = \frac{[P][A]_0}{K_A+[P]} = \frac{[A]_0(2\sqrt{a^2-3b}\cos(\frac{\theta}{3})-a)}{3K_A+(2\sqrt{a^2-3b}\cos(\frac{\theta}{3})-a)} $$
$$ [PB] = \frac{[P][B]_0}{K_B+[P]} = \frac{[B]_0(2\sqrt{a^2-3b}\cos(\frac{\theta}{3})-a)}{3K_B+(2\sqrt{a^2-3b}\cos(\frac{\theta}{3})-a)} $$
$$[A] = [A]_0-[PA] $$
$$[B] = [B]_0-[PB]$$
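As a quick numerical sanity check of this closed-form solution (a sketch using the three_component_competitive_binding_exact function defined above, with example concentrations), the returned concentrations should satisfy the equilibrium expressions and conservation of protein:
# Sanity check of the analytical root (example conditions in uM)
Ptot, Atot, Btot = 0.5, 200.0, 50.0
K_A, K_B = 15.0, 3.85
P, A, B, PA, PB = three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B)
print(np.isclose(P * A / PA, K_A))    # K_A = [P][A]/[PA]
print(np.isclose(P * B / PB, K_B))    # K_B = [P][B]/[PB]
print(np.isclose(P + PA + PB, Ptot))  # [P]0 = [P] + [PA] + [PB]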
End of explanation
# Initial total concentrations of P, A and B (uM)
Ptot=0.5
Atot=200
Btot=50
# Dissociation constant for fluorescent ligand A: K_A (uM)
K_A=15
# Dissociation constant for non-fluorescent ligand B: K_B (uM)
K_B=3.85
[P, A, B, PA, PB] = three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B)
[P, A, B, PA, PB]
Explanation: For one set of conditions
End of explanation
# Dilution series for fluorescent ligand A
# Number of wells in a dilution series
num_wells = 12.0
# (uM)
Amax = 2000
Amin = 0.02
# Factor for logarithmic dilution (n)
# Amax*((1/n)^(11)) = Amin for 12 wells
n = (Amax/Amin)**(1/(num_wells-1))
# Fluorescent-ligand titration series (uM)
Atot = Amax / np.array([n**(float(i)) for i in range(12)])
Atot
# Dissociation constant for fluorescent ligand A: K_A (uM)
K_A = 15
# Dissociation constant for non-fluorescent ligand B: K_B (uM)
K_B=3.85
Explanation: Modelling the competition experiment - Expected Binding Curve
Ideally the protein concentration should be 10-fold lower than the Kd. It must be at least half of the Kd if fluorescence detection requires a higher protein concentration.
For our assay it will be 0.5 uM.
Ideally the ligand concentration should span 100-fold Kd to 0.01-fold Kd in a log dilution series.
The ligand concentration will be in half-log dilutions from 20 uM ligand.
End of explanation
# Constant concentration of B that will be added to all wells (uM)
# If there is no competitive ligand
Btot= 0
[P_B0, A_B0, B_B0, PA_B0, PB_B0] = three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B)
# If B concentration is 10 uM
Btot= 10
[P_B10, A_B10, B_B10, PA_B10, PB_B10] = three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B)
# If B concentration is 50 uM
Btot= 50
[P_B50, A_B50, B_B50, PA_B50, PB_B50] = three_component_competitive_binding_exact(Ptot, Atot, K_A, Btot, K_B)
plt.semilogx(Atot,PA_B0, 'o', label='[Btot]=0 uM')
plt.semilogx(Atot,PA_B10, 'o', label='[Btot]=10 uM')
plt.semilogx(Atot,PA_B50, 'o', label='[Btot]=50 uM')
plt.xlabel('Atot')
plt.ylabel('[PA]')
plt.ylim(1e-3,6e-1)
plt.xlim(1e-2,1e+4)
plt.axhline(Ptot,color='0.75',linestyle='--',label='[Ptot]')
plt.axvline(K_A,color='k',linestyle='--',label='K_A')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
Explanation: Without competitive ligand
End of explanation
# Background fluorescence
BKG = 86.2
# Molar fluorescence of free ligand
MF = 2.5
# Molar fluorescence of ligand in complex
FR = 306.1
MFC = FR * MF
Explanation: Predicting experimental fluorescence signal of saturation binding experiment
Molar fluorescence values based on dansyl amide.
End of explanation
Atot
# Fluorescence measurement of buffer + ligand L titrations
A=Atot
Flu_buffer = MF*A + BKG
Flu_buffer
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Atot,Flu_buffer,'o')
plt.xlabel('[A]')
plt.ylabel('Fluorescence')
plt.ylim(50,6000)
Explanation: Fluorescent ligand (L) titration into buffer
End of explanation
# Fluorescence measurement of the HSA + A serial dilution + 0 uM B
Flu_HSA_B0 = MF*A_B0 + BKG + FR*MF*PA_B0
# Fluorescence measurement of the HSA + A serial dilution + 10 uM B
Flu_HSA_B10 = MF*A_B10 + BKG + FR*MF*PA_B10
# Fluorescence measurement of the HSA + A serial dilution + 50 uM B
Flu_HSA_B50 = MF*A_B50 + BKG + FR*MF*PA_B50
plt.semilogx(Atot,Flu_buffer,'.',label='buffer + titration of A ')
plt.semilogx(Atot, Flu_HSA_B0 ,'.', label='HSA + titration of A')
plt.semilogx(Atot, Flu_HSA_B10 ,'.', label='HSA + titration of A + 10 uM B ')
plt.semilogx(Atot, Flu_HSA_B50 ,'.', label='HSA + titration of A + 50 uM B')
plt.xlabel('[A_tot]')
plt.ylabel('Fluorescence')
plt.ylim(50,6000)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Fluorescent ligand titration into protein (HSA)
End of explanation
A_percent_depletion_B0=((Atot-A_B0)/Atot)*100
plt.semilogx(Atot,A_percent_depletion_B0,'.',label='A_B0')
A_percent_depletion_B50=((Atot-A_B50)/Atot)*100
plt.semilogx(Atot,A_percent_depletion_B50,'.',label='A_B50')
Btot=50
B_percent_depletion_B50=((Btot-B_B50)/Btot)*100
plt.semilogx(Atot,B_percent_depletion_B50,'.',label='B_B50')
plt.xlabel('[A_tot]')
plt.ylabel('% ligand depletion')
plt.ylim(-0,50)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: Checking ligand depletion
End of explanation |
12,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simply use the metric we created to define the quality of a app. If the weighted rating is no less than 4.0, it can be seen as a good app. If the weighted rating is no more than 2.5, it is a bad app.
Step1: Use star value of different reviews to filter comments.
Step2: <b>Cleaning and Preprocessing</b>
Data cleaning is absolutely crucial for generating a useful topic model. The steps below are common to most natural language processing methods
Step3: <b>Preparing Document-Term Matrix</b>
* Convert a corpus into a document-term matrix. LDA model looks for repeating term patterns in the entire DT matrix
Step4: <b>Running LDA Model (Batch Wise LDA)</b>
* According to the reference, in order to retrieve most important topic terms, a corpus can be divided into batches of fixed sizes. Running LDA multiple times on these batches will provide different results, however, the best topic terms will be the intersection of all batches.
Step5: <b>Examining the results</b>
Step6: Each generated topic is separated by a comma. Within each topic are the five most probable words to appear in that topic. The best topic terms will be the intersection of all three batches. Some things to think about, for the good app, the comments have common features like
Step7: For the bad apps, from the result, we can see most topics include the word "time". We can refer that customers are not satisfied for the using fluency of these apps. And for the updated version of these apps, they doesn't work sometimes, maybe because flashing back. Meanwhile, compared with the last version, these updated apps maybe designed not that good.
<b>Running LDA Model (For the whole documents)</b> | Python Code:
good_app = app.loc[app['weighted_rating'] >=4.0]
bad_app = app.loc[app['weighted_rating'] <=2.5]
good_app = good_app.reset_index(drop=True)
bad_app = bad_app.reset_index(drop=True)
category = app['category']
cate_list = []
for i in category.unique():
cate = i.lower()
cate_list.append(cate)
Explanation: Simply use the metric we created to define the quality of an app. If the weighted rating is no less than 4.0, it can be seen as a good app. If the weighted rating is no more than 2.5, it is a bad app.
End of explanation
first_good= good_app.loc[good_app['review1_star']>=4].reset_index(drop=True)['review1']
second_good = good_app.loc[good_app['review2_star']>=4].reset_index(drop=True)['review2']
third_good = good_app.loc[good_app['review3_star']>=4].reset_index(drop=True)['review3']
first_bad = bad_app.loc[bad_app['review1_star']<=2.5].reset_index(drop=True)['review1']
second_bad = bad_app.loc[bad_app['review2_star']<=2.5].reset_index(drop=True)['review2']
third_bad = bad_app.loc[bad_app['review3_star']<=2.5].reset_index(drop=True)['review3']
good_rev = first_good.append(second_good)
all_good = good_rev.append(third_good)
bad_rev = first_bad.append(second_bad)
all_bad = bad_rev.append(third_bad)
Explanation: Use star value of different reviews to filter comments.
End of explanation
stop = set(stopwords.words('english')+[u'one',u'app',u'it',u'dont',u"i",u"'s","''","``",u'use',u'used',u'using',u'love',
u'would',u'great',u'app.',u'like',u'lot']+ cate_list)
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()
def stem(tokens,stemmer = PorterStemmer().stem):
return [stemmer(w.lower()) for w in tokens if w not in stop]
def clean(doc):
stop_free = " ".join([i for i in doc.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
tokenize = nltk.word_tokenize
to_token = stem(tokenize(normalized))
tags = nltk.pos_tag(to_token)
dt_tags = [t for t in tags if t[1] in ["DT", "MD", "VBP","IN", "JJ","VB"]]
for tag in dt_tags:
normalized = " ".join(tok for tok in to_token if tok not in tag[0])
return normalized
doc_clean_g1 = [clean(doc).split() for doc in first_good]
doc_clean_g2 = [clean(doc).split() for doc in second_good]
doc_clean_g3 = [clean(doc).split() for doc in third_good]
doc_clean_b1 = [clean(doc).split() for doc in first_bad]
doc_clean_b2 = [clean(doc).split() for doc in second_bad]
doc_clean_b3 = [clean(doc).split() for doc in third_bad]
doc_clean_good = [clean(doc).split() for doc in all_good]
doc_clean_bad = [clean(doc).split() for doc in all_bad]
Explanation: <b>Cleaning and Preprocessing</b>
Data cleaning is absolutely crucial for generating a useful topic model. The steps below are common to most natural language processing methods:
* Tokenizing: converting a document to its atomic elements.
* Stopping: removing meaningless words.
* Stemming: merging words that are equivalent in meaning.
Here we need to note that the POS tag filter is more about the context of the features than the frequencies of the features. Topic modelling tries to map out the recurring patterns of terms into topics. However, every term might not be equally important contextually. For example, the POS tag IN contains terms such as "within", "upon", "except"; "CD" contains "one", "two", "hundred", etc.; "MD" contains "may", "must", etc. These terms are the supporting words of a language and can be removed by studying their POS tags.
End of explanation
# Creating the term dictionary of our corpus, where every unique term is assigned an index.
dictionary_g1 = corpora.Dictionary(doc_clean_g1)
dictionary_g2 = corpora.Dictionary(doc_clean_g2)
dictionary_g3 = corpora.Dictionary(doc_clean_g3)
dictionary_b1 = corpora.Dictionary(doc_clean_b1)
dictionary_b2 = corpora.Dictionary(doc_clean_b2)
dictionary_b3 = corpora.Dictionary(doc_clean_b3)
dictionary_good = corpora.Dictionary(doc_clean_good)
dictionary_bad = corpora.Dictionary(doc_clean_bad)
# Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above.
doc_term_matrix_g1 = [dictionary_g1.doc2bow(doc) for doc in doc_clean_g1]
doc_term_matrix_g2 = [dictionary_g2.doc2bow(doc) for doc in doc_clean_g2]
doc_term_matrix_g3 = [dictionary_g3.doc2bow(doc) for doc in doc_clean_g3]
doc_term_matrix_b1 = [dictionary_b1.doc2bow(doc) for doc in doc_clean_b1]
doc_term_matrix_b2 = [dictionary_b2.doc2bow(doc) for doc in doc_clean_b2]
doc_term_matrix_b3 = [dictionary_b3.doc2bow(doc) for doc in doc_clean_b3]
doc_term_matrix_good = [dictionary_good.doc2bow(doc) for doc in doc_clean_good]
doc_term_matrix_bad = [dictionary_bad.doc2bow(doc) for doc in doc_clean_bad]
Explanation: <b>Preparing Document-Term Matrix</b>
* Convert a corpus into a document-term matrix. LDA model looks for repeating term patterns in the entire DT matrix
End of explanation
# Creating the object for LDA model using gensim library
Lda = gensim.models.ldamodel.LdaModel
# Running and training the LDA model on the document-term matrix.
ldamodel_g1 = Lda(doc_term_matrix_g1, num_topics=3, id2word = dictionary_g1, passes=50)
ldamodel_g2 = Lda(doc_term_matrix_g2, num_topics=3, id2word = dictionary_g2, passes=50)
ldamodel_g3 = Lda(doc_term_matrix_g3, num_topics=3, id2word = dictionary_g3, passes=50)
ldamodel_b1 = Lda(doc_term_matrix_b1, num_topics=3, id2word = dictionary_b1, passes=50)
ldamodel_b2 = Lda(doc_term_matrix_b2, num_topics=3, id2word = dictionary_b2, passes=50)
ldamodel_b3 = Lda(doc_term_matrix_b3, num_topics=3, id2word = dictionary_b3, passes=50)
Explanation: <b>Running LDA Model (Batch Wise LDA)</b>
* According to the reference, in order to retrieve most important topic terms, a corpus can be divided into batches of fixed sizes. Running LDA multiple times on these batches will provide different results, however, the best topic terms will be the intersection of all batches.
End of explanation
print(ldamodel_g1.print_topics(num_topics=3, num_words=5))
print(ldamodel_g2.print_topics(num_topics=3, num_words=5))
print(ldamodel_g3.print_topics(num_topics=3, num_words=5))
Explanation: <b>Examining the results</b>
End of explanation
print(ldamodel_b1.print_topics(num_topics=3, num_words=5))
print(ldamodel_b2.print_topics(num_topics=3, num_words=5))
print(ldamodel_b3.print_topics(num_topics=3, num_words=5))
Explanation: Each generated topic is separated by a comma. Within each topic are the five most probable words to appear in that topic. The best topic terms will be the intersection of all three batches. Some things to think about: for the good apps, the comments share common features such as:
1. The app is free and has some good features that satisfy customers' demands.
2. It provides plenty of good information and detail, and the visuals (e.g. the screen layout) are comfortable for customers.
3. The speed is awesome and it saves time.
4. It provides help while customers are using it.
End of explanation
ldamodel_good = Lda(doc_term_matrix_good, num_topics=10, id2word = dictionary_good, passes=20)
ldamodel_bad = Lda(doc_term_matrix_bad, num_topics=10, id2word = dictionary_bad, passes=20)
print(ldamodel_good.print_topics(num_topics=5, num_words=3))
print(ldamodel_bad.print_topics(num_topics=5, num_words=3))
import pyLDAvis
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
good_rev = pyLDAvis.gensim.prepare(ldamodel_good, doc_term_matrix_good, dictionary_good)
bad_rev = pyLDAvis.gensim.prepare(ldamodel_bad, doc_term_matrix_bad, dictionary_bad)
pyLDAvis.save_html(good_rev,"good_rev.html")
good_rev
bad_rev
pyLDAvis.save_html(bad_rev,"bad_rev.html")
bad_rev
Explanation: For the bad apps, we can see from the result that most topics include the word "time". We can infer that customers are not satisfied with how smoothly these apps run. Updated versions of these apps sometimes don't work, perhaps because they crash, and compared with the previous version the updated apps may also be poorly designed.
<b>Running LDA Model (For the whole documents)</b>
End of explanation |
12,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SINGA Core Classes
<img src="http
Step1: NOTE
Step2: Tensor
A tensor instance represents a multi-dimensional array allocated on a device instance.
It provides linear algebra operations, like +, -, *, /, dot, pow, etc.
NOTE
Step3: Initialize tensor values
Step4: To and from numpy
Step5: Move tensor between devices
Step6: Operations
NOTE
Step7: Member functions (in-place)
These functions would change the content of the tensor
Step8: Global functions (out of place)
These functions would not change the memory of the tensor, instead they return a new tensor
Unary functions
Step9: Binary functions
Step10: BLAS
BLAS function may change the memory of input tensor | Python Code:
from singa import device
default_dev = device.get_default_device()
gpu = device.create_cuda_gpu() # the first gpu device
gpu
Explanation: SINGA Core Classes
<img src="http://singa.apache.org/en/_static/images/singav1-sw.png" width="500px"/>
Device
A device instance represents a hardware device with multiple execution units, e.g.,
* A GPU which has multiple CUDA streams
* A CPU which has multiple threads
All data structures (variables) are allocated on a device instance. Consequently, all operations are executed on the resident device.
Create a device instance
End of explanation
gpu = device.create_cuda_gpu_on(1) # use the gpu device with the specified GPU ID
gpu_list1 = device.create_cuda_gpus(2) # the first two gpu devices
gpu_list2 = device.create_cuda_gpus([0,2]) # create the gpu instances on the given GPU IDs
opencl_gpu = device.create_opencl_device() # valid if SINGA is compiled with USE_OPENCL=ON
device.get_num_gpus()
device.get_gpu_ids()
Explanation: NOTE: currently we can only call the creating function once due to the cnmem restriction.
End of explanation
from singa import tensor
import numpy as np
a = tensor.Tensor((2, 3))
a.shape
a.device
gb = tensor.Tensor((2, 3), gpu)
gb.device
Explanation: Tensor
A tensor instance represents a multi-dimensional array allocated on a device instance.
It provides linear algebra operations, like +, -, *, /, dot, pow, etc.
NOTE: class member functions are in-place; global functions are out-of-place.
Create tensor instances
End of explanation
a.set_value(1.2)
gb.gaussian(0, 0.1)
Explanation: Initialize tensor values
End of explanation
tensor.to_numpy(a)
tensor.to_numpy(gb)
c = tensor.from_numpy(np.array([1,2], dtype=np.float32))
c.shape
c.copy_from_numpy(np.array([3,4], dtype=np.float32))
tensor.to_numpy(c)
Explanation: To and from numpy
End of explanation
gc = c.clone()
gc.to_device(gpu)
gc.device
b = gb.clone()
b.to_host() # the same as b.to_device(default_dev)
b.device
Explanation: Move tensor between devices
End of explanation
gb.l1()
a.l2()
e = tensor.Tensor((2, 3))
e.is_empty()
gb.size()
gb.memsize()
# note we can only support matrix multiplication for tranposed tensors;
# other operations on transposed tensor would result in errors
c.is_transpose()
et=e.T()
et.is_transpose()
et.shape
et.ndim()
Explanation: Operations
NOTE: tensors should be initialized if the operation would read the tensor values
Summary
End of explanation
a += b
tensor.to_numpy(a)
a -= b
tensor.to_numpy(a)
a *= 2
tensor.to_numpy(a)
a /= 3
tensor.to_numpy(a)
d = tensor.Tensor((3,))
d.uniform(-1,1)
tensor.to_numpy(d)
a.add_row(d)
tensor.to_numpy(a)
Explanation: Member functions (in-place)
These functions would change the content of the tensor
End of explanation
h = tensor.sign(d)
tensor.to_numpy(h)
tensor.to_numpy(d)
h = tensor.abs(d)
tensor.to_numpy(h)
h = tensor.relu(d)
tensor.to_numpy(h)
g = tensor.sum(a, 0)
g.shape
g = tensor.sum(a, 1)
g.shape
tensor.bernoulli(0.5, g)
tensor.to_numpy(g)
g.gaussian(0, 0.2)
tensor.gaussian(0, 0.2, g)
tensor.to_numpy(g)
Explanation: Global functions (out of place)
These functions would not change the memory of the tensor, instead they return a new tensor
Unary functions
End of explanation
f = a + b
tensor.to_numpy(f)
g = a < b
tensor.to_numpy(g)
tensor.add_column(2, c, 1, f) # f = 2 *c + 1* f
tensor.to_numpy(f)
Explanation: Binary functions
End of explanation
tensor.axpy(2, a, f) # f = 2a + f
tensor.to_numpy(b)
f = tensor.mult(a, b.T())
tensor.to_numpy(f)
tensor.mult(a, b.T(), f, 2, 1) # f = 2a*b.T() + 1f
tensor.to_numpy(f)
Explanation: BLAS
BLAS function may change the memory of input tensor
End of explanation |
12,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../../images/qiskit-heading.gif" alt="Note
Step1: In this section, we first judge the version of Python and import the packages of qiskit, math to implement the following code. We show our algorithm on the ibm_qasm_simulator, if you need to run it on the real quantum conputer, please remove the "#" in frint of "import Qconfig".
Step2: Here we define the number pi in the math lib, because we need to use u3 gate. And we also define a list about the parameter theta which we need to use in the u3 gate. As the same above, if you want to implement on the real quantum comnputer, please remove the symbol "#" and configure your local Qconfig.py file. | Python Code:
# import math lib
from math import pi
# import Qiskit
from qiskit import Aer, IBMQ, execute
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
# To use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
Explanation: <img src="../../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Quantum K-Means algorithm
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
Contributors
Shan Jin, Xi He, Xiaokai Hou, Li Sun, Dingding Wen, Shaojun Wu and Xiaoting Wang$^{1}$
Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China,Chengdu, China,610051
Introduction
Clustering algorithm is a typical unsupervised learning algorithm, which is mainly used to automatically classify similar samples into one category.In the clustering algorithm, according to the similarity between the samples, the samples are divided into different categories. For different similarity calculation methods, different clustering results will be obtained. The commonly used similarity calculation method is the Euclidean distance method.
What we want to show is the quantum K-Means algorithm. The K-Means algorithm is a distance-based clustering algorithm that uses distance as an evaluation index for similarity, that is, the closer the distance between two objects is, the greater the similarity. The algorithm considers the cluster to be composed of objects that are close together, so the compact and independent cluster is the ultimate target.
Experiment design
The implementation of the quantum K-Means algorithm mainly uses the swap test to compare the distances among the input data points. Select K points randomly from N data points as centroids, measure the distance from each point to each centroid, and assign it to the nearest centroid- class, recalculate centroids of each class that has been obtained, and iterate 2 to 3 steps until the new centroid is equal to or less than the specified threshold, and the algorithm ends. In our example, we selected 6 data points, 2 centroids, and used the swap test circuit to calculate the distance. Finally, we obtained two clusters of data points.
$|0\rangle$ is an auxiliary qubit, through left $H$ gate, it will be changed to $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Then under the control of $|1\rangle$, the circuit will swap two vectors $|x\rangle$ and $|y\rangle$. Finally, we get the result at the right end of the circuit:
$$|0_{anc}\rangle |x\rangle |y\rangle \rightarrow \frac{1}{2}|0_{anc}\rangle(|xy\rangle + |yx\rangle) + \frac{1}{2}|1_{anc}\rangle(|xy\rangle - |yx\rangle)$$
If we measure auxiliary qubit alone, then the probability of final state in the ground state $|1\rangle$ is:
$$P(|1_{anc}\rangle) = \frac{1}{2} - \frac{1}{2}|\langle x | y \rangle|^2$$
If we measure auxiliary qubit alone, then the probability of final state in the ground state $|1\rangle$ is:
$$Euclidean \ distance = \sqrt{(2 - 2|\langle x | y \rangle|)}$$
So, we can see that the probability of measuring $|1\rangle$ has positive correlation with the Euclidean distance.
The schematic diagram of quantum K-Means is as the follow picture.[1]
<img src="../images/k_means_circuit.png">
To make our algorithm can be run using qiskit, we design a more detailed circuit to achieve our algorithm.
|
Quantum K-Means circuit
<img src="../images/k_means.png">
Data points
<table border="1">
<tr>
<td>point num</td>
<td>theta</td>
<td>phi</td>
<td>lam</td>
<td>x</td>
<td>y</td>
</tr>
<tr>
<td>1</td>
<td>0.01</td>
<td>pi</td>
<td>pi</td>
<td>0.710633</td>
<td>0.703562</td>
</tr>
<tr>
<td>2</td>
<td>0.02</td>
<td>pi</td>
<td>pi</td>
<td>0.714142</td>
<td>0.7</td>
</tr>
<tr>
<td>3</td>
<td>0.03</td>
<td>pi</td>
<td>pi</td>
<td>0.717633</td>
<td>0.696421</td>
</tr>
<tr>
<td>4</td>
<td>0.04</td>
<td>pi</td>
<td>pi</td>
<td>0.721107</td>
<td>0.692824</td>
</tr>
<tr>
<td>5</td>
<td>0.05</td>
<td>pi</td>
<td>pi</td>
<td>0.724562</td>
<td>0.68921</td>
</tr>
<tr>
<td>6</td>
<td>1.31</td>
<td>pi</td>
<td>pi</td>
<td>0.886811</td>
<td>0.462132</td>
</tr>
<tr>
<td>7</td>
<td>1.32</td>
<td>pi</td>
<td>pi</td>
<td>0.889111</td>
<td>0.457692</td>
</tr>
<tr>
<td>8</td>
<td>1.33</td>
<td>pi</td>
<td>pi</td>
<td>0.891388</td>
<td>0.453241</td>
</tr>
<tr>
<td>9</td>
<td>1.34</td>
<td>pi</td>
<td>pi</td>
<td>0.893643</td>
<td>0.448779</td>
</tr>
<tr>
<td>10</td>
<td>1.35</td>
<td>pi</td>
<td>pi</td>
<td>0.895876</td>
<td>0.444305</td>
</tr>
## Quantum K-Means algorithm program
End of explanation
theta_list = [0.01, 0.02, 0.03, 0.04, 0.05, 1.31, 1.32, 1.33, 1.34, 1.35]
Explanation: In this section, we first check the version of Python and import the qiskit and math packages needed to implement the following code. We show our algorithm on the ibm_qasm_simulator; if you need to run it on the real quantum computer, please remove the "#" in front of "import Qconfig".
End of explanation
# create Quantum Register called "qr" with 5 qubits
qr = QuantumRegister(5, name="qr")
# create Classical Register called "cr" with 5 bits
cr = ClassicalRegister(5, name="cr")
# Creating Quantum Circuit called "qc" involving your Quantum Register "qr"
# and your Classical Register "cr"
qc = QuantumCircuit(qr, cr, name="k_means")
#Define a loop to compute the distance between each pair of points
for i in range(9):
for j in range(1,10-i):
# Set the parament theta about different point
theta_1 = theta_list[i]
theta_2 = theta_list[i+j]
#Achieve the quantum circuit via qiskit
qc.h(qr[2])
qc.h(qr[1])
qc.h(qr[4])
qc.u3(theta_1, pi, pi, qr[1])
qc.u3(theta_2, pi, pi, qr[4])
qc.cswap(qr[2], qr[1], qr[4])
qc.h(qr[2])
qc.measure(qr[2], cr[2])
qc.reset(qr)
job = execute(qc, backend=backend, shots=1024)
result = job.result()
print(result)
print('theta_1:' + str(theta_1))
print('theta_2:' + str(theta_2))
# print( result.get_data(qc))
plot_histogram(result.get_counts())
Explanation: Here we define the number pi in the math lib, because we need to use u3 gate. And we also define a list about the parameter theta which we need to use in the u3 gate. As the same above, if you want to implement on the real quantum comnputer, please remove the symbol "#" and configure your local Qconfig.py file.
End of explanation |
12,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logic
This Jupyter notebook acts as supporting material for topics covered in Chapter 6 Logical Agents, Chapter 7 First-Order Logic and Chapter 8 Inference in First-Order Logic of the book Artificial Intelligence
Step1: CONTENTS
Logical sentences
Expr
PropKB
Knowledge-based agents
Inference in propositional knowledge base
Truth table enumeration
Proof by resolution
Forward and backward chaining
DPLL
WalkSAT
SATPlan
FolKB
Inference in first order knowledge base
Unification
Forward chaining algorithm
Backward chaining algorithm
Logical Sentences
The Expr class is designed to represent any kind of mathematical expression. The simplest type of Expr is a symbol, which can be defined with the function Symbol
Step2: Or we can define multiple symbols at the same time with the function symbols
Step3: We can combine Exprs with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q"
Step4: This works because the Expr class overloads the & operator with this definition
Step5: It is important to note that the Expr class does not define the logic of Propositional Logic sentences; it just gives you a way to represent expressions. Think of an Expr as an abstract syntax tree. Each of the args in an Expr can be either a symbol, a number, or a nested Expr. We can nest these trees to any depth. Here is a deeply nested Expr
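For instance, a minimal sketch of building such expressions (assuming the aima-python utils module is importable) could look like this:
from utils import Symbol, symbols, expr

P = Symbol('P')                       # a single propositional symbol
P, Q, R = symbols('P, Q, R')          # several symbols at once
sentence = P & ~Q                     # the Expr for "P and not Q"
nested = (P & Q) | ~(R & (P | Q))     # Exprs nest to any depth
print(sentence.op, sentence.args)     # '&' with args (P, ~Q)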
Step6: Operators for Constructing Logical Sentences
Here is a table of the operators that can be used to form sentences. Note that we have a problem
Step7: expr
Step8: expr takes a string as input, and parses it into an Expr. The string can contain arrow operators
Step9: For now that's all you need to know about expr. If you are interested, we explain the messy details of how expr is implemented and how |'==>'| is handled in the appendix.
Propositional Knowledge Bases
Step10: We define the symbols we use in our clauses.<br/>
$P_{x, y}$ is true if there is a pit in [x, y].<br/>
$B_{x, y}$ is true if the agent senses breeze in [x, y].<br/>
Step11: Now we tell sentences based on section 7.4.3.<br/>
There is no pit in [1,1].
Step12: A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
Step13: Now we include the breeze percepts for the first two squares leading up to the situation in Figure 7.3(b)
Step14: We can check the clauses stored in a KB by accessing its clauses variable
Step15: We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF, which is stored in the KB.<br/>
$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.<br/>
$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.<br/>
$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.<br/>
$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 2})$ is converted in similar manner.
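In code this amounts to inspecting the KB's clauses list; a sketch (the clauses shown in the comment are indicative of the CNF forms derived above):
print(wumpus_kb.clauses)
# e.g. [~P11, (P12 | P21 | ~B11), (~P12 | B11), (~P21 | B11),
#       (P11 | P22 | P31 | ~B21), (~P11 | B21), (~P22 | B21), (~P31 | B21), ~B11, B21]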
Knowledge based agents
A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.
The knowledge base may initially contain some background knowledge.
<br>
The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.
<br>
Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.
<br>
Our implementation of KB-Agent is encapsulated in a class KB_AgentProgram which inherits from the KB class.
<br>
Let's have a look.
Step16: The helper functions make_percept_sentence, make_action_query and make_action_sentence are all aptly named and as expected,
make_percept_sentence makes first-order logic sentences about percepts we want our agent to receive,
make_action_query asks the underlying KB about the action that should be taken and
make_action_sentence tells the underlying KB about the action it has just taken.
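A rough sketch of the idea (paraphrasing; not necessarily the exact code in logic.py):
import itertools
from utils import expr

def KB_AgentProgram(KB):
    # Generic knowledge-based agent program (sketch)
    steps = itertools.count()

    def make_percept_sentence(percept, t):
        return expr('Percept({}, {})'.format(percept, t))

    def make_action_query(t):
        return expr('ShouldDo(action, {})'.format(t))

    def make_action_sentence(action, t):
        return expr('Did({}, {})'.format(action, t))

    def program(percept):
        t = next(steps)
        KB.tell(make_percept_sentence(percept, t))  # TELL the KB what we perceive
        action = KB.ask(make_action_query(t))       # ASK the KB for the best action
        KB.tell(make_action_sentence(action, t))    # TELL the KB what we did
        return action

    return program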
Inference in Propositional Knowledge Base
In this section we will look at two algorithms to check if a sentence is entailed by the KB. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.
Truth Table Enumeration
It is a model-checking approach which, as the name suggests, enumerates all possible models in which the KB is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the KB and enumerate the $2^{n}$ models in a depth-first manner and check the truth of KB and $\alpha$.
Step17: The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere.
<br>
If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then,
it checks whether model is consistent with kb.
The given models correspond to the lines in the truth table,
which have a true in the KB column,
and for these lines it checks whether the query evaluates to true
<br>
result = pl_true(alpha, model).
<br>
<br>
In short, tt_check_all evaluates this logical expression for each model
<br>
pl_true(kb, model) => pl_true(alpha, model)
<br>
whose negation is
<br>
pl_true(kb, model) & ~pl_true(alpha, model)
<br>
so checking the implication in every model amounts to checking that the knowledge base together with the negation of the query is unsatisfiable, i.e. logically inconsistent.
<br>
<br>
tt_entails() just extracts the symbols from the query and calls tt_check_all() with the proper parameters.
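Putting that together, the core recursion looks roughly like this (a sketch, not necessarily identical to the library code; pl_true, extend and prop_symbols are the helpers from logic.py/utils.py):
def tt_check_all(kb, alpha, symbols, model):
    # Enumerate all models over `symbols`; wherever kb holds, alpha must hold too
    if not symbols:
        if pl_true(kb, model):
            return pl_true(alpha, model)
        return True   # kb is false in this model, so it cannot be a counterexample
    P, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, extend(model, P, True)) and
            tt_check_all(kb, alpha, rest, extend(model, P, False)))

def tt_entails(kb, alpha):
    return tt_check_all(kb, alpha, list(prop_symbols(kb & alpha)), {})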
Step18: Keep in mind that for two symbols P and Q, P => Q is false only when P is True and Q is False.
Example usage of tt_entails()
Step19: P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
Step20: If we know that P | Q is true, we cannot infer the truth values of P and Q.
Hence (P | Q) => Q is False and so is (P | Q) => P.
Step21: We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.
Nothing can be said about B or C.
Coming back to our problem, note that tt_entails() takes an Expr which is a conjunction of clauses as the input instead of the KB itself.
You can use the ask_if_true() method of PropKB which does all the required conversions.
Let's check what wumpus_kb tells us about $P_{1, 1}$.
Step22: Looking at Figure 7.9 we see that in all models in which the knowledge base is True, $P_{1, 1}$ is False. It makes sense that ask_if_true() returns True for $\alpha = \neg P_{1, 1}$ and False for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is True in only a portion of all models. Do we return True or False? This doesn't rule out the possibility of $\alpha$ being True but it is not entailed by the KB so we return False in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
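For example, a sketch of those queries (assuming wumpus_kb and the pit symbols P11, P22, P31 defined earlier):
print(wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11))  # True, False
print(wumpus_kb.ask_if_true(P22), wumpus_kb.ask_if_true(~P22))  # False, False: undetermined
print(wumpus_kb.ask_if_true(P31), wumpus_kb.ask_if_true(~P31))  # False, False: undetermined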
Step23: Proof by Resolution
Recall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ <em>if and only if</em> $\text{KB} \land \neg \alpha$ is unsatisfiable".<br/>
This technique corresponds to <em>proof by <strong>contradiction</strong></em>, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, <strong>resolution</strong> which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying the resolution yields us a clause which we add to the KB. We keep doing this until
Step24: to_cnf calls three subroutines.
<br>
eliminate_implications converts bi-implications and implications to their logical equivalents.
<br>
move_not_inwards removes negations from compound statements and moves them inwards using De Morgan's laws.
<br>
distribute_and_over_or distributes disjunctions over conjunctions.
<br>
Run the cell below for implementation details.
Step25: Let's convert some sentences to see how it works
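A couple of quick illustrations, as a sketch (assuming to_cnf from logic.py; the outputs shown as comments are indicative and may differ in ordering):
print(to_cnf('~(B | C)'))           # (~B & ~C)
print(to_cnf('B <=> (P1 | P2)'))    # ((~P1 | B) & (~P2 | B) & (P1 | P2 | ~B))
print(to_cnf('(P & Q) | ~(R & S)')) # ((P | ~R | ~S) & (Q | ~R | ~S))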
Step26: Coming back to our resolution problem, we can see how the to_cnf function is utilized here
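The resolution procedure itself can then be queried much like ask_if_true; a sketch (assuming pl_resolution from logic.py):
print(pl_resolution(wumpus_kb, ~P11))  # True: entailed
print(pl_resolution(wumpus_kb, P11))   # False: not entailed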
Step27: Forward and backward chaining
Previously, we said we will look at two algorithms to check if a sentence is entailed by the KB. Here's a third one.
The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol q - the query.
There is a catch however - the knowledge base can only contain Horn clauses.
<br>
Horn Clauses
Horn clauses can be defined as a disjunction of literals with at most one positive literal.
<br>
A Horn clause with exactly one positive literal is called a definite clause.
<br>
A Horn clause might look like
<br>
$\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$
<br>
This, coincidentally, is also a definite clause.
<br>
Using De Morgan's laws, the example above can be simplified to
<br>
$a\land b\land c\land d ... \implies z$
<br>
This seems like a logical representation of how humans process known data and facts.
Assuming percepts a, b, c, d ... to be true simultaneously, we can infer z to also be true at that point in time.
There are some interesting aspects of Horn clauses that make algorithmic inference or resolution easier.
- Definite clauses can be written as implications
Step28: Let's now have a look at the pl_fc_entails algorithm.
Step29: The function accepts a knowledge base KB (an instance of PropDefiniteKB) and a query q as inputs.
<br>
<br>
count initially stores the number of symbols in the premise of each sentence in the knowledge base.
<br>
The conjuncts helper function separates a given sentence at conjunctions.
<br>
inferred is initialized as a boolean defaultdict.
This will be used later to check if we have inferred all premises of each clause of the agenda.
<br>
agenda initially stores a list of clauses that the knowledge base knows to be true.
The is_prop_symbol helper function checks if the given symbol is a valid propositional logic symbol.
<br>
<br>
We now iterate through agenda, popping a symbol p on each iteration.
If the query q is the same as p, we know that entailment holds.
<br>
The agenda is processed, reducing count by one for each implication with a premise p.
A conclusion is added to the agenda when count reaches zero. This means we know all the premises of that particular implication to be true.
<br>
clauses_with_premise is a helpful method of the PropKB class.
It returns a list of clauses in the knowledge base that have p in their premise.
<br>
<br>
Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
Step30: We will now tell this information to our knowledge base.
Step31: We can now check if our knowledge base entails the following queries.
Step32: Effective Propositional Model Checking
The previous segments elucidate the algorithmic procedure for model checking.
In this segment, we look at ways of making them computationally efficient.
<br>
The problem we are trying to solve is conventionally called the propositional satisfiability problem, abbreviated as the SAT problem.
In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.
<br>
The SAT problem was the first problem to be proven NP-complete.
The main characteristics of an NP-complete problem are
Step33: The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.
It recursively calls itself, simplifying the problem at each step. It also uses helper functions find_pure_symbol and find_unit_clause to carry out steps 2 and 3 above.
<br>
The dpll_satisfiable helper function converts the input clauses to conjunctive normal form and calls the dpll function with the correct parameters.
Step34: Let's see a few examples of usage.
Step35: This is a simple case to highlight that the algorithm actually works.
Step36: If a particular symbol isn't present in the solution,
it means that the solution is independent of the value of that symbol.
In this case, the solution is independent of A.
Step37: 2. WalkSAT algorithm
This algorithm is very similar to Hill climbing.
On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause.
This is similar to finding a neighboring state in the hill_climbing algorithm.
<br>
The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses.
Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long as a way of getting out of local minima of numbers of unsatisfied clauses.
<br>
<br>
Let's have a look at the algorithm.
Step38: The function takes three arguments
Step39: This is a simple case to show that the algorithm converges.
Step40: This one doesn't give any output because WalkSAT did not find any model where these clauses hold. We can solve these clauses to see that they together form a contradiction and hence, it isn't supposed to have a solution.
One point of difference between this algorithm and the dpll_satisfiable algorithms is that both these algorithms take inputs differently.
For WalkSAT to take complete sentences as input,
we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
Step41: Now we can call WalkSAT_CNF and DPLL_Satisfiable with the same arguments.
Step42: It works!
<br>
Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon.
If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.
<br>
<br>
Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the %%timeit magic to do this.
Step43: On an average, for solvable cases, WalkSAT is quite faster than dpll because, for a small number of variables,
WalkSAT can reduce the search space significantly.
Results can be different for sentences with more symbols though.
Feel free to play around with this to understand the trade-offs of these algorithms better.
SATPlan
In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps
Step44: Let's see few examples of its usage. First we define a transition and then call SAT_plan.
Step45: Let us do the same for another transition.
Step46: First-Order Logic Knowledge Bases
Step47: <em>“... it is a crime for an American to sell weapons to hostile nations”</em><br/>
The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.
Criminal(x)
Step48: <em>"The country Nono, an enemy of America"</em><br/>
We now know that Nono is an enemy of America. We represent these nations using the constant symbols Nono and America. The enemy relation is shown using the predicate symbol Enemy.
$\text{Enemy}(\text{Nono}, \text{America})$
Step49: <em>"Nono ... has some missiles"</em><br/>
This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant M1 which is the missile owned by Nono.
$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
Step50: <em>"All of its missiles were sold to it by Colonel West"</em><br/>
If Nono owns something and it classifies as a missile, then it was sold to Nono by West.
$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
Step51: <em>"West, who is American"</em><br/>
West is an American.
$\text{American}(\text{West})$
Step52: We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.
$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
Step53: Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
Step54: The subst helper function substitutes variables with given values in first-order logic statements.
This will be useful in later algorithms.
Its implementation is quite simple and self-explanatory.
Step55: Here's an example of how subst can be used.
Step56: Inference in First-Order Logic
In this section we look at a forward chaining and a backward chaining algorithm for FolKB. Both aforementioned algorithms rely on a process called <strong>unification</strong>, a key component of all first-order inference algorithms.
Unification
We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the unify algorithm. It takes as input two sentences and returns a <em>unifier</em> for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol var with a constant symbol Const is the mapping {var
Step57: In cases where there is no possible substitution that unifies the two sentences the function return None.
Step58: We also need to take care we do not unintentionally use the same variable name. Unify treats them as a single variable which prevents it from taking multiple value.
Step59: Forward Chaining Algorithm
We consider the simple forward-chaining algorithm presented in <em>Figure 9.3</em>. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premise with a clause in the KB. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the KB. This inferencing process is repeated until either the query can be answered or till no new sentences can be added. We test if the newly added clause unifies with the query in which case the substitution yielded by unify is an answer to the query. If we run out of sentences to infer, this means the query was a failure.
The function fol_fc_ask is a generator which yields all substitutions which validate the query.
Step60: Let's find out all the hostile nations. Note that we only told the KB that Nono was an enemy of America, not that it was hostile.
Step61: The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
Step62: <strong><em>Note</em>
Step63: AND
The <em>AND</em> corresponds to proving all the conjuncts in the lhs. We need to find a substitution which proves each <em>and</em> every clause in the list of conjuncts.
Step64: Now the main function fl_bc_ask calls fol_bc_or with substitution initialized as empty. The ask method of FolKB uses fol_bc_ask and fetches the first substitution returned by the generator to answer query. Let's query the knowledge base we created from clauses to find hostile nations.
Step65: You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the Unification section
Appendix
Step66: What is the funny |'==>'| syntax? The trick is that "|" is just the regular Python or-operator, and so is exactly equivalent to this
Step67: In other words, there are two applications of or-operators. Here's the first one
Step68: What is going on here is that the __or__ method of Expr serves a dual purpose. If the right-hand-side is another Expr (or a number), then the result is an Expr, as in (P | Q). But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled Expr, one where we know the left-hand-side is P and the operator is ==>, but we don't yet know the right-hand-side.
The PartialExpr class has an __or__ method that says to create an Expr node with the right-hand-side filled in. Here we can see the combination of the PartialExpr with Q to create a complete Expr
Step69: This trick is due to Ferdinand Jamitzky, with a modification by C. G. Vedant,
who suggested using a string inside the or-bars.
Appendix
Step70: is equivalent to doing
Step71: One thing to beware of
Step72: which is probably not what we meant; when in doubt, put in extra parens
Step73: Examples | Python Code:
from utils import *
from logic import *
from notebook import psource
Explanation: Logic
This Jupyter notebook acts as supporting material for topics covered in Chapter 6 Logical Agents, Chapter 7 First-Order Logic and Chapter 8 Inference in First-Order Logic of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the logic.py module. See the intro notebook for instructions.
Let's first import everything from the logic module.
End of explanation
Symbol('x')
Explanation: CONTENTS
Logical sentences
Expr
PropKB
Knowledge-based agents
Inference in propositional knowledge base
Truth table enumeration
Proof by resolution
Forward and backward chaining
DPLL
WalkSAT
SATPlan
FolKB
Inference in first order knowledge base
Unification
Forward chaining algorithm
Backward chaining algorithm
Logical Sentences
The Expr class is designed to represent any kind of mathematical expression. The simplest type of Expr is a symbol, which can be defined with the function Symbol:
End of explanation
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
Explanation: Or we can define multiple symbols at the same time with the function symbols:
End of explanation
P & ~Q
Explanation: We can combine Exprs with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
End of explanation
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
Explanation: This works because the Expr class overloads the & operator with this definition:
python
def __and__(self, other): return Expr('&', self, other)
and does similar overloads for the other operators. An Expr has two fields: op for the operator, which is always a string, and args for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of Expr, or a number. Let's take a look at the fields for some Expr examples:
End of explanation
3 * f(x, y) + P(y) / 2 + 1
Explanation: It is important to note that the Expr class does not define the logic of Propositional Logic sentences; it just gives you a way to represent expressions. Think of an Expr as an abstract syntax tree. Each of the args in an Expr can be either a symbol, a number, or a nested Expr. We can nest these trees to any depth. Here is a deeply nested Expr:
End of explanation
~(P & Q) |'==>'| (~P | ~Q)
Explanation: Operators for Constructing Logical Sentences
Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: |'==>'| instead of just ==>. Alternately, you can always use the more verbose Expr constructor forms:
| Operation | Book | Python Infix Input | Python Output | Python Expr Input
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | ~P | ~P | Expr('~', P)
| And | P ∧ Q | P & Q | P & Q | Expr('&', P, Q)
| Or | P ∨ Q | P<tt> | </tt>Q| P<tt> | </tt>Q | Expr('|', P, Q)
| Inequality (Xor) | P ≠ Q | P ^ Q | P ^ Q | Expr('^', P, Q)
| Implication | P → Q | P <tt>|</tt>'==>'<tt>|</tt> Q | P ==> Q | Expr('==>', P, Q)
| Reverse Implication | Q ← P | Q <tt>|</tt>'<=='<tt>|</tt> P |Q <== P | Expr('<==', Q, P)
| Equivalence | P ↔ Q | P <tt>|</tt>'<=>'<tt>|</tt> Q |P <=> Q | Expr('<=>', P, Q)
Here's an example of defining a sentence with an implication arrow:
End of explanation
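The two input styles in the table build the very same object; here is a quick added check (a small sketch, not from the original notebook, assuming Expr equality compares op and args as logic.py does):
(P |'==>'| Q) == Expr('==>', P, Q)   # True: the infix form and the explicit constructor build the same Expr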
expr('~(P & Q) ==> (~P | ~Q)')
Explanation: expr: a Shortcut for Constructing Sentences
If the |'==>'| notation looks ugly to you, you can use the function expr instead:
End of explanation
expr('sqrt(b ** 2 - 4 * a * c)')
Explanation: expr takes a string as input, and parses it into an Expr. The string can contain arrow operators: ==>, <==, or <=>, which are handled as if they were regular Python infix operators. And expr automatically defines any symbols, so you don't need to pre-define them:
End of explanation
wumpus_kb = PropKB()
Explanation: For now that's all you need to know about expr. If you are interested, we explain the messy details of how expr is implemented and how |'==>'| is handled in the appendix.
Propositional Knowledge Bases: PropKB
The class PropKB can be used to represent a knowledge base of propositional logic sentences.
We see that the class KB has four methods, apart from __init__. A point to note here: the ask method simply calls the ask_generator method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the ask_generator function and not the ask function itself.
Let's now go through the class PropKB itself.
* __init__(self, sentence=None) : The constructor __init__ creates a single field clauses which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and ors.
* tell(self, sentence) : When you want to add a sentence to the KB, you use the tell method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the clauses field. So, you need not worry about telling only clauses to the knowledge base. You can tell the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the tell method.
* ask_generator(self, query) : The ask_generator function is used by the ask function. It calls the tt_entails function, which in turn returns True if the knowledge base entails query and False otherwise. The ask_generator itself returns an empty dict {} if the knowledge base entails query and None otherwise. This might seem a little bit weird to you. After all, it makes more sense just to return a True or a False instead of the {} or None. But this is done to maintain consistency with the way things are in First-Order Logic, where an ask_generator function is supposed to return all the substitutions that make the query true. Hence the dict, to return all these substitutions. I will mostly be using the ask function which returns a {} or a False, but if you don't like this, you can always use the ask_if_true function which returns a True or a False.
* retract(self, sentence) : This function removes all the clauses of the sentence given, from the knowledge base. Like the tell function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.
Wumpus World KB
Let us create a PropKB for the wumpus world with the sentences mentioned in section 7.4.3.
End of explanation
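Before building the wumpus-world KB, here is a minimal added sketch of the tell / ask_if_true / retract cycle on a throwaway knowledge base; it assumes the PropKB implementation in logic.py behaves as described above, and the names used are purely illustrative.
tiny_kb = PropKB()
A, B = expr('A'), expr('B')
tiny_kb.tell(A |'==>'| B)        # stored internally as the clause (B | ~A)
tiny_kb.tell(A)
print(tiny_kb.ask_if_true(B))    # True: A together with A ==> B entails B
tiny_kb.retract(A)               # remove the clause A again
print(tiny_kb.ask_if_true(B))    # False: B is no longer entailed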
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
Explanation: We define the symbols we use in our clauses.<br/>
$P_{x, y}$ is true if there is a pit in [x, y].<br/>
$B_{x, y}$ is true if the agent senses breeze in [x, y].<br/>
End of explanation
wumpus_kb.tell(~P11)
Explanation: Now we tell sentences based on section 7.4.3.<br/>
There is no pit in [1,1].
End of explanation
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
Explanation: A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
End of explanation
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
Explanation: Now we include the breeze percepts for the first two squares leading up to the situation in Figure 7.3(b)
End of explanation
wumpus_kb.clauses
Explanation: We can check the clauses stored in a KB by accessing its clauses variable
End of explanation
psource(KB_AgentProgram)
Explanation: We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were inturn converted to CNF which is stored in the KB.<br/>
$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.<br/>
$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.<br/>
$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.<br/>
$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.
Knowledge based agents
A knowledge-based agent is a simple generic agent that maintains and handles a knowledge base.
The knowledge base may initially contain some background knowledge.
<br>
The purpose of a KB agent is to provide a level of abstraction over knowledge-base manipulation and is to be used as a base class for agents that work on a knowledge base.
<br>
Given a percept, the KB agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.
<br>
Our implementation of KB-Agent is encapsulated in a class KB_AgentProgram which inherits from the KB class.
<br>
Let's have a look.
End of explanation
psource(tt_check_all)
Explanation: The helper functions make_percept_sentence, make_action_query and make_action_sentence are all aptly named and as expected,
make_percept_sentence makes first-order logic sentences about percepts we want our agent to receive,
make_action_query asks the underlying KB about the action that should be taken and
make_action_sentence tells the underlying KB about the action it has just taken.
Inference in Propositional Knowledge Base
In this section we will look at two algorithms to check if a sentence is entailed by the KB. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.
Truth Table Enumeration
It is a model-checking approach which, as the name suggests, enumerates all possible models in which the KB is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the KB and enumerate the $2^{n}$ models in a depth-first manner and check the truth of KB and $\alpha$.
End of explanation
psource(tt_entails)
Explanation: The algorithm basically computes every line of the truth table $KB\implies \alpha$ and checks if it is true everywhere.
<br>
If symbols are defined, the routine recursively constructs every combination of truth values for the symbols and then,
it checks whether model is consistent with kb.
The given models correspond to the lines in the truth table,
which have a true in the KB column,
and for these lines it checks whether the query evaluates to true
<br>
result = pl_true(alpha, model).
<br>
<br>
In short, tt_check_all evaluates this logical expression for each model
<br>
pl_true(kb, model) => pl_true(alpha, model)
<br>
which holds in every model exactly when its negation
<br>
pl_true(kb, model) & ~pl_true(alpha, model)
<br>
is true in no model; that is, the knowledge base and the negation of the query are jointly unsatisfiable (logically inconsistent).
<br>
<br>
tt_entails() just extracts the symbols from the query and calls tt_check_all() with the proper parameters.
End of explanation
tt_entails(P & Q, Q)
Explanation: Keep in mind that for two symbols P and Q, P => Q is false only when P is True and Q is False.
Example usage of tt_entails():
End of explanation
tt_entails(P | Q, Q)
tt_entails(P | Q, P)
Explanation: P & Q is True only when both P and Q are True. Hence, (P & Q) => Q is True
End of explanation
(A, B, C, D, E, F, G) = symbols('A, B, C, D, E, F, G')
tt_entails(A & (B | C) & D & E & ~(F | G), A & D & E & ~F & ~G)
Explanation: If we know that P | Q is true, we cannot infer the truth values of P and Q.
Hence (P | Q) => Q is False and so is (P | Q) => P.
End of explanation
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
Explanation: We can see that for the KB to be true, A, D, E have to be True and F and G have to be False.
Nothing can be said about B or C.
Coming back to our problem, note that tt_entails() takes an Expr which is a conjunction of clauses as the input instead of the KB itself.
You can use the ask_if_true() method of PropKB which does all the required conversions.
Let's check what wumpus_kb tells us about $P_{1, 1}$.
End of explanation
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
Explanation: Looking at Figure 7.9 we see that in all models in which the knowledge base is True, $P_{1, 1}$ is False. It makes sense that ask_if_true() returns True for $\alpha = \neg P_{1, 1}$ and False for $\alpha = P_{1, 1}$. This begs the question, what if $\alpha$ is True in only a portion of all models. Do we return True or False? This doesn't rule out the possibility of $\alpha$ being True but it is not entailed by the KB so we return False in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
End of explanation
psource(to_cnf)
Explanation: Proof by Resolution
Recall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ <em>if and only if</em> $\text{KB} \land \neg \alpha$ is unsatisfiable".<br/>
This technique corresponds to <em>proof by <strong>contradiction</strong></em>, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, <strong>resolution</strong> which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying the resolution yields us a clause which we add to the KB. We keep doing this until:
There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
Two clauses resolve to yield the <em>empty clause</em>, in which case $\text{KB} \vDash \alpha$.
The <em>empty clause</em> is equivalent to <em>False</em> because it arises only from resolving two complementary
unit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be <em>True</em> at the same time.
There is one catch, however: the algorithm that implements proof by resolution cannot handle complex sentences.
Implications and bi-implications have to be simplified into simpler clauses.
We already know that every sentence of a propositional logic is logically equivalent to a conjunction of clauses.
We will use this fact to our advantage and simplify the input sentence into the conjunctive normal form (CNF) which is a conjunction of disjunctions of literals.
For example:
<br>
$$(A\lor B)\land (\neg B\lor C\lor\neg D)\land (D\lor\neg E)$$
This is equivalent to the POS (Product of sums) form in digital electronics.
<br>
Here's an outline of how the conversion is done:
1. Convert bi-implications to implications
<br>
$\alpha\iff\beta$ can be written as $(\alpha\implies\beta)\land(\beta\implies\alpha)$
<br>
This also applies to compound sentences
<br>
$\alpha\iff(\beta\lor\gamma)$ can be written as $(\alpha\implies(\beta\lor\gamma))\land((\beta\lor\gamma)\implies\alpha)$
<br>
2. Convert implications to their logical equivalents
<br>
$\alpha\implies\beta$ can be written as $\neg\alpha\lor\beta$
<br>
3. Move negation inwards
<br>
CNF requires atomic literals. Hence, negation cannot appear on a compound statement.
De Morgan's laws will be helpful here.
<br>
$\neg(\alpha\land\beta)\equiv(\neg\alpha\lor\neg\beta)$
<br>
$\neg(\alpha\lor\beta)\equiv(\neg\alpha\land\neg\beta)$
<br>
4. Distribute disjunction over conjunction
<br>
Disjunction and conjunction are distributive over each other.
Now that we only have conjunctions, disjunctions and negations in our expression,
we will distribute disjunctions over conjunctions wherever possible as this will give us a sentence which is a conjunction of simpler clauses,
which is what we wanted in the first place.
<br>
We need a term of the form
<br>
$(\alpha_{1}\lor\alpha_{2}\lor\alpha_{3}...)\land(\beta_{1}\lor\beta_{2}\lor\beta_{3}...)\land(\gamma_{1}\lor\gamma_{2}\lor\gamma_{3}...)\land...$
<br>
<br>
The to_cnf function executes this conversion using helper subroutines.
End of explanation
psource(eliminate_implications)
psource(move_not_inwards)
psource(distribute_and_over_or)
Explanation: to_cnf calls three subroutines.
<br>
eliminate_implications converts bi-implications and implications to their logical equivalents.
<br>
move_not_inwards removes negations from compound statements and moves them inwards using De Morgan's laws.
<br>
distribute_and_over_or distributes disjunctions over conjunctions.
<br>
Run the cell below for implementation details.
End of explanation
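Before the to_cnf examples, here is a small added sketch that applies the three subroutines one at a time, so you can watch the sentence change shape at each stage (it assumes the helper functions from logic.py shown above):
s = A |'<=>'| (B & C)
step1 = eliminate_implications(s)      # rewrite the biconditional using only &, | and ~
step2 = move_not_inwards(step1)        # push the negations down onto individual literals
step3 = distribute_and_over_or(step2)  # distribute | over & to reach CNF
print(step1, step2, step3, sep='\n')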
A, B, C, D = expr('A, B, C, D')
to_cnf(A |'<=>'| B)
to_cnf(A |'<=>'| (B & C))
to_cnf(A & (B | (C & D)))
to_cnf((A |'<=>'| ~B) |'==>'| (C | ~D))
Explanation: Let's convert some sentences to see how it works
End of explanation
psource(pl_resolution)
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
Explanation: Coming back to our resolution problem, we can see how the to_cnf function is utilized here
End of explanation
psource(PropDefiniteKB.clauses_with_premise)
Explanation: Forward and backward chaining
Previously, we said we will look at two algorithms to check if a sentence is entailed by the KB. Here's a third one.
The difference here is that our goal now is to determine if a knowledge base of definite clauses entails a single proposition symbol q - the query.
There is a catch however - the knowledge base can only contain Horn clauses.
<br>
Horn Clauses
Horn clauses can be defined as a disjunction of literals with at most one positive literal.
<br>
A Horn clause with exactly one positive literal is called a definite clause.
<br>
A Horn clause might look like
<br>
$\neg a\lor\neg b\lor\neg c\lor\neg d... \lor z$
<br>
This, coincidentally, is also a definite clause.
<br>
Using De Morgan's laws, the example above can be simplified to
<br>
$a\land b\land c\land d ... \implies z$
<br>
This seems like a logical representation of how humans process known data and facts.
Assuming percepts a, b, c, d ... to be true simultaneously, we can infer z to also be true at that point in time.
There are some interesting aspects of Horn clauses that make algorithmic inference or resolution easier.
- Definite clauses can be written as implications:
<br>
The most important simplification a definite clause provides is that it can be written as an implication.
The premise (or the knowledge that leads to the implication) is a conjunction of positive literals.
The conclusion (the implied statement) is also a positive literal.
The sentence thus becomes easier to understand.
The premise and the conclusion are conventionally called the body and the head respectively.
A single positive literal is called a fact.
- Forward chaining and backward chaining can be used for inference from Horn clauses:
<br>
Forward chaining is semantically identical to AND-OR-Graph-Search from the chapter on search algorithms.
Implementational details will be explained shortly.
- Deciding entailment with Horn clauses is linear in size of the knowledge base:
<br>
Surprisingly, the forward and backward chaining algorithms traverse each element of the knowledge base at most once, greatly simplifying the problem.
<br>
<br>
The function pl_fc_entails implements forward chaining to see if a knowledge base KB entails a symbol q.
<br>
Before we proceed further, note that pl_fc_entails doesn't use an ordinary KB instance.
The knowledge base here is an instance of the PropDefiniteKB class, derived from the PropKB class,
but modified to store definite clauses.
<br>
The main point of difference arises in the inclusion of a helper method to PropDefiniteKB that returns a list of clauses in KB that have a given symbol p in their premise.
End of explanation
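As a tiny added illustration of the helper just described (assuming PropDefiniteKB from logic.py), the snippet below builds a two-rule definite knowledge base and asks which rules mention a given symbol in their premise:
toy_kb = PropDefiniteKB()
toy_kb.tell(expr('(A & B) ==> C'))
toy_kb.tell(expr('(B & D) ==> E'))
toy_kb.clauses_with_premise(expr('B'))   # both rules list B in their premise, so both are returned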
psource(pl_fc_entails)
Explanation: Let's now have a look at the pl_fc_entails algorithm.
End of explanation
clauses = ['(B & F)==>E',
'(A & E & F)==>G',
'(B & C)==>F',
'(A & B)==>D',
'(E & F)==>H',
'(H & I)==>J',
'A',
'B',
'C']
Explanation: The function accepts a knowledge base KB (an instance of PropDefiniteKB) and a query q as inputs.
<br>
<br>
count initially stores the number of symbols in the premise of each sentence in the knowledge base.
<br>
The conjuncts helper function separates a given sentence at conjunctions.
<br>
inferred is initialized as a boolean defaultdict.
This will be used later to check if we have inferred all premises of each clause of the agenda.
<br>
agenda initially stores a list of clauses that the knowledge base knows to be true.
The is_prop_symbol helper function checks if the given symbol is a valid propositional logic symbol.
<br>
<br>
We now iterate through agenda, popping a symbol p on each iteration.
If the query q is the same as p, we know that entailment holds.
<br>
The agenda is processed, reducing count by one for each implication with a premise p.
A conclusion is added to the agenda when count reaches zero. This means we know all the premises of that particular implication to be true.
<br>
clauses_with_premise is a helpful method of the PropKB class.
It returns a list of clauses in the knowledge base that have p in their premise.
<br>
<br>
Now that we have an idea of how this function works, let's see a few examples of its usage, but we first need to define our knowledge base. We assume we know the following clauses to be true.
End of explanation
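Since the walkthrough above leans on the conjuncts helper, here is a one-line added illustration (conjuncts comes in with the earlier from logic import *):
conjuncts(expr('A & B & C'))   # [A, B, C]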
definite_clauses_KB = PropDefiniteKB()
for clause in clauses:
definite_clauses_KB.tell(expr(clause))
Explanation: We will now tell this information to our knowledge base.
End of explanation
pl_fc_entails(definite_clauses_KB, expr('G'))
pl_fc_entails(definite_clauses_KB, expr('H'))
pl_fc_entails(definite_clauses_KB, expr('I'))
pl_fc_entails(definite_clauses_KB, expr('J'))
Explanation: We can now check if our knowledge base entails the following queries.
End of explanation
psource(dpll)
Explanation: Effective Propositional Model Checking
The previous segments elucidate the algorithmic procedure for model checking.
In this segment, we look at ways of making them computationally efficient.
<br>
The problem we are trying to solve is conventionally called the propositional satisfiability problem, abbreviated as the SAT problem.
In layman's terms, if there exists a model that satisfies a given Boolean formula, the formula is called satisfiable.
<br>
The SAT problem was the first problem to be proven NP-complete.
The main characteristics of an NP-complete problem are:
- Given a solution to such a problem, it is easy to verify if the solution solves the problem.
- The time required to actually solve the problem using any known algorithm increases exponentially with respect to the size of the problem.
<br>
<br>
Due to these properties, heuristic and approximational methods are often applied to find solutions to these problems.
<br>
It is extremely important to be able to solve large scale SAT problems efficiently because
many combinatorial problems in computer science can be conveniently reduced to checking the satisfiability of a propositional sentence under some constraints.
<br>
We will introduce two new algorithms that perform propositional model checking in a computationally effective way.
<br>
1. DPLL (Davis-Putnam-Logeman-Loveland) algorithm
This algorithm is very similar to Backtracking-Search.
It recursively enumerates possible models in a depth-first fashion with the following improvements over algorithms like tt_entails:
1. Early termination:
<br>
In certain cases, the algorithm can detect the truth value of a statement using just a partially completed model.
For example, $(P\lor Q)\land(P\lor R)$ is true if P is true, regardless of other variables.
This reduces the search space significantly.
2. Pure symbol heuristic:
<br>
A symbol that has the same sign (positive or negative) in all clauses is called a pure symbol.
It isn't difficult to see that any satisfiable model will have the pure symbols assigned such that its parent clause becomes true.
For example, $(P\lor\neg Q)\land(\neg Q\lor\neg R)\land(R\lor P)$ has P and Q as pure symbols
and assigning P true and Q false can never turn a satisfiable sentence unsatisfiable, so those assignments can be made immediately without branching.
The pure symbol heuristic thus simplifies the problem a bit.
3. Unit clause heuristic:
<br>
In the context of DPLL, clauses with just one literal and clauses with all but one false literals are called unit clauses.
If a clause is a unit clause, it can only be satisfied by assigning the necessary value to make the last literal true.
We have no other choice.
<br>
Assigning one unit clause can create another unit clause.
For example, when P is false, $(P\lor Q)$ becomes a unit clause, causing true to be assigned to Q.
A series of forced assignments derived from previous unit clauses is called unit propagation.
In this way, this heuristic simplifies the problem further.
<br>
The algorithm often employs other tricks to scale up to large problems.
However, these tricks are currently out of the scope of this notebook. Refer to section 7.6 of the book for more details.
<br>
<br>
Let's have a look at the algorithm.
End of explanation
psource(dpll_satisfiable)
Explanation: The algorithm uses the ideas described above to check satisfiability of a sentence in propositional logic.
It recursively calls itself, simplifying the problem at each step. It also uses helper functions find_pure_symbol and find_unit_clause to carry out steps 2 and 3 above.
<br>
The dpll_satisfiable helper function converts the input clauses to conjunctive normal form and calls the dpll function with the correct parameters.
End of explanation
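As a quick added sanity check of the pure-symbol discussion above, we can hand that exact sentence to dpll_satisfiable; with the implementation shown, P is fixed to True and Q to False, and R never appears in the returned model because it is not needed:
P, Q, R = expr('P, Q, R')
dpll_satisfiable((P | ~Q) & (~Q | ~R) & (R | P))   # expected: {P: True, Q: False}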
A, B, C, D = expr('A, B, C, D')
dpll_satisfiable(A & B & ~C & D)
Explanation: Let's see a few examples of usage.
End of explanation
dpll_satisfiable((A & B) | (C & ~A) | (B & ~D))
Explanation: This is a simple case to highlight that the algorithm actually works.
End of explanation
dpll_satisfiable(A |'<=>'| B)
dpll_satisfiable((A |'<=>'| B) |'==>'| (C & ~A))
dpll_satisfiable((A | (B & C)) |'<=>'| ((A | B) & (A | C)))
Explanation: If a particular symbol isn't present in the solution,
it means that the solution is independent of the value of that symbol.
In this case, the solution is independent of A.
End of explanation
psource(WalkSAT)
Explanation: 2. WalkSAT algorithm
This algorithm is very similar to Hill climbing.
On every iteration, the algorithm picks an unsatisfied clause and flips a symbol in the clause.
This is similar to finding a neighboring state in the hill_climbing algorithm.
<br>
The symbol to be flipped is decided by an evaluation function that counts the number of unsatisfied clauses.
Sometimes, symbols are also flipped randomly to avoid local optima. A subtle balance between greediness and randomness is required. Alternatively, some versions of the algorithm restart with a completely new random assignment if no solution has been found for too long as a way of getting out of local minima of numbers of unsatisfied clauses.
<br>
<br>
Let's have a look at the algorithm.
End of explanation
A, B, C, D = expr('A, B, C, D')
WalkSAT([A, B, ~C, D], 0.5, 100)
Explanation: The function takes three arguments:
<br>
1. The clauses we want to satisfy.
<br>
2. The probability p of randomly changing a symbol.
<br>
3. The maximum number of flips (max_flips) the algorithm will run for. If the clauses are still unsatisfied, the algorithm returns None to denote failure.
<br>
The algorithm is identical in concept to Hill climbing and the code isn't difficult to understand.
<br>
<br>
Let's see a few examples of usage.
End of explanation
WalkSAT([A & B, A & C], 0.5, 100)
WalkSAT([A & B, C & D, C & B], 0.5, 100)
WalkSAT([A & B, C | D, ~(D | B)], 0.5, 1000)
Explanation: This is a simple case to show that the algorithm converges.
End of explanation
def WalkSAT_CNF(sentence, p=0.5, max_flips=10000):
    return WalkSAT(conjuncts(to_cnf(sentence)), p, max_flips)
Explanation: This one doesn't give any output because WalkSAT did not find any model where these clauses hold. We can solve these clauses to see that they together form a contradiction and hence, it isn't supposed to have a solution.
One point of difference between this algorithm and dpll_satisfiable is that they take their inputs in different forms.
For WalkSAT to take complete sentences as input,
we can write a helper function that converts the input sentence into conjunctive normal form and then calls WalkSAT with the list of conjuncts of the CNF form of the sentence.
End of explanation
WalkSAT_CNF((A & B) | (C & ~A) | (B & ~D), 0.5, 1000)
Explanation: Now we can call WalkSAT_CNF and DPLL_Satisfiable with the same arguments.
End of explanation
sentence_1 = A |'<=>'| B
sentence_2 = (A & B) | (C & ~A) | (B & ~D)
sentence_3 = (A | (B & C)) |'<=>'| ((A | B) & (A | C))
%%timeit
dpll_satisfiable(sentence_1)
dpll_satisfiable(sentence_2)
dpll_satisfiable(sentence_3)
%%timeit
WalkSAT_CNF(sentence_1)
WalkSAT_CNF(sentence_2)
WalkSAT_CNF(sentence_3)
Explanation: It works!
<br>
Notice that the solution generated by WalkSAT doesn't omit variables that the sentence doesn't depend upon.
If the sentence is independent of a particular variable, the solution contains a random value for that variable because of the stochastic nature of the algorithm.
<br>
<br>
Let's compare the runtime of WalkSAT and DPLL for a few cases. We will use the %%timeit magic to do this.
End of explanation
psource(SAT_plan)
Explanation: On average, for solvable cases, WalkSAT is quite a bit faster than dpll because, for a small number of variables,
WalkSAT can reduce the search space significantly.
Results can be different for sentences with more symbols though.
Feel free to play around with this to understand the trade-offs of these algorithms better.
SATPlan
In this section we show how to make plans by logical inference. The basic idea is very simple. It includes the following three steps:
1. Construct a sentence that includes:
1. A collection of assertions about the initial state.
2. The successor-state axioms for all the possible actions at each time up to some maximum time t.
3. The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true. Together they represent a plan to achieve the goals.
Let's have a look at the algorithm.
End of explanation
transition = {'A': {'Left': 'A', 'Right': 'B'},
'B': {'Left': 'A', 'Right': 'C'},
'C': {'Left': 'B', 'Right': 'C'}}
print(SAT_plan('A', transition, 'C', 2))
print(SAT_plan('A', transition, 'B', 3))
print(SAT_plan('C', transition, 'A', 3))
Explanation: Let's see few examples of its usage. First we define a transition and then call SAT_plan.
End of explanation
transition = {(0, 0): {'Right': (0, 1), 'Down': (1, 0)},
(0, 1): {'Left': (1, 0), 'Down': (1, 1)},
(1, 0): {'Right': (1, 0), 'Up': (1, 0), 'Left': (1, 0), 'Down': (1, 0)},
(1, 1): {'Left': (1, 0), 'Up': (0, 1)}}
print(SAT_plan((0, 0), transition, (1, 1), 4))
Explanation: Let us do the same for another transition.
End of explanation
clauses = []
Explanation: First-Order Logic Knowledge Bases: FolKB
The class FolKB can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for PropKB except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.
Criminal KB
In this section we create a FolKB based on the following paragraph.<br/>
<em>The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.</em><br/>
The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named clauses.
End of explanation
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
Explanation: <em>“... it is a crime for an American to sell weapons to hostile nations”</em><br/>
The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.
Criminal(x): x is a criminal
American(x): x is an American
Sells(x ,y, z): x sells y to z
Weapon(x): x is a weapon
Hostile(x): x is a hostile nation
Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal x is also the American x who sells weapon y to z, which is a hostile nation.
$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
End of explanation
clauses.append(expr("Enemy(Nono, America)"))
Explanation: <em>"The country Nono, an enemy of America"</em><br/>
We now know that Nono is an enemy of America. We represent these nations using the constant symbols Nono and America. The enemy relation is shown using the predicate symbol Enemy.
$\text{Enemy}(\text{Nono}, \text{America})$
End of explanation
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
Explanation: <em>"Nono ... has some missiles"</em><br/>
This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant M1 which is the missile owned by Nono.
$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
End of explanation
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
Explanation: <em>"All of its missiles were sold to it by Colonel West"</em><br/>
If Nono owns something and it classifies as a missile, then it was sold to Nono by West.
$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
End of explanation
clauses.append(expr("American(West)"))
Explanation: <em>"West, who is American"</em><br/>
West is an American.
$\text{American}(\text{West})$
End of explanation
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
Explanation: We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.
$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
End of explanation
crime_kb = FolKB(clauses)
Explanation: Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
End of explanation
psource(subst)
Explanation: The subst helper function substitutes variables with given values in first-order logic statements.
This will be useful in later algorithms.
Its implementation is quite simple and self-explanatory.
End of explanation
subst({x: expr('Nono'), y: expr('M1')}, expr('Owns(x, y)'))
Explanation: Here's an example of how subst can be used.
End of explanation
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
Explanation: Inference in First-Order Logic
In this section we look at a forward chaining and a backward chaining algorithm for FolKB. Both aforementioned algorithms rely on a process called <strong>unification</strong>, a key component of all first-order inference algorithms.
Unification
We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the unify algorithm. It takes as input two sentences and returns a <em>unifier</em> for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol var with a constant symbol Const is the mapping {var: Const}. Let's look at a few examples.
End of explanation
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
Explanation: In cases where there is no possible substitution that unifies the two sentences the function return None.
End of explanation
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
Explanation: We also need to take care that we do not unintentionally use the same variable name in both sentences. unify treats them as a single variable, which prevents it from taking multiple values.
End of explanation
psource(fol_fc_ask)
Explanation: Forward Chaining Algorithm
We consider the simple forward-chaining algorithm presented in <em>Figure 9.3</em>. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premise with a clause in the KB. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the KB. This inferencing process is repeated until either the query can be answered or till no new sentences can be added. We test if the newly added clause unifies with the query in which case the substitution yielded by unify is an answer to the query. If we run out of sentences to infer, this means the query was a failure.
The function fol_fc_ask is a generator which yields all substitutions which validate the query.
End of explanation
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
Explanation: Let's find out all the hostile nations. Note that we only told the KB that Nono was an enemy of America, not that it was hostile.
End of explanation
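The classic query for this knowledge base is, of course, who the criminal is; the added one-liner below runs the same generator for that query (keep in mind, as noted further down, that fol_fc_ask adds its inferences to the KB as it goes):
print(list(fol_fc_ask(crime_kb, expr('Criminal(x)'))))   # expected: a single substitution binding x to West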
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
Explanation: The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
End of explanation
psource(fol_bc_or)
Explanation: <strong><em>Note</em>:</strong> fol_fc_ask makes changes to the KB by adding sentences to it.
Backward Chaining Algorithm
This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose goal is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the KB and try to prove lhs. There may be multiple clauses in the KB which give multiple lhs. It is sufficient to prove only one of these. But to prove a lhs all the conjuncts in the lhs of the clause must be proved. This makes it similar to <em>And/Or</em> search.
OR
The <em>OR</em> part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules's lhs whose rhs unify with the goal, we yield a substitution which proves all the conjuncts in the lhs. We use parse_definite_clause to attain lhs and rhs from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the lhs is an empty list.
End of explanation
psource(fol_bc_and)
Explanation: AND
The <em>AND</em> corresponds to proving all the conjuncts in the lhs. We need to find a substitution which proves each <em>and</em> every clause in the list of conjuncts.
End of explanation
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
Explanation: Now the main function fol_bc_ask calls fol_bc_or with the substitution initialized as empty. The ask method of FolKB uses fol_bc_ask and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from clauses to find hostile nations.
End of explanation
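Backward chaining can answer the same classic question without adding anything to the knowledge base; this added query asks for the criminal directly and should bind x to West (possibly alongside some standardized helper variables, as discussed next):
crime_kb.ask(expr('Criminal(x)'))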
P |'==>'| ~Q
Explanation: You may notice some new variables in the substitution. They are introduced to standardize the variable names and prevent naming problems, as discussed in the Unification section.
Appendix: The Implementation of |'==>'|
Consider the Expr formed by this syntax:
End of explanation
(P | '==>') | ~Q
Explanation: What is the funny |'==>'| syntax? The trick is that "|" is just the regular Python or-operator, and so is exactly equivalent to this:
End of explanation
P | '==>'
Explanation: In other words, there are two applications of or-operators. Here's the first one:
End of explanation
partial = PartialExpr('==>', P)
partial | ~Q
Explanation: What is going on here is that the __or__ method of Expr serves a dual purpose. If the right-hand-side is another Expr (or a number), then the result is an Expr, as in (P | Q). But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled Expr, one where we know the left-hand-side is P and the operator is ==>, but we don't yet know the right-hand-side.
The PartialExpr class has an __or__ method that says to create an Expr node with the right-hand-side filled in. Here we can see the combination of the PartialExpr with Q to create a complete Expr:
End of explanation
expr('~(P & Q) ==> (~P | ~Q)')
Explanation: This trick is due to Ferdinand Jamitzky, with a modification by C. G. Vedant,
who suggested using a string inside the or-bars.
Appendix: The Implementation of expr
How does expr parse a string into an Expr? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
We do a string substitution, replacing "==>" with "|'==>'|" (and likewise for other operators).
We eval the resulting string in an environment in which every identifier
is bound to a symbol with that identifier as the op.
In other words,
End of explanation
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
Explanation: is equivalent to doing:
End of explanation
P & Q |'==>'| P | Q
Explanation: One thing to beware of: this puts ==> at the same precedence level as "|", which is not quite right. For example, we get this:
End of explanation
(P & Q) |'==>'| (P | Q)
Explanation: which is probably not what we meant; when in doubt, put in extra parens:
End of explanation
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
Explanation: Examples
End of explanation |
12,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, a note: special methods exist to be called by the Python interpreter, not by us. That is, you never write my_object.__len__(); you use len(my_object) instead. In general, going through the built-in functions (len, iter, str, and so on) is the best way to invoke special methods: the built-ins not only call the special method, they often provide extra benefits, and for built-in types they are faster.
Implementing a vector class
Through special methods we can customize the '+' operation for our own objects. Later chapters cover this in detail; here we only demonstrate how special methods are used.
We want to implement a simple two-dimensional vector class that supports addition, absolute value, and scalar multiplication.
Step1: __repr__
It renders an object in string form.
Step2: __repr__() expresses an object as a string; this is the object's string representation. Its return value should describe, as precisely as possible, the expression that would recreate the object. Compare it with __str__(): __str__() is invoked by the str() function, is used by print(), and the string it returns should be friendlier to the end user. If you only want to implement one of the two methods, __repr__() is the better choice, because when an object has no __str__() and Python needs one, the interpreter falls back to __repr__().
Step3: Arithmetic operators
+ and * call __add__ and __mul__ respectively. Note that we do not modify self.x or self.y here; we return a new instance instead. This is the basic rule for infix operators: do not change the operands.
Step4: Note that for now we can only multiply a Vector by a number, not a number by a Vector. A later chapter makes this multiplication commutative, using __rmul__() to solve the problem.
Boolean value of a custom type
In if and while statements, or in and, or, and not operations, Python calls bool() to decide whether a value is true or false; behind the scenes this calls __bool__(), which always returns either True or False.
By default, instances of user-defined classes are considered True, unless the class provides its own __bool__() or __len__(). If there is no __bool__, Python tries __len__: when __len__() returns 0 the instance is False, otherwise it is True.
Our __bool__ logic is simple: check whether the vector's magnitude is 0.
If you want Vector.__bool__ to be more efficient, you can use this implementation instead
from math import hypot
class Vector:
def __init__(self, x = 0, y = 0):
self.x = x
self.y = y
def __repr__(self):
        # using %r gives the standard string representation of each attribute; this is a good habit and highlights a key point: Vector(1, 2) and Vector('1', '2') are not the same thing
        # the latter would store strings, which breaks as soon as any arithmetic is attempted, because this class is meant to hold numeric values, not strings
return "Vector(%r, %r)" % (self.x, self.y)
def __abs__(self):
        return hypot(self.x, self.y)  # returns the Euclidean norm sqrt(x*x + y*y)
def __bool__(self):
return bool(abs(self))
def __add__(self, other):
x = self.x + other.x
y = self.y + other.y
return Vector(x, y)
def __mul__(self, scalar):
return Vector(self.x * scalar, self.y * scalar)
Explanation: First, a note: special methods exist to be called by the Python interpreter, not by us. That is, you never write my_object.__len__(); you use len(my_object) instead. In general, going through the built-in functions (len, iter, str, and so on) is the best way to invoke special methods: the built-ins not only call the special method, they often provide extra benefits, and for built-in types they are faster.
Implementing a vector class
Through special methods we can customize the '+' operation for our own objects. Later chapters cover this in detail; here we only demonstrate how special methods are used.
We want to implement a simple two-dimensional vector class that supports addition, absolute value, and scalar multiplication.
End of explanation
test = Vector()
test  # if the __repr__() method were commented out, this would display something like <__main__.Vector at 0x7f587c4c1320>
Explanation: __repr__
It renders an object in string form.
End of explanation
str(test)
Explanation: __repr__() expresses an object as a string; this is its string representation. It should be precise enough to reproduce the object that created it. Compare it with __str__(), which is called by the str() function and used by print(), and whose return value should be friendlier to end users. If you only implement one of the two, __repr__() is the better choice, because when an object has no __str__() and Python needs one, the interpreter falls back to __repr__().
End of explanation
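# A quick check of the fallback described above: Vector defines no __str__,
# so both str() and print() fall back to __repr__ here.
print(test)                      # Vector(0, 0)
print(str(test) == repr(test))   # True for this class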
v1 = Vector(2, 4)
v2 = Vector(2, 1)
v1 + v2
v1 * 3
Explanation: Arithmetic operators
+ and * call __add__ and __mul__ respectively. Note that we do not modify self.x or self.y; we return a new instance instead. That is the basic rule for infix operators: do not change the operands.
End of explanation
#def __bool__(self):
# return bool(self.x or self.y)
Explanation: Note that for now we can only multiply a vector by a number, not a number by a vector. A later chapter makes this multiplication commutative by implementing __rmul__().
Boolean value of a custom type
In statements such as if and while, or with the operators and, or and not, Python calls bool() to decide whether a value is true or false; behind the scenes this calls __bool__(), which always returns True or False.
By default, instances of user-defined classes are truthy, unless the class implements __bool__() or __len__(). Without __bool__, Python tries __len__; if __len__() returns 0 the object is falsy, otherwise truthy.
Our __bool__ logic is simple: check whether the vector's magnitude is 0.
If you want Vector.__bool__ to be more efficient, you can use this implementation
End of explanation |
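# A quick, hedged illustration of the __rmul__ fix mentioned above (the book
# implements it properly in a later chapter): reusing __mul__ makes scalar
# multiplication commutative.
Vector.__rmul__ = Vector.__mul__
3 * v1  # Vector(6, 12)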
12,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estructuras de datos
Python posee además de los tipos de datos básicos, otros tipos de datos más complejos. Se trata de las tuplas, las listas y los diccionarios.
Estos tres tipos, pueden almacenar colecciones de datos de diversos tipos y se diferencian por su sintaxis y por la forma en la cual los datos pueden ser manipulados.
<br />
SECUENCIAS
Step1: <br />
ACCESO A LOS ELEMENTOS DE UNA SECUENCIA
Los elementos de las secuencias pueden ser accedidos mediante el uso de corchetes [ ], como en otros lenguajes de programación.
Podemos indexar las secuencias utilizando la sintaxis [<inicio>
Step2: Los elementos de las secuencias, tanto listas como tuplas, son hetereogéneos, así que es posible definir listas que contienen valores númericos y cadenas de caracteres, así como otras listas
Step3: En el caso de las listas, podemos modificar los datos almacenados
Step4: Se pueden concatenar listas o tuplas mediante el operador +
Step5: <BR />
DESEMPAQUETAR TUPLAS
En muchos casos es interesante asignar nombre a los elementos de las tuplas para, posteriormente, trabajar con esas variables.
Step6: Mostramos otro ejemplo con tuplas anidadas
Step7: <BR />
LISTAS
Step8: <br />
AÑADIR Y ELEMINAR ELEMENTOS DE UNA LISTA
Aunque se pueden añadir elementos como hemos visto antes, la forma más eficiente es mediante el método append, que añade elementos al final de la lista. Otra forma de añadir elementos es mediante el método insert, que inserta un elemento en una determinada posición.
Step9: La operación pop permite eliminar el elemento de la lista que ocupa una determinada posición.
Step10: Pero puede darse el caso de que necesitemos eliminar un elemento de la lista y que no conozcamos la posición que ocupa. En esos casos utilizaremos el método remove.
Step11: <BR />
ORDENAR LISTAS
El método sort permite ordenar una lista sin necesidad de crear una lista nueva, por lo que la operación es muy eficiente.
Step12: <BR />
GENERAR LISTAS
Python 2.7 proporciona la función predefinida range(inicio, fin, paso) para generar listas automáticamente. En Python 3.5 no se genera una lista; se genera un objeto iterable.
Step13: <BR />
SECUENCIAS DE CARACTERES STR
Las cadenas son consideradas como una secuencia de caracteres y por tanto pueden ser tratadas como otras secuencias (tuplas o listas). Podemos acceder a cada uno de los caracteres de una cadena | Python Code:
# Ejemplo de lista, los valores van entre corchetes
una_lista = [4, "Hola", 6.0, 99 ]
# Ejemplo de tupla, los valores van entre paréntesis
una_tupla = (4, "Hola", 6.0, 99)
print ("Lista: " , una_lista)
print ("Tupla: " , una_tupla)
# Las tuplas y las listas aceptan operadores de comparación y devuelven un booleano
print (una_lista == una_tupla)
# una_lista = [4, "Hola", 6.0, 99 ]
# una_tupla = (4, "Hola", 6.0, 99)
# Uso del operador IN
4 in una_lista, 5 not in una_tupla
# Uso de la función LEN, que nos dice cuantos elementos componen una lista o una tupla
len(una_lista), len(una_tupla)
# Uso de la funcion SORTED para ordenar los elementos de una lista/tupla
# Al ordenar la lista no la modifica, se crea una nueva; de ahí que tengamos que definir una variable para ejecutar la función
lista = [ 5, 6, 7, 1, 4, 2, 9 ]
otra = sorted(lista)
otra
Explanation: Data structures
Besides the basic data types, Python provides other, more complex types: tuples, lists, and dictionaries.
These three types can store collections of data of different types, and they differ in their syntax and in how the data can be manipulated.
SEQUENCES: TUPLES AND LISTS
Both tuples and lists are ordered collections of elements.
A tuple is a variable that stores immutable data (it cannot be modified once created) of different types. Tuples are written between parentheses.
They have a fixed length.
They have a single dimension (*).
A list is similar to a tuple, with the fundamental difference that its data can be modified after creation. Lists are written between square brackets and can also have multiple dimensions.
End of explanation
lista = [5, 3, 1, 6, 99]
print(lista)
# Mostrar el primer elemento
print(lista[0])
# Mostrar el tercer elemento
print(lista[2])
# Uso de la sintaxis [<inicio>:<final>:<salto>]
lista = [2, 9, 6, 4, 3, 71, 1, 32, 534, 325, 2, 6, 9, 0]
print(lista)
# Muestra 3 elementos, comenzando por el elemento situado en la posición 3 y
# terminando en el de la posición 6, con saltos de 1 elementos
# [3,6) es como se expresa en notación científica, donde el corchete incluye y el paréntesis excluye
print(lista[3:6:1])
# Muestra todos los elementos, desde el primero hasta el último, con saltos de 2 elementos
print(lista[::2])
# Se puede acceder a los elementos de una secuencia de manera inversa
print ( lista[-1] ) # El último elemento
print ( lista[-2] ) # El penúltimo elemento
Explanation: ACCESSING THE ELEMENTS OF A SEQUENCE
The elements of sequences can be accessed using square brackets [ ], as in other programming languages.
We can index sequences with the syntax [<start>:<end>:<step>].
In Python, indexing starts at ZERO.
End of explanation
# Lista con valores heterogéneos
lista = ["A", 26, "lista", 9, -62]
lista2 = [1, 2, 3, 4]
lista3 = ["A", "B", "C", "D"]
# Lista que incluye otra lista, así como otros valores
lista4 = [lista2, [6, 98, "Abc"]]
print(lista4)
# Acceder a los elementos de una lista que a su vez está incluida en otra lista (una matriz o array)
# [acceso a la lista][acceso a la posición de dicha lista]
d = lista4[1][2]
print(d)
# Lista simple o de 1 dimensión
matriz1 = [
1, 2, 3,
4, 5, 6,
7, 8, 9,
]
# Acceso al elemento 3 de la lista
print("Valor A: ", matriz1[3])
# Lista de 3 dimensiones
matriz2 = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
# Acceso al elemento 1 de la segunda lista
print("Valor B: ", matriz2[2][0])
Explanation: The elements of sequences, both lists and tuples, are heterogeneous, so it is possible to define lists that contain numeric values and character strings, as well as other lists:
End of explanation
lista = [1, 2, 3, 4]
print ( "Antes: ", lista)
lista[0] = 0.0
print ("Después: ", lista)
print(matriz2)
matriz2[1] = [0,0,0]
matriz2
# Con esta instrucción "matriz2.*?" nos muestra todas las posibilidades existentes:
# matriz2.append
# matriz2.clear
# matriz2.copy
# matriz2.count
# matriz2.extend
# matriz2.index
# matriz2.insert
# matriz2.pop
# matriz2.remove
# matriz2.reverse
# matriz2.sort
# Por ejemplo para añadir una lista a otra lista mediante la instrucción append.
# Se puede añadir sólo un valor, no es necesario añadir una lista entera
matriz = [[1,2,3],[4,5,6],[7,8,9]]
matriz.append(['A','B','C'])
print(matriz)
matriz.append(5)
matriz
# Para quitar un elemento de la lista, se usa la instrucción remove
matriz.remove(5)
matriz
Explanation: In the case of lists, we can modify the stored data:
End of explanation
# Ejemplo de concatenación de dos tuplas
# En el primer ejemplo se crea una tupla con dos tuplas concatenadas, definiendolas en la misma línea
tupla = (1,2,3) + ('a','b','c')
print(tupla)
# Creación de una tupla a partir de dos tuplas definidas previamente
tupla1 = (6,9,7,'abc')
tupla2 = (1,6,0,'def')
tupla_f = tupla1 + tupla2
tupla_f
# Se pueden utilizar operadores matemáticos con las listas o tuplas
# A destacar que el uso de los operadores + y * no modifican las secuencias originales, sino que crean nuevas secuencias
tupla_m = tupla1 * 4
tupla_m
Explanation: Lists and tuples can be concatenated with the + operator
End of explanation
# Se define una tupla llamada "laborales"
laborales = (1, 2, 3, 4, 5 )
# Se define una variable llamada "laborales" con una serie de valores para dicha tupla
# La definición de la variable puede hacerse antes de los valores o despues de los valores
lunes, martes, miercoles, jueves, viernes = laborales
# Si preguntamos por un valor de dicha variable nos devolverá su correspondencia dentro de la tupla
martes
Explanation: UNPACKING TUPLES
In many cases it is useful to give names to the elements of a tuple and then work with those variables.
End of explanation
# Creamos una tupla llamada "dias" anidando dos tuplas, una ya existente ("laborales") y otra que creamos al anidar (6,7)
dias = laborales, (6, 7)
# El resultado es una tupla de tuplas
dias
# Crea la variable "dias" a partir de la variable "laborales", añadiendo dos valores nuevos (sabado, domingo)
laborales, (sabado, domingo) = dias
sabado
Explanation: We show another example with nested tuples:
End of explanation
# Definimos una tupla
tupla = (3, 4, 5)
# Mediante la función "list" creamos una lista a partir de la tupla anterior
lista = list(tupla)
lista
# Se puede modificar una lista especificando la posición a modificar y un valor
lista[0] = None # valor nulo en Python
lista
lista[0] = 3
lista
Explanation: LISTS: SPECIAL CASES
As mentioned before, lists can have variable length and are mutable. We have also seen that they can be defined with [ ].
Another way to define a list is with the list function.
End of explanation
# Definimos una lista
lista = ['Lunes', 'Jueves']
print(lista)
# Mediante la instrucción "append(valor)" añadimos a dicha lista un valor
# Con append(valor) añadimos siempre el elemento al final de la lista
lista.append('Viernes')
print(lista)
# La instrucción "insert(posición,valor)" es similar a append(valor), con la salvedad de que nos permite añadir el elemento
# en la posición que nostros queramos
lista.insert(1, 'Martes')
print(lista)
lista.insert(2, 'Miércoles')
print(lista)
Explanation: ADDING AND REMOVING ELEMENTS FROM A LIST
Although elements can be added as shown before, the most efficient way is the append method, which adds elements at the end of the list. Another way to add elements is the insert method, which inserts an element at a given position.
End of explanation
# Para usar la instrucción "pop(posición)" hay que definir una variable que realice dicha operación
# En este ejemplo borramos la posición 3 "Jueves"
borrar = lista.pop(3)
borrar
# Verificamos que la lista ya no contiene el elemento "Jueves"
lista
Explanation: The pop operation removes the element of the list at a given position.
End of explanation
lista.remove('Viernes')
lista
Explanation: But we may need to remove an element from the list without knowing the position it occupies. In those cases we use the remove method.
End of explanation
# Definimos una lista
lista = [5,7,2,0,4,7,1,5,4,3,4,1,9,0]
# Mediante el método "sort()" ordenamos la lista
lista.sort()
# Con "remove()" eliminamos un elemento especificando entre paréntesis el primer valor a eliminar, en caso de que haya dos
# o más elementos igules en la lista. La diferencia con "pop(posición)" es que si la lista no está ordenada eliminamos el primer
# elemento que coincida, con pop() podemos eliminar un elemento concreto.
print(lista)
lista.remove(0)
lista
Explanation: SORTING LISTS
The sort method sorts a list in place, without creating a new list, which makes the operation very efficient.
End of explanation
# Se define la variable 'l' como un rango que empieza en 0, termina en 11 y va en saltos de 2
l = range(0,11,2)
# Después mediante la función list() creamos una lista de 'l'
list(l)
# Ejemplo desglosado, en este caso si no se especifica un paso, lo realiza de 1 en 1
# Variable y muestra del valor de dicha variable
l = range(-5, 5)
l
# Uso de list() para crear la lista de la variable 'l'
list(l)
Explanation: GENERATING LISTS
Python 2.7 provides the built-in function range(start, end, step) to generate lists automatically. In Python 3.5 it does not produce a list; it produces an iterable object.
End of explanation
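# A small illustrative aside (not from the original notebook): list
# comprehensions are another common way to generate lists from any iterable.
squares = [n ** 2 for n in range(0, 11, 2)]
squares  # [0, 4, 16, 36, 64, 100]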
# Definimos una variable con un valor string
a = "Ana"
# Se considera que la cadena o string "Ana" es una secuencia de 3 posiciones, siendo:
# A --> posición 0
# n --> posición 1
# a --> posición 2
# Por tanto se puede acceder a cada uno de los caracteres que componen un string como si fueran elementos de una lista o tupla
a, a[0], a[2]
# Los espacios no se tienen en cuenta
mensaje = "Vaya calor que hace"
mensaje[0], mensaje[12]
# Las cadenas en Python son inmutables. Eso quiere decir que no es posible modificar una cadena sin crear otra nueva.
b = mensaje.replace('V', 'v')
b, mensaje
# Uso de len() para saber la longitud de la string como si fuera una lista o tupla
len(mensaje)
Explanation: STR CHARACTER SEQUENCES
Strings are treated as sequences of characters, so they can be handled like other sequences (tuples or lists). We can access each character of a string:
End of explanation |
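# A short illustrative aside (not from the original notebook): strings accept
# the same slicing syntax shown above for lists and tuples.
mensaje = "Vaya calor que hace"
mensaje[0:4], mensaje[::-1], "-".join(["a", "b", "c"])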
12,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tree
定义
一棵二叉树的定义如下。key可以存储任意的对象,亦即每棵树也可以是其他树的子树。
Step1: 遍历
前序
中序
后序
Step2: 二叉堆实现优先队列
二叉堆是队列的一种实现方式。
二叉堆可以用完全二叉树来实现。所谓完全二叉树(complete binary tree),有定义如下:
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
除叶节点外,所有层都是填满的,叶节点则按照从左至右的顺序填满。
完全二叉树的一个重要性质:
当以列表表示完全二叉树时,位置 p 的父节点,其 left child 位于 2p 位置,其 right child 位于 2p+1 的位置。
为了满足使用列表表示的性质,列表中第一个位置list[0]由 0 填充,树从list[1]开始。
Operations
BinaryHeap() creates a new, empty, binary heap.
insert(k) adds a new item to the heap.
findMin() returns the item with the minimum key value, leaving item in the heap.
delMin() returns the item with the minimum key value, removing the item from the heap.
isEmpty() returns true if the heap is empty, false otherwise.
size() returns the number of items in the heap.
buildHeap(list) builds a new heap from a list of keys.
Step3: Binary Search Trees
Their behaviour is very close to that of a map (dictionary).
Operations
Map() Create a new, empty map.
put(key,val) Add a new key-value pair to the map. If the key is already in the map then replace the old value with the new value.
get(key) Given a key, return the value stored in the map or None otherwise.
del Delete the key-value pair from the map using a statement of the form del map[key].
len() Return the number of key-value pairs stored in the map.
in Return True for a statement of the form key in map, if the given key is in the map. | Python Code:
class BinaryTree():
def __init__(self, root_obj):
self.key = root_obj
self.left_child = None
self.right_child = None
def insert_left(self, new_node):
        # if the tree does not have a left child
        # then create a node: a tree without children
if self.left_child is None:
self.left_child = BinaryTree(new_node)
# if there is a child, then concat the child
# under the node we inserted
else:
t = BinaryTree(new_node)
t.left_child = self.left_child
self.left_child = t
def insert_right(self, new_node):
        # if the tree does not have a right child
        # then create a node: a tree without children
if self.right_child is None:
self.right_child = BinaryTree(new_node)
# if there is a child, then concat the child
# under the node we inserted
else:
t = BinaryTree(new_node)
t.right_child = self.right_child
self.right_child = t
def get_right_child(self):
return self.right_child
def get_left_child(self):
return self.left_child
def set_root(self, obj):
self.key = obj
def get_root(self):
return self.key
r = BinaryTree('a')
print r.get_root()
print r.get_left_child()
r.insert_left('b')
print r.get_left_child().get_root()
Explanation: Tree
Definition
A binary tree is defined below. key can store any object, which also means every tree can be a subtree of another tree.
End of explanation
def preorder(tree):
if tree:
print tree.get_root()
preorder(tree.get_left_child())
preorder(tree.get_right_child())
def postorder(tree):
if tree:
postorder(tree.get_left_child())
postorder(tree.get_right_child())
print tree.get_root()
def inorder(tree):
if tree:
inorder(tree.get_left_child())
print tree.get_root()
inorder(tree.get_right_child())
r = BinaryTree('root')
r.insert_left('l1')
r.insert_left('l2')
r.insert_right('r1')
r.insert_right('r2')
r.get_left_child().insert_right('r3')
preorder(r)
Explanation: Traversal
Pre-order
In-order
Post-order
End of explanation
class BinHeap(object):
def __init__(self):
self.heap_list = [0]
self.current_size = 0
Explanation: A binary heap as a priority queue
A binary heap is one way to implement a priority queue.
A binary heap can be implemented with a complete binary tree, defined as follows:
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
In other words, every level except the leaf level is full, and the leaves are filled from left to right.
An important property of a complete binary tree:
When a complete binary tree is stored as a list, the node at position p has its left child at position 2p and its right child at position 2p+1.
To make this list representation work, the first slot list[0] is filled with 0 and the tree starts at list[1].
Operations
BinaryHeap() creates a new, empty, binary heap.
insert(k) adds a new item to the heap.
findMin() returns the item with the minimum key value, leaving item in the heap.
delMin() returns the item with the minimum key value, removing the item from the heap.
isEmpty() returns true if the heap is empty, false otherwise.
size() returns the number of items in the heap.
buildHeap(list) builds a new heap from a list of keys.
End of explanation
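# The BinHeap class above only defines __init__. As a hedged sketch of how
# insert could use the 2p / 2p+1 property described above (following the usual
# list-based textbook approach, attached here only for illustration):
def _perc_up(self, i):
    while i // 2 > 0:
        if self.heap_list[i] < self.heap_list[i // 2]:
            self.heap_list[i], self.heap_list[i // 2] = \
                self.heap_list[i // 2], self.heap_list[i]
        i = i // 2
def _insert(self, k):
    self.heap_list.append(k)
    self.current_size += 1
    self._perc_up(self.current_size)
BinHeap._perc_up = _perc_up
BinHeap.insert = _insert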
class BinarySearchTree(object):
def __init__(self):
self.root = None
self.size = 0
def length(self):
return self.size
def __len__(self):
return self.size
def __iter__(self):
return self.root.__iter__()
def put(self, key, val):
if self.root:
self._put(key, val, self.root)
else:
self.root = TreeNode(key, val)
self.size += 1
    def _put(self, key, val, current_node):
        if key < current_node.key:
            if current_node.has_left_child():
                self._put(key, val, current_node.left_child)
            else:
                current_node.left_child = TreeNode(key, val, parent=current_node)
        else:
            if current_node.has_right_child():
                self._put(key, val, current_node.right_child)
            else:
                current_node.right_child = TreeNode(key, val, parent=current_node)
def __setitem__(self, k, v):
self.put(k, v)
class TreeNode(object):
def __init__(self, key, val, left=None, right=None, parent=None):
self.key = key
self.payload = val
self.left_child = left
self.right_child = right
self.parent = parent
def has_left_child(self):
return self.left_child
def has_right_child(self):
return self.right_child
def is_root(self):
return not self.parent
def is_leaf(self):
return not (self.right_child or self.left_child)
def has_any_children(self):
return self.right_child or self.left_child
def has_both_children(self):
        return self.right_child and self.left_child
def replace_node_data(self, key, value, lc, rc):
self.key = key
self.payload = value
self.left_child = lc
self.right_child = rc
if self.has_left_child():
self.left_child.parent = self
if self.has_right_child():
self.right_child.parent = self
Explanation: Binary Search Trees
Their behaviour is very close to that of a map (dictionary).
Operations
Map() Create a new, empty map.
put(key,val) Add a new key-value pair to the map. If the key is already in the map then replace the old value with the new value.
get(key) Given a key, return the value stored in the map or None otherwise.
del Delete the key-value pair from the map using a statement of the form del map[key].
len() Return the number of key-value pairs stored in the map.
in Return True for a statement of the form key in map, if the given key is in the map.
End of explanation |
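# The Operations list above mentions get(key), which the class as written does
# not implement yet. A hedged sketch of how it could look, mirroring _put's
# recursion (illustrative only, not the book's exact code):
def _bst_get(self, key):
    def _get(key, current_node):
        if current_node is None:
            return None
        if current_node.key == key:
            return current_node.payload
        if key < current_node.key:
            return _get(key, current_node.left_child)
        return _get(key, current_node.right_child)
    return _get(key, self.root)
BinarySearchTree.get = _bst_get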
12,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Is Augmentation Necessary?
In this notebook, we will check how the network trained on ordinary data copes with the augmented data and what will happen if it is learned from the augmented data.
How the implement class with the neural network you'll see in this file.
Step1: Create batch class depended from MnistBatch
Step2: Already familiar to us the construction to create the pipelines. These pipelines train NN on simple MNIST images, without shift.
Step3: Train the model by using next_batch method
Step4: Get variable from pipeline and print accuracy on data without shift
Step5: Now check, how change accuracy, if the first model testing on shift data
Step6: In order for the model to be able to predict the augmentation data, we will teach it on such data
Step7: And now check, how change accuracy on shift data
Step8: It's really better than before.
It is interesting, on what figures we are mistaken? | Python Code:
import sys
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqn
%matplotlib inline
sys.path.append('../../..')
sys.path.append('../../utils')
import utils
from secondbatch import MnistBatch
from simple_conv_model import ConvModel
from batchflow import V, B
from batchflow.opensets import MNIST
Explanation: Is Augmentation Necessary?
In this notebook, we check how a network trained on ordinary data copes with augmented data, and what happens if it is trained on the augmented data instead.
You can see how the class with the neural network is implemented in this file.
End of explanation
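# A rough, hypothetical sketch of what a shift augmentation such as
# shift_flattened_pic might do (the real action lives in secondbatch.py and
# may differ): shift each 28x28 image by a few random pixels. Note that
# np.roll wraps around; a real augmentation might pad with zeros instead.
def random_shift(image_2d, max_shift=4):
    dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(image_2d, dy, axis=0), dx, axis=1)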
mnistset = MNIST(batch_class=MnistBatch)
Explanation: Create a batch class derived from MnistBatch
End of explanation
normal_train_ppl = (
mnistset.train.p
.init_model('dynamic',
ConvModel,
'conv',
config={'inputs': dict(images={'shape': (28, 28, 1)},
labels={'classes': (10),
'transform': 'ohe',
'name': 'targets'}),
'loss': 'ce',
'optimizer':'Adam',
'input_block/inputs': 'images',
'head/units': 10,
'output': dict(ops=['labels',
'proba',
'accuracy'])})
.train_model('conv',
feed_dict={'images': B('images'),
'labels': B('labels')})
)
normal_test_ppl = (
mnistset.test.p
.import_model('conv', normal_train_ppl)
.init_variable('test_accuracy', init_on_each_run=int)
.predict_model('conv',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('test_accuracy'),
mode='w'))
Explanation: Here is the already familiar construction for creating the pipelines. These pipelines train the NN on plain MNIST images, without shifts.
End of explanation
batch_size = 400
for i in tqn(range(600)):
normal_train_ppl.next_batch(batch_size, n_epochs=None)
normal_test_ppl.next_batch(batch_size, n_epochs=None)
Explanation: Train the model using the next_batch method
End of explanation
acc = normal_test_ppl.get_variable('test_accuracy')
print('Accuracy on normal data: {:.2%}'.format(acc))
Explanation: Get the variable from the pipeline and print the accuracy on data without shifts
End of explanation
shift_test_ppl= (
mnistset.test.p
.import_model('conv', normal_train_ppl)
.shift_flattened_pic()
.init_variable('predict', init_on_each_run=int)
.predict_model('conv',
fetches='output_accuracy',
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=V('predict'),
mode='w')
.run(batch_size, n_epochs=1)
)
print('Accuracy with shift: {:.2%}'.format(shift_test_ppl.get_variable('predict')))
Explanation: Now let's check how the accuracy changes if the first model is tested on shifted data
End of explanation
shift_train_ppl = (
mnistset.train.p
.shift_flattened_pic()
.init_model('dynamic',
ConvModel,
'conv',
config={'inputs': dict(images={'shape': (28, 28, 1)},
labels={'classes': (10),
'transform': 'ohe',
'name': 'targets'}),
'loss': 'ce',
'optimizer':'Adam',
'input_block/inputs': 'images',
'head/units': 10,
'output': dict(ops=['labels',
'proba',
'accuracy'])})
.train_model('conv',
feed_dict={'images': B('images'),
'labels': B('labels')})
)
for i in tqn(range(600)):
shift_train_ppl.next_batch(batch_size, n_epochs=None)
Explanation: For the model to be able to handle augmented data, we train it on such data
End of explanation
shift_test_ppl = (
mnistset.test.p
.import_model('conv', shift_train_ppl)
.shift_flattened_pic()
.init_variable('acc', init_on_each_run=list)
.init_variable('img', init_on_each_run=list)
.init_variable('predict', init_on_each_run=list)
.predict_model('conv',
fetches=['output_accuracy', 'inputs', 'output_proba'],
feed_dict={'images': B('images'),
'labels': B('labels')},
save_to=[V('acc'), V('img'), V('predict')],
mode='a')
.run(1, n_epochs=1)
)
print('Accuracy with shift: {:.2%}'.format(np.mean(shift_test_ppl.get_variable('acc'))))
Explanation: And now let's check how the accuracy changes on shifted data
End of explanation
acc = shift_test_ppl.get_variable('acc')
img = shift_test_ppl.get_variable('img')
predict = shift_test_ppl.get_variable('predict')
_, ax = plt.subplots(3, 4, figsize=(16, 16))
ax = ax.reshape(-1)
for i in range(12):
index = np.where(np.array(acc) == 0)[0][i]
ax[i].imshow(img[index]['images'].reshape(-1,28))
ax[i].set_xlabel('Predict: {}'.format(np.argmax(predict[index][0])), fontsize=18)
ax[i].grid()
Explanation: That's much better than before.
It is interesting to see which digits the model gets wrong.
End of explanation |
12,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we will show how to load pre-trained models and draw things with sketch-rnn
Step3: define the path of the model you want to load, and also the path of the dataset
Step4: We define two convenience functions to encode a stroke into a latent vector, and decode from latent vector to stroke.
Step5: Let's try to encode the sample stroke into latent vector $z$
Step6: Create generated grid at various temperatures from 0.1 to 1.0
Step7: Latent Space Interpolation Example between $z_0$ and $z_1$
Step8: Now we interpolate between sheep $z_0$ and sheep $z_1$
Step9: Let's load the Flamingo Model, and try Unconditional (Decoder-Only) Generation
Step10: Let's load the owl model, and generate two sketches using two random IID gaussian latent vectors
Step11: Let's interpolate between the two owls $z_0$ and $z_1$
Step12: Let's load the model trained on both cats and buses! catbus!
Step13: Let's interpolate between a cat and a bus!!!
Step14: Why stop here? Let's load the model trained on both elephants and pigs!!!
Step15: Tribute to an episode of South Park | Python Code:
# import the required libraries
import numpy as np
import time
import random
import cPickle
import codecs
import collections
import os
import math
import json
import tensorflow as tf
from six.moves import xrange
# libraries required for visualisation:
from IPython.display import SVG, display
import PIL
from PIL import Image
import matplotlib.pyplot as plt
# set numpy output to something sensible
np.set_printoptions(precision=8, edgeitems=6, linewidth=200, suppress=True)
!pip install -qU svgwrite
import svgwrite # conda install -c omnia svgwrite=1.1.6
tf.logging.info("TensorFlow Version: %s", tf.__version__)
!pip install -q magenta
# import our command line tools
from magenta.models.sketch_rnn.sketch_rnn_train import *
from magenta.models.sketch_rnn.model import *
from magenta.models.sketch_rnn.utils import *
from magenta.models.sketch_rnn.rnn import *
# little function that displays vector images and saves them to .svg
def draw_strokes(data, factor=0.2, svg_filename = '/tmp/sketch_rnn/svg/sample.svg'):
tf.gfile.MakeDirs(os.path.dirname(svg_filename))
min_x, max_x, min_y, max_y = get_bounds(data, factor)
dims = (50 + max_x - min_x, 50 + max_y - min_y)
dwg = svgwrite.Drawing(svg_filename, size=dims)
dwg.add(dwg.rect(insert=(0, 0), size=dims,fill='white'))
lift_pen = 1
abs_x = 25 - min_x
abs_y = 25 - min_y
p = "M%s,%s " % (abs_x, abs_y)
command = "m"
for i in xrange(len(data)):
if (lift_pen == 1):
command = "m"
elif (command != "l"):
command = "l"
else:
command = ""
x = float(data[i,0])/factor
y = float(data[i,1])/factor
lift_pen = data[i, 2]
p += command+str(x)+","+str(y)+" "
the_color = "black"
stroke_width = 1
dwg.add(dwg.path(p).stroke(the_color,stroke_width).fill("none"))
dwg.save()
display(SVG(dwg.tostring()))
# generate a 2D grid of many vector drawings
def make_grid_svg(s_list, grid_space=10.0, grid_space_x=16.0):
def get_start_and_end(x):
x = np.array(x)
x = x[:, 0:2]
x_start = x[0]
x_end = x.sum(axis=0)
x = x.cumsum(axis=0)
x_max = x.max(axis=0)
x_min = x.min(axis=0)
center_loc = (x_max+x_min)*0.5
return x_start-center_loc, x_end
x_pos = 0.0
y_pos = 0.0
result = [[x_pos, y_pos, 1]]
for sample in s_list:
s = sample[0]
grid_loc = sample[1]
grid_y = grid_loc[0]*grid_space+grid_space*0.5
grid_x = grid_loc[1]*grid_space_x+grid_space_x*0.5
start_loc, delta_pos = get_start_and_end(s)
loc_x = start_loc[0]
loc_y = start_loc[1]
new_x_pos = grid_x+loc_x
new_y_pos = grid_y+loc_y
result.append([new_x_pos-x_pos, new_y_pos-y_pos, 0])
result += s.tolist()
result[-1][2] = 1
x_pos = new_x_pos+delta_pos[0]
y_pos = new_y_pos+delta_pos[1]
return np.array(result)
Explanation: In this notebook, we will show how to load pre-trained models and draw things with sketch-rnn
End of explanation
data_dir = 'http://github.com/hardmaru/sketch-rnn-datasets/raw/master/aaron_sheep/'
models_root_dir = '/tmp/sketch_rnn/models'
model_dir = '/tmp/sketch_rnn/models/aaron_sheep/layer_norm'
download_pretrained_models(models_root_dir=models_root_dir)
def load_env_compatible(data_dir, model_dir):
Loads environment for inference mode, used in jupyter notebook.
# modified https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/sketch_rnn_train.py
# to work with depreciated tf.HParams functionality
model_params = sketch_rnn_model.get_default_hparams()
with tf.gfile.Open(os.path.join(model_dir, 'model_config.json'), 'r') as f:
data = json.load(f)
fix_list = ['conditional', 'is_training', 'use_input_dropout', 'use_output_dropout', 'use_recurrent_dropout']
for fix in fix_list:
data[fix] = (data[fix] == 1)
model_params.parse_json(json.dumps(data))
return load_dataset(data_dir, model_params, inference_mode=True)
def load_model_compatible(model_dir):
Loads model for inference mode, used in jupyter notebook.
# modified https://github.com/tensorflow/magenta/blob/master/magenta/models/sketch_rnn/sketch_rnn_train.py
# to work with depreciated tf.HParams functionality
model_params = sketch_rnn_model.get_default_hparams()
with tf.gfile.Open(os.path.join(model_dir, 'model_config.json'), 'r') as f:
data = json.load(f)
fix_list = ['conditional', 'is_training', 'use_input_dropout', 'use_output_dropout', 'use_recurrent_dropout']
for fix in fix_list:
data[fix] = (data[fix] == 1)
model_params.parse_json(json.dumps(data))
model_params.batch_size = 1 # only sample one at a time
eval_model_params = sketch_rnn_model.copy_hparams(model_params)
eval_model_params.use_input_dropout = 0
eval_model_params.use_recurrent_dropout = 0
eval_model_params.use_output_dropout = 0
eval_model_params.is_training = 0
sample_model_params = sketch_rnn_model.copy_hparams(eval_model_params)
sample_model_params.max_seq_len = 1 # sample one point at a time
return [model_params, eval_model_params, sample_model_params]
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env_compatible(data_dir, model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
Explanation: define the path of the model you want to load, and also the path of the dataset
End of explanation
def encode(input_strokes):
strokes = to_big_strokes(input_strokes).tolist()
strokes.insert(0, [0, 0, 1, 0, 0])
seq_len = [len(input_strokes)]
draw_strokes(to_normal_strokes(np.array(strokes)))
return sess.run(eval_model.batch_z, feed_dict={eval_model.input_data: [strokes], eval_model.sequence_lengths: seq_len})[0]
def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
z = None
if z_input is not None:
z = [z_input]
sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len, temperature=temperature, z=z)
strokes = to_normal_strokes(sample_strokes)
if draw_mode:
draw_strokes(strokes, factor)
return strokes
# get a sample drawing from the test set, and render it to .svg
stroke = test_set.random_sample()
draw_strokes(stroke)
Explanation: We define two convenience functions to encode a stroke into a latent vector, and decode from latent vector to stroke.
End of explanation
z = encode(stroke)
_ = decode(z, temperature=0.8) # convert z back to drawing at temperature of 0.8
Explanation: Let's try to encode the sample stroke into latent vector $z$
End of explanation
stroke_list = []
for i in range(10):
stroke_list.append([decode(z, draw_mode=False, temperature=0.1*i+0.1), [0, i]])
stroke_grid = make_grid_svg(stroke_list)
draw_strokes(stroke_grid)
Explanation: Create generated grid at various temperatures from 0.1 to 1.0
End of explanation
# get a sample drawing from the test set, and render it to .svg
z0 = z
_ = decode(z0)
stroke = test_set.random_sample()
z1 = encode(stroke)
_ = decode(z1)
Explanation: Latent Space Interpolation Example between $z_0$ and $z_1$
End of explanation
z_list = [] # interpolate spherically between z0 and z1
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z0, z1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False), [0, i]])
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
Explanation: Now we interpolate between sheep $z_0$ and sheep $z_1$
End of explanation
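# slerp above comes from magenta's sketch_rnn utilities. For reference, a
# spherical linear interpolation between vectors p0 and p1 is commonly
# computed like this (an illustrative sketch, not the library's exact code):
def slerp_sketch(p0, p1, t):
    omega = np.arccos(np.dot(p0 / np.linalg.norm(p0), p1 / np.linalg.norm(p1)))
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1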
model_dir = '/tmp/sketch_rnn/models/flamingo/lstm_uncond'
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
# randomly unconditionally generate 10 examples
N = 10
reconstructions = []
for i in range(N):
reconstructions.append([decode(temperature=0.5, draw_mode=False), [0, i]])
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
Explanation: Let's load the Flamingo Model, and try Unconditional (Decoder-Only) Generation
End of explanation
model_dir = '/tmp/sketch_rnn/models/owl/lstm'
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
Explanation: Let's load the owl model, and generate two sketches using two random IID gaussian latent vectors
End of explanation
z_list = [] # interpolate spherically between z_0 and z_1
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_0, z_1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.1), [0, i]])
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
Explanation: Let's interpolate between the two owls $z_0$ and $z_1$
End of explanation
model_dir = '/tmp/sketch_rnn/models/catbus/lstm'
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
Explanation: Let's load the model trained on both cats and buses! catbus!
End of explanation
z_list = [] # interpolate spherically between z_1 and z_0
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_1, z_0, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.15), [0, i]])
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
Explanation: Let's interpolate between a cat and a bus!!!
End of explanation
model_dir = '/tmp/sketch_rnn/models/elephantpig/lstm'
[hps_model, eval_hps_model, sample_hps_model] = load_model_compatible(model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
z_0 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_0)
z_1 = np.random.randn(eval_model.hps.z_size)
_ = decode(z_1)
Explanation: Why stop here? Let's load the model trained on both elephants and pigs!!!
End of explanation
z_list = [] # interpolate spherically between z_1 and z_0
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z_0, z_1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False, temperature=0.15), [0, i]])
stroke_grid = make_grid_svg(reconstructions, grid_space_x=25.0)
draw_strokes(stroke_grid, factor=0.3)
Explanation: Tribute to an episode of South Park: The interpolation between an Elephant and a Pig
End of explanation |
12,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Domain DPAPI Backup Key Extraction
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Monitor for any SecretObject with the string BCKUPKEY in the ObjectName
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
We can get the user logon id of the user that accessed the bckupkey object and JOIN it with a successful logon event (4624) user logon id to find the source IP
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Monitoring for access to the protected_storage named pipe via SMB is very interesting to identify potential DPAPI activity over the network. Mimikatz uses the Lsarpc named pipe now.
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
This event generates every time that a backup is attempted for the DPAPI Master Key. When a computer is a member of a domain, DPAPI has a backup mechanism to allow unprotection of the data. When a Master Key is generated, DPAPI communicates with a domain controller. It migt be aleready created and this event might not trigger.
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Domain DPAPI Backup Key Extraction
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/06/20 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be extracting the DPAPI domain backup key from my DC to be able to decrypt any domain user master key files.
Technical Context
Starting with Microsoft® Windows® 2000, the operating system began to provide a data protection application-programming interface (API).
This Data Protection API (DPAPI) is a pair of function calls (CryptProtectData / CryptUnprotectData) that provide operating system-level data protection services to user and system processes.
DPAPI initially generates a strong key called a MasterKey, which is protected by the user's password. DPAPI uses a standard cryptographic process called Password-Based Key Derivation to generate a key from the password.
This password-derived key is then used with Triple-DES to encrypt the MasterKey, which is finally stored in the user's profile directory.
When a computer is a member of a domain, DPAPI has a backup mechanism to allow unprotection of the data. When a MasterKey is generated, DPAPI talks to a Domain Controller.
Domain Controllers have a domain-wide public/private key pair, associated solely with DPAPI. The local DPAPI client gets the Domain Controller public key from a Domain Controller by using a mutually authenticated and privacy protected RPC call. The client encrypts the MasterKey with the Domain Controller public key. It then stores this backup MasterKey along with the MasterKey protected by the user's password.
Offensive Tradecraft
If an adversary obtains domain admin (or equivalent) privileges, the domain backup key can be stolen and used to decrypt any domain user master key.
Tools such as Mimikatz with the method/module lsadump::backupkeys can be used to extract the domain backup key.
It uses the LsaOpenPolicy/LsaRetrievePrivateData API calls (instead of MS-BKRP) to retrieve the value for the G$BCKUPKEY_PREFERRED and G$BCKUPKEY_P LSA secrets.
Additional reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/data_protection_api.md
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/lsa_policy_objects.md
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/06_credential_access/SDWIN-190518235535.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/empire_mimikatz_backupkeys_dcerpc_smb_lsarpc.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/empire_mimikatz_backupkeys_dcerpc_smb_lsarpc.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ObjectServer, ObjectType, ObjectName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4662
AND AccessMask = "0x2"
AND lower(ObjectName) LIKE "%bckupkey%"
'''
)
df.show(10,False)
Explanation: Analytic I
Monitor for any SecretObject with the string BCKUPKEY in the ObjectName
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows active directory | Microsoft-Windows-Security-Auditing | User accessed AD Object | 4662 |
End of explanation
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.ObjectName, a.IpAddress
FROM mordorTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.SubjectLogonId = a.TargetLogonId
WHERE LOWER(Channel) = "security"
AND o.EventID = 4662
AND o.AccessMask = "0x2"
AND lower(o.ObjectName) LIKE "%bckupkey%"
AND o.Hostname = a.Hostname
'''
)
df.show(10,False)
Explanation: Analytic II
We can get the user logon id of the user that accessed the bckupkey object and JOIN it with a successful logon event (4624) user logon id to find the source IP
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
| Windows active directory | Microsoft-Windows-Security-Auditing | User accessed AD Object | 4662 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ShareName, RelativeTargetName, AccessMask, IpAddress
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND ShareName LIKE "%IPC%"
AND RelativeTargetName = "protected_storage"
'''
)
df.show(10,False)
Explanation: Analytic III
Monitoring for access to the protected_storage named pipe via SMB is very interesting to identify potential DPAPI activity over the network. Mimikatz uses the Lsarpc named pipe now.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4692
'''
)
df.show(10,False)
Explanation: Analytic IV
This event is generated every time a backup is attempted for the DPAPI Master Key. When a computer is a member of a domain, DPAPI has a backup mechanism to allow unprotection of the data. When a Master Key is generated, DPAPI communicates with a domain controller. The backup key might already exist, in which case this event might not trigger.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User requested access File | 4692 |
End of explanation |
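# Optional follow-up (not part of the original analytics): aggregate the 4662
# backup-key accesses per account. Column names are assumed to match those
# used above and may need adjusting for other data sources.
df = spark.sql(
    '''
    SELECT SubjectUserName, COUNT(*) AS bckupkey_accesses
    FROM mordorTable
    WHERE LOWER(Channel) = "security"
        AND EventID = 4662
        AND lower(ObjectName) LIKE "%bckupkey%"
    GROUP BY SubjectUserName
    '''
)
df.show(10,False)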
12,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a brief sketch of how to use Grover's algorithm.
We start by declaring all necessary imports.
Step1: Grover's algorithm can be used to amplify the probability of an oracle-selected bitstring in an input superposition state.
Step2: We'll make sure the bitstring_map is as we expect.
Step3: To use Grover's algorithm on quantum hardware we need to define a connection to a QVM or QPU. However we don't have a real connection in this notebook, so we just mock out the response. If you intend to run this notebook on a QVM or QPU, be sure to replace qc with a pyQuil QuantumComputer object.
Step4: Now let's run Grover's algorithm. We instantiate the Grover object and then call its find_bitstring method.
Finally we assert its correctness by checking with the bitstring we used to generate the object. | Python Code:
from itertools import product
from mock import patch
from grove.amplification.grover import Grover
Explanation: This notebook is a brief sketch of how to use Grover's algorithm.
We start by declaring all necessary imports.
End of explanation
target_bitstring = '010'
bit = ("0", "1")
bitstring_map = {}
target_bitstring_phase = -1
nontarget_bitstring_phase = 1
# We construct the bitmap for the oracle
for bitstring in product(bit, repeat=len(target_bitstring)):
bitstring = "".join(bitstring)
if bitstring == target_bitstring:
bitstring_map[bitstring] = target_bitstring_phase
else:
bitstring_map[bitstring] = nontarget_bitstring_phase
Explanation: Grover's algorithm can be used to amplify the probability of an oracle-selected bitstring in an input superposition state.
End of explanation
target_bitstring_phase = -1
nontarget_bitstring_phase = 1
for k,v in bitstring_map.items():
if k == target_bitstring:
assert v == target_bitstring_phase, "The target bitstring has the wrong phase."
else:
assert v == nontarget_bitstring_phase, "A nontarget bistring has the wrong phase."
Explanation: We'll make sure the bitstring_map is as we expect.
End of explanation
with patch("pyquil.api.QuantumComputer") as qc:
qc.run.return_value = [[int(bit) for bit in target_bitstring]]
Explanation: To use Grover's algorithm on quantum hardware we need to define a connection to a QVM or QPU. However we don't have a real connection in this notebook, so we just mock out the response. If you intend to run this notebook on a QVM or QPU, be sure to replace qc with a pyQuil QuantumComputer object.
End of explanation
grover = Grover()
found_bitstring = grover.find_bitstring(qc, bitstring_map)
assert found_bitstring == target_bitstring, "Found bitstring is not the expected bitstring"
Explanation: Now let's run Grover's algorithm. We instantiate the Grover object and then call its find_bitstring method.
Finally we assert its correctness by checking with the bitstring we used to generate the object.
End of explanation |
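# A small aside (not from the original notebook): the number of Grover
# iterations needed grows as roughly (pi/4) * sqrt(2**n) for an n-qubit search
# space, which for the 3-bit example above is about 2 iterations.
import math
print(math.floor(math.pi / 4 * math.sqrt(2 ** len(target_bitstring))))  # 2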
12,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Linear Regression
Computational bayes final project.
Nathan Yee
Uma Desai
First example to gain understanding is taken from Cypress Frankenfeld.
http
Step1: Load data from csv file
Step2: Create x and y vectors. x is the ages, y is the heights
Step4: Abstract least squares function using a function
Step5: Use the leastSquares function to get a slope and intercept. Then use the slope and intercept to calculate the size of our alpha and beta ranges
Step6: Visualize the slope and intercept on the same plot as the data so make sure it is working correctly
Step7: Make range of alphas (intercepts), betas (slopes), and sigmas (errors)
Step8: Turn those alphas, betas, and sigmas into our hypotheses
Step9: Make data
Step13: Next make age class where likelihood is calculated based on error from data
Step14: Get random samples of pairs of months and heights. Here we want at least 10000 items to get very smooth sampling
Step15: Next, we want to get the intensity of the data at locations. We do adding the randomly sampled values to buckets. This gives us intensity values for a grid of pixels in our sample range.
Step18: Since density plotting is much easier in Mathematica, we are going to export all our data to csv files and plot them in Mathematica
Step20: <img src="ageHeightAllPlots3.png" alt="Density Plot with orignal data/fit" height="400" width="400"> | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf
import thinkplot
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Bayesian Linear Regression
Computational bayes final project.
Nathan Yee
Uma Desai
First example to gain understanding is taken from Cypress Frankenfeld.
http://allendowney.blogspot.com/2015/04/two-hour-marathon-by-2041-probably.html
End of explanation
df = pd.read_csv('ageVsHeight.csv', skiprows=0, delimiter='\t')
df
Explanation: Load data from csv file
End of explanation
ages = np.array(df['age'])
heights = np.array(df['height'])
Explanation: Create x and y vectors. x is the ages, y is the heights
End of explanation
def leastSquares(x, y):
leastSquares takes in two arrays of values. Then it returns the slope and intercept
of the least squares of the two.
Args:
x (numpy array): numpy array of values.
y (numpy array): numpy array of values.
Returns:
slope, intercept (tuple): returns a tuple of floats.
A = np.vstack([x, np.ones(len(x))]).T
slope, intercept = np.linalg.lstsq(A, y)[0]
return slope, intercept
Explanation: Abstract the least squares computation into a reusable function
End of explanation
slope, intercept = leastSquares(ages, heights)
print(slope, intercept)
alpha_range = .03 * intercept
beta_range = .05 * slope
Explanation: Use the leastSquares function to get a slope and intercept. Then use the slope and intercept to calculate the size of our alpha and beta ranges
End of explanation
plt.plot(ages, heights, 'o', label='Original data', markersize=10)
plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')
# plt.legend()
plt.show()
Explanation: Visualize the slope and intercept on the same plot as the data to make sure it is working correctly
End of explanation
alphas = np.linspace(intercept - alpha_range, intercept + alpha_range, 20)
betas = np.linspace(slope - beta_range, slope + beta_range, 20)
sigmas = np.linspace(2, 4, 15)
# alphas = np.linspace(intercept * (1 - alpha_range),
# intercept * (1 + alpha_range),
# 5)
# betas = np.linspace(slope * (1 - beta_range),
# slope * (1 + beta_range),
# 5)
# sigmas = np.linspace(.1, .2, 5)
# alphas = np.linspace(65, 75, 20)
# betas = np.linspace(.4, .55, 20)
# sigmas = np.linspace(2, 3.5, 10)
Explanation: Make range of alphas (intercepts), betas (slopes), and sigmas (errors)
End of explanation
hypos = ((alpha, beta, sigma) for alpha in alphas
for beta in betas for sigma in sigmas)
Explanation: Turn those alphas, betas, and sigmas into our hypotheses
End of explanation
data = [(age, height) for age in ages for height in heights]
Explanation: Make data
End of explanation
class leastSquaresHypos(Suite, Joint):
def Likelihood(self, data, hypo):
Likelihood calculates the probability of a particular line (hypo)
based on data (ages Vs height) of our original dataset. This is
done with a normal pmf as each hypo also contains a sigma.
Args:
data (tuple): tuple that contains ages (float), heights (float)
hypo (tuple): intercept (float), slope (float), sigma (float)
Returns:
P(data|hypo)
intercept, slope, sigma = hypo
total_likelihood = 1
for age, measured_height in data:
hypothesized_height = slope * age + intercept
error = measured_height - hypothesized_height
total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma)
return total_likelihood
LeastSquaresHypos = leastSquaresHypos(hypos)
for item in data:
LeastSquaresHypos.Update([item])
LeastSquaresHypos[LeastSquaresHypos.MaximumLikelihood()]
marginal_intercepts = LeastSquaresHypos.Marginal(0)
thinkplot.hist(marginal_intercepts)
marginal_slopes = LeastSquaresHypos.Marginal(1)
thinkplot.hist(marginal_slopes)
marginal_sigmas = LeastSquaresHypos.Marginal(2)
thinkplot.hist(marginal_sigmas)
def getHeights(hypo_samples, random_months):
    getHeights takes in random hypos and random months and returns the corresponding
    random heights.
    Args:
        hypo_samples (sequence): sampled (intercept, slope, sigma) hypotheses.
        random_months (numpy array): randomly sampled month values.
random_heights = np.zeros(len(random_months))
for i in range(len(random_heights)):
intercept = hypo_samples[i][0]
slope = hypo_samples[i][1]
sigma = hypo_samples[i][2]
month = random_months[i]
random_heights[i] = np.random.normal((slope * month + intercept), sigma, 1)
return random_heights
def getRandomData(start_month, end_month, n, LeastSquaresHypos):
n - number of samples
random_hypos = LeastSquaresHypos.Sample(n)
random_months = np.random.uniform(start_month, end_month, n)
random_heights = getHeights(random_hypos, random_months)
return random_months, random_heights
Explanation: Next, make the suite class whose likelihood is computed from the error between each hypothesized line and the data
End of explanation
num_samples = 10000
random_months, random_heights = getRandomData(18, 29, num_samples, LeastSquaresHypos)
Explanation: Get random samples of pairs of months and heights. Here we want at least 10000 items to get very smooth sampling
End of explanation
num_buckets = 70 # num_buckets^2 is actual number
# create horizontal and vertical linearly spaced ranges as buckets.
hori_range, hori_step = np.linspace(18, 29 , num_buckets, retstep=True)
vert_range, vert_step = np.linspace(65, 100, num_buckets, retstep=True)
hori_step = hori_step / 2
vert_step = vert_step / 2
# store each bucket as a tuple in a the buckets dictionary.
buckets = dict()
keys = [(hori, vert) for hori in hori_range for vert in vert_range]
# set each bucket as empty
for key in keys:
buckets[key] = 0
# loop through the randomly sampled data
for month, height in zip(random_months, random_heights):
# check each bucket and see if randomly sampled data
for key in buckets:
if month > key[0] - hori_step and month < key[0] + hori_step:
if height > key[1] - vert_step and height < key[1] + vert_step:
buckets[key] += 1
break # can only fit in a single bucket
pcolor_months = []
pcolor_heights = []
pcolor_intensities = []
for key in buckets:
pcolor_months.append(key[0])
pcolor_heights.append(key[1])
pcolor_intensities.append(buckets[key])
plt.plot(random_months, random_heights, 'o', label='Random Sampling')
plt.plot(ages, heights, 'o', label='Original data', markersize=10)
plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')
# plt.legend()
plt.show()
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = pcolor_months
y = pcolor_heights
z = pcolor_intensities
ax.scatter(x, y, z, c='r', marker='o')
ax.set_xlabel('Age (Months)')
ax.set_ylabel('Height')
ax.set_zlabel('Intensities')
plt.show()
Explanation: Next, we want to get the intensity of the data at each location. We do this by adding the randomly sampled values to buckets, which gives us intensity values for a grid of pixels over our sample range.
End of explanation
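# Added sketch (not in the original notebook): the nested-loop bucketing above can be
# done in a single call with numpy's 2D histogram. This is only an equivalent, faster
# alternative and reuses the variables defined above.
intensities_2d, month_edges, height_edges = np.histogram2d(
    random_months, random_heights, bins=num_buckets, range=[[18, 29], [65, 100]])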
def append_to_file(path, data):
    """append_to_file appends a line of data to the specified file, then adds a newline.
    Args:
        path (string): the file path
        data (string): the line to write
    Return:
        VOID
    """
with open(path, 'a') as file:
file.write(data + '\n')
def delete_file_contents(path):
    """delete_file_contents deletes the contents of a file.
    Args:
        path (string): the file path
    Return:
        VOID
    """
with open(path, 'w'):
pass
def intensityCSV(x, y, z):
file_name = 'intensityData.csv'
delete_file_contents(file_name)
for xi, yi, zi in zip(x, y, z):
append_to_file(file_name, "{}, {}, {}".format(xi, yi, zi))
def monthHeightCSV(ages, heights):
file_name = 'monthsHeights.csv'
delete_file_contents(file_name)
for month, height in zip(ages, heights):
append_to_file(file_name, "{}, {}".format(month, height))
def fittedLineCSV(ages, slope, intercept):
file_name = 'fittedLineCSV.csv'
delete_file_contents(file_name)
for age in ages:
append_to_file(file_name, "{}, {}".format(age, slope*age + intercept))
def makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept):
intensityCSV(pcolor_months, pcolor_heights, pcolor_intensities)
monthHeightCSV(ages, heights)
fittedLineCSV(ages, slope, intercept)
makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept)
Explanation: Since density plotting is much easier in Mathematica, we are going to export all our data to csv files and plot them in Mathematica
End of explanation
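# Added sketch (not in the original notebook): a pandas-based alternative to the manual
# CSV writers above. The output filename here is only illustrative.
import pandas as pd
pd.DataFrame({'month': pcolor_months,
              'height': pcolor_heights,
              'intensity': pcolor_intensities}).to_csv('intensityData_pandas.csv', index=False)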
def probHeightRange(heights, low, high):
    """probHeightRange returns the probability that height is within a particular range.
    Args:
        heights (sequence): sequence of heights
        low (number): the bottom of the range
        high (number): the top of the range
    Returns:
        prob (float): the probability of being in the height range
    """
successes = 0
total = len(heights)
for height in heights:
if low < height and height < high:
successes += 1
return successes / total
probHeightRange(random_heights, 0, 76.1)
Explanation: <img src="ageHeightAllPlots3.png" alt="Density Plot with orignal data/fit" height="400" width="400">
End of explanation |
12,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark.
Step1: Script settings
Step2: We create one big dataframe, the columns are the sensors | Python Code:
import os, sys
import inspect
import numpy as np
import datetime as dt
import time
import pytz
import pandas as pd
import pdb
script_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
# add the path to opengrid to sys.path
sys.path.append(os.path.join(script_dir, os.pardir, os.pardir))
from opengrid.library import config
c=config.Config()
DEV = c.get('env', 'type') == 'dev' # DEV is True if we are in development environment, False if on the droplet
if not DEV:
# production environment: don't try to display plots
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator
# find tmpo
sys.path.append(c.get('tmpo', 'folder'))
try:
if os.path.exists(c.get('tmpo', 'data')):
path_to_tmpo_data = c.get('tmpo', 'data')
except:
path_to_tmpo_data = None
from opengrid.library.houseprint import houseprint
if DEV:
if c.get('env', 'plots') == 'inline':
%matplotlib inline
else:
%matplotlib qt
else:
pass # don't try to render plots
plt.rcParams['figure.figsize'] = 12,8
import charts
Explanation: This script shows how to use the existing code in opengrid to create a baseload electricity consumption benchmark.
End of explanation
number_of_days = 7
Explanation: Script settings
End of explanation
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
hp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data)
start = pd.Timestamp(time.time() - number_of_days*86400, unit='s')
sensors = hp.get_sensors()
#sensors.remove('b325dbc1a0d62c99a50609e919b9ea06')
for sensor in sensors:
s = sensor.get_data(head=start, resample='s')
try:
s = s.resample(rule='60s', how='max')
s = s.diff()*3600/60
# plot with charts (don't show it) and save html
charts.plot(pd.DataFrame(s), stock=True,
save=os.path.join(c.get('data', 'folder'), 'figures', 'TimeSeries_'+sensor.key+'.html'), show=True)
except:
pass
len(sensors)
Explanation: We create one big dataframe; the columns are the sensors.
End of explanation |
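# Added sketch (not part of the original script): one simple baseload benchmark is the
# daily minimum of a sensor's power signal. This assumes `s` still holds the last
# sensor's power series (in W) from the loop above, and uses the same old-style pandas
# resample API as the rest of the script.
daily_baseload = s.resample(rule='D', how='min')
print(daily_baseload.head())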
12,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve
For this problem we are going to work with the following model
Step2: First, generate a dataset using this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 1
Imports
End of explanation
a_true = 0.5
b_true = 2.0
c_true = -4.0
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
dy = 2.0
N = 30
xdata = np.linspace(-5,5,30)
ydata = c_true + b_true * xdata + a_true*xdata**2 + np.random.normal(0.0, dy, size=N)
plt.figure(figsize=(8,6))
plt.scatter(xdata,ydata);
plt.xlabel('x data');
plt.ylabel('y data');
plt.title('Data Points');
plt.tick_params(axis='x',top='off',direction='out');
plt.tick_params(axis='y',right='off',direction='out');
assert True # leave this cell for grading the raw data generation and plot
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
def quad_model(x,a,b,c):
return a*x**2 + b*x + c
theta_best, theta_cov = opt.curve_fit(quad_model, xdata, ydata)
a = theta_best[0]
b = theta_best[1]
c = theta_best[2]
print('a = {0:.3f} +/- {1:.3f}'.format(a, np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(b, np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(c, np.sqrt(theta_cov[2,2])))
plt.figure(figsize=(8,6))
plt.scatter(xdata,ydata,label = 'data');
plt.plot(xdata,quad_model(xdata,a,b,c),label = 'Curve Fit');
plt.xlabel('x data');
plt.ylabel('y data');
plt.title('Curve Fit');
plt.tick_params(axis='x',top='off',direction='out');
plt.tick_params(axis='y',right='off',direction='out');
plt.text(-5.5, 7, 'a = {0:.3f}'.format(a))
plt.text(-5.5, 5, 'b = {0:.3f}'.format(b))
plt.text(-5.5, 3, 'c = {0:.3f}'.format(c))
plt.legend(loc=2);
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation |
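# Added sketch (not part of the original exercise): since every point has the same known
# error dy, the residuals can be weighted explicitly; curve_fit accepts per-point
# uncertainties through `sigma`, and absolute_sigma=True keeps the covariance unscaled.
theta_w, cov_w = opt.curve_fit(quad_model, xdata, ydata, sigma=dy*np.ones(N), absolute_sigma=True)
print('weighted fit parameters:', theta_w)
print('weighted fit uncertainties:', np.sqrt(np.diag(cov_w)))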
12,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of DSLWP-B 2018-08-12 SSDV transmission
This notebook analyzes SSDV transmissions made by DSLWP-B from the Moon.
Step1: We load a file containing the relevant GMSK transmission. The recording was done at the Dwingeloo radiotelescope and can be obtained here. Remember to edit the path below to point to the correct file.
Step2: The 250bps GMSK signal is converted down to baseband and lowpass filtered to 800Hz.
Step3: Perform arctangent FSK demodulation.
Step4: Correct for phase wrapping.
Step5: We extract the soft bits by guessing the correct clock phase and decimating. No pulse shaping matched filtering has been done, and tracking of the clock frequency doesn't seem necessary either. The separation between the bits 1 and 0 is good enough for demodulation without bit errors.
Note that we correct for the clock drift and for frequency offset and drift. This simple open loop clock recovery is enough to get good separation between the bits.
Step6: Soft bits are now converted to hard bits.
Step7: We construct the CCSDS ASM used to mark the beginning of each Turbo coded word. The ASM is precoded as indicated by the CCSDS standard. See this post for more information.
Step8: We correlated the received hard bits against the precoded ASM. The length of the ASM is 63 bits, so a correlation of 63 indicates that the ASM is found without bit errors. We see that the ASM is found 4 times without any bit errors. Each of these occurences of the ASM marks the start of a Turbo codeword containing a single SSDV frame.
Step9: Note that all the ASMs have correlated without any bit errors.
Step10: We have found a total of 130 ASMs. Of these, 117 mark SSDV packets, 5 are telemetry packets transmitted before the SSDV transmission, and the remaining 8 are telemetry packets interleaved with the SSDV transmission.
Step11: We now look at the distances between the ASMs. DSLWP-B uses Turbo codewords of 3576 symbols. Since the ASM is 64 bits long and Turbo codewords are transmitted back-to-back, without any gap between them, we expect a distance of 3640 bits.
Note that before we have stated that the ASM is 63 bits long. This is because the GMSK precoder is differential, so the first bit of the ASM is not defined, as it depends on the preceding data. Thus, we only use the 63 bits of the ASM that are fixed when doing the correlation.
Below we show the distances between the ASMs. | Python Code:
%matplotlib inline
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
Explanation: Analysis of DSLWP-B 2018-08-12 SSDV transmission
This notebook analyzes SSDV transmissions made by DSLWP-B from the Moon.
End of explanation
x = np.fromfile('/home/daniel/Descargas/DSLWP-B_PI9CAM_2018-08-12T06_58_02_436.4MHz_40ksps_complex.raw', dtype='complex64')
Explanation: We load a file containing the relevant GMSK transmission. The recording was done at the Dwingeloo radiotelescope and can be obtained here. Remember to edit the path below to point to the correct file.
End of explanation
fs = 40e3
f = 1500
x = x * np.exp(-1j*2*np.pi*np.arange(x.size)*f/fs).astype('complex64')
h = scipy.signal.firwin(1000, 0.01).astype('float32')
x = scipy.signal.lfilter(h, 1, x).astype('complex64')
Explanation: The 250bps GMSK signal is converted down to baseband and lowpass filtered to 800Hz.
End of explanation
s = np.diff(np.angle(x).astype('float32'))
Explanation: Perform arctangent FSK demodulation.
End of explanation
s[s > np.pi] -= 2*np.pi
s[s < -np.pi] += 2*np.pi
Explanation: Correct for phase wrapping.
End of explanation
phase = 50
softbits = s[np.int32(np.arange(phase,s.size,160 - 0.0005))]
softbits = softbits + 1e-8 * np.arange(softbits.size) - 0.005 # correction for frequency offset and drift
plt.plot(softbits,'.')
plt.axhline(y = 0, color='green')
plt.ylim([-0.03,0.03]);
Explanation: We extract the soft bits by guessing the correct clock phase and decimating. No pulse shaping matched filtering has been done, and tracking of the clock frequency doesn't seem necessary either. The separation between the bits 1 and 0 is good enough for demodulation without bit errors.
Note that we correct for the clock drift and for frequency offset and drift. This simple open loop clock recovery is enough to get good separation between the bits.
End of explanation
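# Added check (not in the original notebook): a histogram of the soft bits shows the two
# tone clusters; a clean gap around zero means the hard decisions should be error free.
plt.hist(softbits, bins=100)
plt.show()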
bits = (softbits > 0)*1
Explanation: Soft bits are now converted to hard bits.
End of explanation
asm = np.unpackbits(np.array([0x03,0x47,0x76,0xC7,0x27,0x28,0x95,0xB0], dtype='uint8'))
asm_diff = asm[:-1] ^ asm[1:]
asm_diff[1::2] ^= 1
Explanation: We construct the CCSDS ASM used to mark the beginning of each Turbo coded word. The ASM is precoded as indicated by the CCSDS standard. See this post for more information.
End of explanation
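# Added check (not in the original notebook): unpacking the 8 ASM bytes gives 64 bits,
# and the differential precoding step leaves 63 well-defined bits for the correlation.
print(asm.size, asm_diff.size)  # expected output: 64 63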
asm_corr = scipy.signal.correlate(2*bits-1, 2*asm_diff.astype('float')-1)
plt.plot(asm_corr)
Explanation: We correlate the received hard bits against the precoded ASM. The length of the ASM is 63 bits, so a correlation of 63 indicates that the ASM is found without bit errors. We see that the ASM is found repeatedly without any bit errors. Each of these occurrences of the ASM marks the start of a Turbo codeword containing a single SSDV frame.
End of explanation
asm_corr[asm_corr > 40]
Explanation: Note that all the ASMs have correlated without any bit errors.
End of explanation
asm_corr[asm_corr > 40].size
Explanation: We have found a total of 130 ASMs. Of these, 117 mark SSDV packets, 5 are telemetry packets transmitted before the SSDV transmission, and the remaining 8 are telemetry packets interleaved with the SSDV transmission.
End of explanation
np.diff(np.where(asm_corr > 40)[0])
Explanation: We now look at the distances between the ASMs. DSLWP-B uses Turbo codewords of 3576 symbols. Since the ASM is 64 bits long and Turbo codewords are transmitted back-to-back, without any gap between them, we expect a distance of 3640 bits.
Note that earlier we stated that the ASM is 63 bits long. This is because the GMSK precoder is differential, so the first bit of the ASM is not defined, as it depends on the preceding data. Thus, we only use the 63 bits of the ASM that are fixed when doing the correlation.
Below we show the distances between the ASMs.
End of explanation |
12,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <H2>Distance from a point to a line</H2>
\begin{equation}
\frac{|Ax+By+C|}{\sqrt{A^2+B^2}}.
\end{equation}
Step3: <H2>Distance from a point to the identity line</H2>
\begin{equation}
\frac{|x - y|}{\sqrt{2}}.
\end{equation} | Python Code:
import numpy as np

def distance(mypoint, myline):
    """Calculates the distance from a point to the line Ax + By + C = 0."""
x, y = mypoint
A, B, C = myline
return np.abs(A*x + B*y + C) / np.sqrt(np.power(A,2)+np.power(B,2))
mypoint = (5,1)
myline = (3,-1, 1)
distance(mypoint, myline) # 15/np.sqrt(10) = 4.743416
mypoint = (0.25, 0.50)
myline = (1, -1, 0) # identity line
distance(mypoint, myline)
Explanation: <H2>Distance from a point to a line</H2>
\begin{equation}
\frac{|Ax+By+C|}{\sqrt{A^2+B^2}}.
\end{equation}
End of explanation
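# Added sketch (not part of the original notebook): the same formula applied to many
# points at once with numpy broadcasting; `points` is assumed to be an (N, 2) array.
def distance_many(points, myline):
    """Distance from each row of `points` to the line Ax + By + C = 0."""
    A, B, C = myline
    points = np.asarray(points)
    return np.abs(A*points[:, 0] + B*points[:, 1] + C) / np.sqrt(A**2 + B**2)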
def distance_idline(mypoint):
    """Calculates the distance from a point to the identity line."""
x, y = mypoint
return np.abs(x-y) / np.sqrt(2)
myfoo = (0.5, 0.5)
distance_idline(myfoo) # must be zero
distance_idline(mypoint) # must be 0.1767
mypoint= (0,1)
distance_idline(mypoint) #maximal separation is 1/sqrt(2)
1/np.sqrt(2) # voila!
Explanation: <H2>Distance from a point to the identity line</H2>
\begin{equation}
\frac{|x - y|}{\sqrt{2}}.
\end{equation}
End of explanation |
12,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test you model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """Returns relative error."""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
Receive inputs x and weights w
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
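# Added sketch (not the official solution): one common way to write affine_forward in
# cs231n/layers.py is to flatten each input to a row vector and apply a single matmul.
def affine_forward_sketch(x, w, b):
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache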
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
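# Added sketch (not the official solution): the corresponding backward pass undoes the
# reshape and reuses the cached inputs.
def affine_backward_sketch(dout, cache):
    x, w, b = cache
    dx = dout.dot(w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db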
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
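# Added sketch (not the official solution): the ReLU forward pass is a single
# elementwise maximum; the input is cached for the backward pass.
def relu_forward_sketch(x):
    out = np.maximum(0, x)
    cache = x
    return out, cache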
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
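# Added sketch (not the official solution): the upstream gradient passes through only
# where the cached input was positive.
def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)
    return dx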
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
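# Added sketch (not the official solution): a numerically stable softmax loss with its
# gradient; the SVM loss in cs231n/layers.py follows the same pattern with hinge margins.
def softmax_loss_sketch(x, y):
    shifted = x - x.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    N = x.shape[0]
    loss = -log_probs[np.arange(N), y].mean()
    dx = np.exp(log_probs)
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx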
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={'learning_rate': 1e-3},
lr_decay=0.95,
num_epochs=10,
batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 0.01
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 0.01
weight_scale = 0.04
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
Tuning was considerably more difficult with the five-layer net: large weight scales eventually made the loss overflow, while very small weight scales produced noisy training with no clear decrease in loss. Tuning the learning rate was not noticeably harder.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
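# Added sketch (not the official solution): the momentum update exercised by the test
# above; with the usual default momentum of 0.9 it reproduces the expected values.
def sgd_momentum_sketch(w, dw, config):
    momentum = config.get('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = momentum * v - config['learning_rate'] * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config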
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
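# Added sketches (not the official solutions): minimal versions of the two update rules
# that reproduce the expected values in the test cells above (decay_rate=0.99,
# beta1=0.9, beta2=0.999, eps=1e-8; t is incremented before the bias correction).
def rmsprop_sketch(w, dw, config):
    config['cache'] = 0.99 * config['cache'] + 0.01 * dw**2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + 1e-8)
    return next_w, config

def adam_sketch(w, dw, config):
    config['t'] += 1
    config['m'] = 0.9 * config['m'] + 0.1 * dw
    config['v'] = 0.999 * config['v'] + 0.001 * dw**2
    mt = config['m'] / (1 - 0.9**config['t'])
    vt = config['v'] / (1 - 0.999**config['t'])
    next_w = w - config['learning_rate'] * mt / (np.sqrt(vt) + 1e-8)
    return next_w, config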
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
model = FullyConnectedNet([200, 200, 200, 200, 200], weight_scale=5e-2, use_batchnorm=True, dropout=0.5)
solver = Solver(model, data,
num_epochs=40, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True,
print_every=1000)
solvers[update_rule] = solver
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
12,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing collections (Part Two)
Motivation
Review Part I
Describe what we want
Kendall's tau
Rank-biased overlap
Apply to data
Inspired by "A Similarity Measure for Indefinite Rankings", http
Step1: Review Part I
Set comparision
Looked at intersection and union
List comparision
Looked at Pearson's correlation coefficient
Ordinal (rank) comparison
Pulled tweets from users with 'mom' in bio.
created exact and approximate term frequency distributions of 1-grams in tweet bodies
Kendall's tau coefficient compares exact and approximate rankings
Over/under indexing
Pulled tweets from users with 'dad' in bio
Created exact term frequency distributions for 'mom' and 'dad' tweet corpora
Attemped to find a function that de-emphasizes common terms and emphasizes un-shared, highly ranked terms
So what do we want to do?
We want to compare two lists of objects and their counts. This is a common need
when comparing counts of hashtags, n-grams, locations, etc.
We want the ability to act on lists that are
Step2: Finally
Step7: Apply it!
Lesson
Step8: Do better n-gram extraction
better stopword list removes unimportant words from list of top-ranked n-grams
better lemmatization and normalization removes duplicates
Step9: Simply by looking at this list, we can see other avenues for improving ngram extraction.
do we include handles?
should we remove RTs?
what about emoji?
minimum token length?
For now, we won't go down these paths.
Try Kendall's tau
Step10: Try RBO | Python Code:
import yaml
import time
import operator
import string
import re
import csv
import random
import nltk.tokenize
from sklearn.feature_extraction import text
import twitter
import scipy
Explanation: Comparing collections (Part Two)
Motivation
Review Part I
Describe what we want
Kendall's tau
Rank-biased overlap
Apply to data
Inspired by "A Similarity Measure for Indefinite Rankings", http://www.williamwebber.com/research/papers/wmz10_tois.pdf
Motivation
We often want to programmatically describe how two Twitter corpora differ or are the same. A common method for doing this is to count things and rank them by their counts.
End of explanation
## self-correlation
a = [i for i in range(20)]
scipy.stats.kendalltau(a,a).correlation
## remember that the rows need not be ordered
# shuffle in place
random.shuffle(a)
scipy.stats.kendalltau(a,a).correlation
## anti-correlation
a = [i for i in range(20)]
b = list(reversed(a))
scipy.stats.kendalltau(a,b).correlation
## random case
# correlation will average 0 and get closer to 0 for large lists
a = [i for i in range(1000)]
b = random.sample(a, k=len(a))
scipy.stats.kendalltau(a,b).correlation
## ties
# scipy implementation uses:
# https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient#Tau-b
a = [i for i in range(10)]
b = [i for i in range(10)]
# items in list b at indices 0 and 1 will both have rank 1 (zero-based)
b[5] = 1
print(a)
print(b)
scipy.stats.kendalltau(a,b).correlation
Explanation: Review Part I
Set comparison
Looked at intersection and union
List comparison
Looked at Pearson's correlation coefficient
Ordinal (rank) comparison
Pulled tweets from users with 'mom' in bio.
created exact and approximate term frequency distributions of 1-grams in tweet bodies
Kendall's tau coefficient compares exact and approximate rankings
Over/under indexing
Pulled tweets from users with 'dad' in bio
Created exact term frequency distributions for 'mom' and 'dad' tweet corpora
Attempted to find a function that de-emphasizes common terms and emphasizes un-shared, highly ranked terms
So what do we want to do?
We want to compare two lists of objects and their counts. This is a common need
when comparing counts of hashtags, n-grams, locations, etc.
We want the ability to act on lists that are:
* non-conjoint: lists may contain different elements
* incomplete: all elements in the list are not present or not analyzed
* indefinite: the cutoff for the incomplete list is essentially arbitrary
We also want to apply weighting, so that the comparison emphasizes the elements with the highest counts.
For output, we want to:
* calculate a similarity score
* highlight differences (save for next time)
Kendall's tau correlation coefficient, again
Kendall's tau is a rank correlation coefficient that compares two sequences of rankings.
Specifically, it is a function of the number of concordant and the number of discordant pairs.
In our case, the data might look something like:
ngram | rank in corpus 1 | rank in corpus 2
--- | --- | ---
dad | 1 | 2
parent | 2 | 3
lol | 3 | 1
know | 4 | 4
The 'dad-parent' pair is concordant (C) because dad ranks above parent in both lists (1 < 2 and 2 < 3), while the 'parent-lol' pair
is discordant (D) because parent ranks above lol in the first list (2 < 3) but below it in the second (3 > 1). The 'dad-lol' pair is likewise discordant,
while the 'dad-know', 'lol-know', and 'parent-know' pairs are concordant.
The un-normalized tau coefficient is C - D, which is 4 - 2 = 2 in this case. The normalized version is:
$\tau = \frac{C - D}{n(n-1)/2}$,
where C (D) is the number of concordant (discordant) pairs, and n is the length of the ranking list(s). This gives us $\tau = 2/6 \approx 0.33$.
The scipy implementation of this coefficient accounts for ties (multiple entries with the same rank).
Let's look at some common test points for the measure.
End of explanation
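# Added check (not in the original notebook): the worked example above written as rank
# lists; with 4 concordant and 2 discordant pairs, tau should come out to 1/3.
print(scipy.stats.kendalltau([1, 2, 3, 4], [2, 3, 1, 4]).correlation)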
from rbo import rbo
# elements in the list can be any object
rbo(['c', 'b', 'd'], ['a', 'c', 'd'], p=.5)
# self-similarity
a = [i for i in range(20)]
rbo(a,a,p=0.9)
# order doesn't matter
random.shuffle(a)
rbo(a,a,p=0.9)
# we are comparing ordered lists of objects, not rankings
a = [i for i in string.punctuation]
rbo(a,a,p=0.9)
# reversed case
a = [i for i in string.punctuation]
b = list(reversed(a))
print(a)
print(b)
rbo(a,b,p=0.9)
# random comparison
a = [i for i in string.punctuation]
b = random.sample(a, k=len(a))
rbo(a,b,p=0.9)
Explanation: Finally:
Kendall's tau is not defined for non-conjoint lists, meaning that it won't work for most incomplete lists.
Rank-biased Overlap
Rank-biased overlap (RBO) is based on the idea that the overlap (or size of the intersection) of two sets is a good, simple starting point for similarity measures. We apply this to ordinal lists by calculating the overlap at varying depths and cleverly aggregating the results. Importantly, this method does not depend on elements being in both lists.
For ordinal lists S and T, the agreement (A) at depth k is given in terms of the overlap (X, the size of the intersection) between S and T at depth k.
$A_{S,T,k} = \frac{X_{S,T,k}}{k}$
The average overlap for $1 \leq k \leq d$ gives a decent similarity measure.
If you make it a weighted average and choose your weights to be elements of a geometric series on parameter p, you can take $d \to \infty$ and you have a similarity measure r bounded by 0 and 1 and controlled by a single parameter, p. Values of p close to 0 emphasize agreement between highly ranked elements, while larger values of p emphasize a broader range of agreement.
For truncated lists, you can calculate exactly the minimum (min) and maximum value that r can take on, given the unknown values lost in truncation. This is usually quoted in terms of min and the residual difference between the minimum and maximum (res).
For truncated lists, the base score (r) is a function of the cutoff depth (d) and can not actually reach 1. We can instead extrapolate from the visible lists and calculate $r_{ext}$ such that it has the range [0-1].
End of explanation
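# Added sketch (not in the original notebook): the plain, unweighted average overlap at
# depth d, which is the quantity RBO generalizes with geometric weights.
def average_overlap(S, T, d):
    overlaps = [len(set(S[:k]) & set(T[:k])) / float(k) for k in range(1, d + 1)]
    return sum(overlaps) / float(d)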
## Get Tweets from the Twitter public API
# Get your Twitter API tokens
# this is specific to my computer; modify for yours
my_creds_file = '/Users/jkolb/.twitter_api_creds'
creds = yaml.load(open(my_creds_file))
consumer_key = creds['audience']['consumer_key']
consumer_secret = creds['audience']['consumer_secret']
access_token_key = creds['audience']['token']
access_token_secret = creds['audience']['token_secret']
api = twitter.Api(consumer_key=consumer_key,
consumer_secret=consumer_secret,
access_token_key=access_token_key,
access_token_secret=access_token_secret
)
mom_tweets = []
for _ in range(20):
mom_tweets.extend( api.GetSearch("mom",count=100) )
time.sleep(1)
dad_tweets = []
for _ in range(20):
dad_tweets.extend( api.GetSearch("dad",count=100) )
time.sleep(1)
mum_tweets = []
for _ in range(20):
mum_tweets.extend( api.GetSearch("mum",count=100) )
time.sleep(1)
## Get Tweets from the Gnip Search API
from search.api import Query
import json
import yaml
creds = yaml.load(open('/Users/jkolb/.creds.yaml'))
# set up a query to the Gnip Search API
q = Query(creds['username'],
creds['password'],
creds['search_endpoint'],
paged=True,
hard_max = 2000, ## <--- control tweet volume here
)
# query parameters
start_date = '2017-06-01T00:00'
end_date = '2017-06-03T00:00'
# get the tweet data
rule = 'mom'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
mom_tweets = list(q.get_activity_set())
rule = 'dad'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
dad_tweets = list(q.get_activity_set())
rule = 'mum'
rule += ' -is:retweet'
q.execute(rule,start=start_date,end=end_date)
mum_tweets = list(q.get_activity_set())
Explanation: Apply it!
Lesson: most vocabularies on Twitter (1-grams, 2-grams, hashtags) for a location, date, etc. are sparsely populated, meaning that the rankings are largely non-conjoint. When comparing 2 rankings with the similarity measurements described above, it's hard to know when small similarity differences are due to statistics, platform differences, or real textual differences.
Two paths forward:
draw random samples from a single corpus to create a distribution for the null hypothesis
create a small vocabulary
Get tweets
Let's collect 3 data sets matching 3 keywords: "mom", "dad", and "mum", hypothesizing that a good similarity measurement will show "mom" and "mum" to be more similar to each other than "mom" and "dad" are.
End of explanation
## get tweet bodies
dad_bodies = [tweet['body'] for tweet in dad_tweets]
mom_bodies = [tweet['body'] for tweet in mom_tweets]
mum_bodies = [tweet['body'] for tweet in mum_tweets]
## create a tweet tokenizer and stopword list
my_additional_stop_words = ['https','rt']
my_additional_stop_words.extend(string.punctuation)
stop_words = text.ENGLISH_STOP_WORDS.union(my_additional_stop_words)
tokenizer = nltk.tokenize.TweetTokenizer(preserve_case=False, reduce_len=True)
## make vectorizers
dad_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
dad_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mom_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mom_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mum_ngram_vectorizer = text.CountVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
mum_ngram_vectorizer_idf = text.TfidfVectorizer(lowercase=True,
stop_words=stop_words,
ngram_range=(1,2),
tokenizer = tokenizer.tokenize,
min_df = 2,
)
# helper functions
def ngram_freq_from_dtmatrix(dtmatrix,col_names):
return dict([(ngram,dtmatrix.getcol(icol).toarray().sum()) for icol,ngram in enumerate(col_names)])
def ranked_tuples_from_ngram_freq(term_freq_dict):
return list(reversed(sorted(term_freq_dict.items(),key=operator.itemgetter(1))))
## get top ranked ngrams for 'dad' tweets
dad_dtmatrix = dad_ngram_vectorizer.fit_transform(dad_bodies)
dad_ngrams = dad_ngram_vectorizer.get_feature_names()
dad_tf_dict = ngram_freq_from_dtmatrix(dad_dtmatrix,dad_ngrams)
dad_ngrams_ranked = ranked_tuples_from_ngram_freq(dad_tf_dict)
## get top ranked ngrams for 'mom' tweets
mom_dtmatrix = mom_ngram_vectorizer.fit_transform(mom_bodies)
mom_ngrams = mom_ngram_vectorizer.get_feature_names()
mom_tf_dict = ngram_freq_from_dtmatrix(mom_dtmatrix,mom_ngrams)
mom_ngrams_ranked = ranked_tuples_from_ngram_freq(mom_tf_dict)
## get top ranked ngrams for 'mum' tweets
mum_dtmatrix = mum_ngram_vectorizer.fit_transform(mum_bodies)
mum_ngrams = mum_ngram_vectorizer.get_feature_names()
mum_tf_dict = ngram_freq_from_dtmatrix(mum_dtmatrix,mum_ngrams)
mum_ngrams_ranked = ranked_tuples_from_ngram_freq(mum_tf_dict)
# sanity check
dad_ngrams_ranked[:20]
Explanation: Do better n-gram extraction
better stopword list removes unimportant words from list of top-ranked n-grams
better lemmatization and normalization removes duplicates
End of explanation
## now let's extract the rankings and compare
# probably want to cut off the rankings somewhere
cutoff = 10000
final_cutoff = 300
# get the (ngram,rank) lists
dad_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(dad_ngrams_ranked[:cutoff])}
mom_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(mom_ngrams_ranked[:cutoff])}
mum_ngram_ranks = {ngram:rank for rank,(ngram,count) in enumerate(mum_ngrams_ranked[:cutoff])}
# get the rank lists
# NB: if cutoff lists are not conjoint (they probably aren't),
# you'll have to choose one list as a reference
dad_ranks = []
mom_ranks = []
mum_ranks = []
data = []
for ngram,mom_rank in mom_ngram_ranks.items():
try:
dad_rank = dad_ngram_ranks[ngram]
except KeyError:
# for elements not in list, rank them last
dad_rank = cutoff
    try:
        mum_rank = mum_ngram_ranks[ngram]
    except KeyError:
        # for elements not in list, rank them last
        mum_rank = cutoff
if mom_rank < final_cutoff:
dad_ranks.append(dad_rank)
mom_ranks.append(mom_rank)
mum_ranks.append(mum_rank)
data.append((ngram,mom_rank,mum_rank,dad_rank))
dad_mom_tau = scipy.stats.kendalltau(dad_ranks,mom_ranks).correlation
mum_mom_tau = scipy.stats.kendalltau(mum_ranks,mom_ranks).correlation
print('Tau')
print('cutoff = ' + str(final_cutoff))
print('mom-dad: ' + str(dad_mom_tau))
print('mom-mum: ' + str(mum_mom_tau))
Explanation: Simply by looking at this list, we can see other avenues for improving ngram extraction.
do we include handles?
should we remove RTs?
what about emoji?
minimum token length?
For now, we won't go down these paths.
Try Kendall's tau
End of explanation
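A minimal, hypothetical sketch of the token-filtering ideas listed above (dropping @-handles, emoji-only tokens and very short tokens); it is not applied in this analysis.
def filter_tokens_sketch(tokens, min_len=2):
    # keep tokens that are not @-handles, contain at least one
    # alphanumeric character, and meet a minimum length
    kept = []
    for tok in tokens:
        if tok.startswith('@'):
            continue
        if not any(ch.isalnum() for ch in tok):
            continue
        if len(tok) < min_len:
            continue
        kept.append(tok)
    return kept
# e.g. filter_tokens_sketch(tokenizer.tokenize('@user loved it rt'))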
mom_top_ngrams = [ngram for ngram,ct in mom_ngrams_ranked][:final_cutoff]
mum_top_ngrams = [ngram for ngram,ct in mum_ngrams_ranked][:final_cutoff]
dad_top_ngrams = [ngram for ngram,ct in dad_ngrams_ranked][:final_cutoff]
mum_mom_rbo = rbo(mom_top_ngrams,mum_top_ngrams,p=0.9)['ext']
dad_mom_rbo = rbo(mom_top_ngrams,dad_top_ngrams,p=0.9)['ext']
print('RBO')
print('cutoff = ' + str(final_cutoff))
print('mom-dad: ' + str(dad_mom_rbo))
print('mom-mum: ' + str(mum_mom_rbo))
Explanation: Try RBO
End of explanation |
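A hedged sketch of the first path listed earlier (a null distribution for the similarity score built by randomly splitting a single corpus in half and comparing the two resulting rankings); it reuses the helpers and settings defined above, the function name is illustrative, and it is not part of the original analysis.
import random
def rbo_null_distribution_sketch(bodies, n_iter=20, top_k=300, p=0.9):
    # repeatedly split one corpus into random halves, rank n-grams in each
    # half, and record the RBO between the two rankings
    null_scores = []
    for _ in range(n_iter):
        shuffled = list(bodies)
        random.shuffle(shuffled)
        half = len(shuffled) // 2
        rankings = []
        for part in (shuffled[:half], shuffled[half:]):
            vec = text.CountVectorizer(lowercase=True,
                                       stop_words=stop_words,
                                       ngram_range=(1, 2),
                                       tokenizer=tokenizer.tokenize,
                                       min_df=2)
            dtm = vec.fit_transform(part)
            tf_dict = ngram_freq_from_dtmatrix(dtm, vec.get_feature_names())
            ranked = [ng for ng, ct in ranked_tuples_from_ngram_freq(tf_dict)]
            rankings.append(ranked[:top_k])
        null_scores.append(rbo(rankings[0], rankings[1], p=p)['ext'])
    return null_scores
# e.g. null_rbo_scores = rbo_null_distribution_sketch(mom_bodies, n_iter=10)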
12,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: Set the random seed
Step2: Use this function for plotting
Step3: <a id="ref0"></a>
<h2 align=center>Make Some Data </h2>
Create a dataset class with two-dimensional features
Step4: Create a dataset object
Step5: <a id="ref1"></a>
<h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2>
Create a custom module
Step6: Create a model. Use two features
Step7: Display the parameters
Step8: Create an optimizer object. Set the learning rate to 0.1. Don't forget to enter the model parameters in the constructor.
<img src = "https
Step9: Create the criterion function that calculates the total loss or cost
Step10: Create a data loader object. Set the batch_size equal to 2
Step11: <a id="ref2"></a>
<h2 align=center>Train the Model via Mini-Batch Gradient Descent </h2>
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost
Step12: <a id="ref3"></a>
<h2>Practice Questions</h2>
Create a new <code>model1</code>. Train the model with a batch size of 30, store the loss or total cost in a list <code>LOSS1</code>, and plot the results.
Double-click here for the solution.
<!-- Your answer is below | Python Code:
from torch import nn,optim
import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from torch.utils.data import Dataset, DataLoader
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>Multiple Linear Regression </font></h1>
# Table of Contents
In this lab, you will create a model the PyTorch way. This will help you build more complicated models later.
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">Make Some Data</a></li>
<li><a href="#ref1">Create the Model and Cost Function the Pytorch way</a></li>
<li><a href="#ref2">Train the Model: Batch Gradient Descent</a></li>
<li><a href="#ref3">Practice Questions</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>20 min</strong>
</div>
<hr>
Import the following libraries:
End of explanation
torch.manual_seed(1)
Explanation: Set the random seed:
End of explanation
def Plot_2D_Plane(model,dataset,n=0):
from mpl_toolkits.mplot3d import Axes3D
w1=model.state_dict()['linear.weight'].numpy()[0][0]
    w2=model.state_dict()['linear.weight'].numpy()[0][1]
b=model.state_dict()['linear.bias'].numpy()
#data
    x1 = dataset.x[:,0].view(-1,1).numpy()
    x2 = dataset.x[:,1].view(-1,1).numpy()
    y = dataset.y.numpy()
#make plane
X, Y = np.meshgrid(np.arange(x1.min(), x1.max(), 0.05), np.arange(x2.min(), x2.max(), 0.05))
yhat = w1*X+w2*Y+b
#plotting
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(x1[:,0],x2[:,0],y[:,0],'ro',label='y') #scatter plot
ax.plot_surface(X,Y,yhat) #plane plot
ax.set_xlabel('x1 ')
ax.set_ylabel('x2 ')
ax.set_zlabel('y')
#ax.set_ylim((y.min()-3, y.max()+3))
plt.title('estimated plane iteration:'+str(n))
ax.legend()
plt.show()
Explanation: Use this function for plotting:
End of explanation
from torch.utils.data import Dataset, DataLoader
class Data2D(Dataset):
def __init__(self):
self.x=torch.zeros(20,2)
self.x[:,0]=torch.arange(-1,1,0.1)
self.x[:,1]=torch.arange(-1,1,0.1)
self.w=torch.tensor([ [1.0],[1.0]])
self.b=1
self.f=torch.mm(self.x,self.w)+self.b
self.y=self.f+0.1*torch.randn((self.x.shape[0],1))
self.len=self.x.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
Explanation: <a id="ref0"></a>
<h2 align=center>Make Some Data </h2>
Create a dataset class with two-dimensional features:
End of explanation
data_set=Data2D()
Explanation: Create a dataset object:
End of explanation
class linear_regression(nn.Module):
def __init__(self,input_size,output_size):
super(linear_regression,self).__init__()
self.linear=nn.Linear(input_size,output_size)
def forward(self,x):
yhat=self.linear(x)
return yhat
Explanation: <a id="ref1"></a>
<h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2>
Create a custom module:
End of explanation
model=linear_regression(2,1)
Explanation: Create a model. Use two features: make the input size 2 and the output size 1:
End of explanation
print(list(model.parameters()))
Explanation: Display the parameters:
End of explanation
optimizer = optim.SGD(model.parameters(), lr = 0.1)
Explanation: Create an optimizer object. Set the learning rate to 0.1. Don't forget to enter the model parameters in the constructor.
<img src = "https://ibm.box.com/shared/static/f8hskuwrnctjg21agud69ddla0jkbef5.png" width = 100, align = "center">
End of explanation
criterion = nn.MSELoss()
Explanation: Create the criterion function that calculates the total loss or cost:
End of explanation
train_loader=DataLoader(dataset=data_set,batch_size=2)
Explanation: Create a data loader object. Set the batch_size equal to 2:
End of explanation
LOSS=[]
Plot_2D_Plane(model,data_set)
epochs=100
for epoch in range(epochs):
for x,y in train_loader:
#make a prediction
yhat=model(x)
#calculate the loss
loss=criterion(yhat,y)
#store loss/cost
LOSS.append(loss.item())
#clear gradient
optimizer.zero_grad()
#Backward pass: compute gradient of the loss with respect to all the learnable parameters
loss.backward()
#the step function on an Optimizer makes an update to its parameters
optimizer.step()
Plot_2D_Plane(model,data_set,epoch)
plt.plot(LOSS)
plt.xlabel("iterations ")
plt.ylabel("Cost/total loss ")
Explanation: <a id="ref2"></a>
<h2 align=center>Train the Model via Mini-Batch Gradient Descent </h2>
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost:
End of explanation
torch.manual_seed(2)
validation_data=Data2D()
Y=validation_data.y
X=validation_data.x
Explanation: <a id="ref3"></a>
<h2>Practice Questions</h2>
Create a new <code>model1</code>. Train the model with a batch size of 30, store the loss or total cost in a list <code>LOSS1</code>, and plot the results.
Double-click here for the solution.
<!-- Your answer is below:
train_loader=DataLoader(dataset=data_set,batch_size=30)
model1=linear_regression(2,1)
optimizer = optim.SGD(model1.parameters(), lr = 0.1)
LOSS1=[]
epochs=100
for epoch in range(epochs):
for x,y in train_loader:
#make a prediction
yhat=model1(x)
#calculate the loss
loss=criterion(yhat,y)
LOSS1.append(loss.item())
#clear gradient
optimizer.zero_grad()
#Backward pass: compute gradient of the loss with respect to all the learnable parameters
loss.backward()
#the step function on an Optimizer makes an update to its parameters
optimizer.step()
plt.plot(LOSS1)
plt.xlabel("iterations ")
plt.ylabel("Cost/total loss ") -->
Use the following validation data to calculate the total loss or cost for both models:
End of explanation |
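A minimal sketch of one way to use this validation data, assuming model (trained with batch size 2 above) and model1 (created by working through the previous practice question) are both available.
yhat = model(X)
print('validation cost, model (batch size 2): ', criterion(yhat, Y).item())
yhat1 = model1(X)
print('validation cost, model1 (batch size 30):', criterion(yhat1, Y).item())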
12,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correlation of DCO2 and weight-corrected DCO2 with PCO2
This file contains the code used for data processing, statistical analysis and visualization described in the following paper
Step1: List and set the working directory and the directory to write out data
Step2: List of recordings
Step3: Initialize directories, analysis settings, and visualization markers
Step4: Import clinical details
Step5: Define function to import data
Step6: Import ventilator modes and settings
Step7: Import DCO2 data
Step8: Clean up DCO2 data
Step9: Write out and import processed HFOV data to binary "pickle" file
Step10: Calculate HFOV periods
Step11: Calculate descriptive statistics on DCO2 data over the whole duration of the recordings and write them to file
Step12: Import blood gases
Step13: Change the index of PCO2s into single index format
Step14: Create a dictionary containing the time of blood gases taken during HFOV recording
Step15: Calculate DCO2s and weight-corrected DCO2s for 10 minute periods before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries and write it to multisheet Excel files.
Step16: Combine all DCO2 and DCO2_corr data into single dataframes (one for each) and write them to excel files
standard DCO2
Step17: weight_corrected DCO2
Step18: Write cumulative data to excel files and binary pickle files
Step19: Calculate weight-corrected high frequency tidal volumes (VThfs) before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries
Step20: Combine all VThfs data into single dataframe
Step21: Calculate weight-square-corrected VThfs before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries
Step22: Combine all VThf-squared data into single dataframe
Step23: Create DataFrames with leak data together with the mean DCO2, DCO2_corr and with PCO2 data
Step24: Combine all leak data into single dataframes (one for each) and write them to excel files
Step25: Define functions to visualise data
Define function to calculate correlation
Step26: Define functions to visualize individual recordings and to save them to files
Step27: Visualise data from individual recordings where at least ten data points are available
All blood gases
These graphs are shown in Supplementary Figure 3 of the paper.
Step28: Arterial gases only
Step29: Create ROC curve
Step30: Select blood gases with DCO2_corr values over 60 mL2/kg2/sec
All gases
Step31: Arterial blood gases
Step32: Generate figures for the manuscript
Step33: Figures in the main article
Figure 1
Step34: Figure 2
Step35: Figure 3
Step36: Figure 4
Step37: Figure 5
Step38: Supplementary figures
Supplementary Figure 1
Step39: Supplementary Figure 2
Step40: For Supplementary Figure 3 of the paper please see the functions creating the individual graphs above.
Supplementary Figure 4
Step41: Supplementary Figure 5
Step42: Supplementary Figure 6
Step43: Supplementary Figure 7 | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import re
import operator
import warnings
from pandas import Series, DataFrame
from scipy.stats.stats import pearsonr
from scipy.stats import ttest_rel
from datetime import datetime, timedelta
from pprint import pprint
%matplotlib inline
pd.set_option('display.max_rows', 20)
pd.set_option('display.max_columns', 50)
warnings.simplefilter('ignore')
Explanation: Correlation of DCO2 and weight-corrected DCO2 with PCO2
This file contains the code used for data processing, statistical analysis and visualization described in the following paper: "Weight-correction of carbon dioxide diffusion coefficient (DCO2) reduces its inter-individual variability and improves its correlation with blood carbon dioxide levels in neonates receiving high-frequency oscillatory ventilation." Pediatric Pulmonology, 2017;1–7. https://doi.org/10.1002/ppul.23759
Authors: Gusztav Belteki MD, PhD, FRCPCH, Benjamin Lin BA, Colin Morley MD, FRCPCH
Contact: [email protected]
The outputs (numbers, tables, graphs) of this Notebook have been suppressed to comply with copyrights. The corresponding data and graphs can be found in the paper.
Import the required modules and libraries and set options
End of explanation
# Directory to read the data from
%cd /Users/guszti/ventilation_data
cwd = os.getcwd()
# Directory to write the data to
dir_write = '/Users/guszti/ventilation_data/Analyses/DCO2/revision'
Explanation: List and set the working directory and the directory to write out data
End of explanation
# Part or all of these recordings is HFOV
recordings = [ 'DG005_1', 'DG005_2', 'DG005_3', 'DG006_1', 'DG009', 'DG016', 'DG017', 'DG018_1',
'DG020', 'DG022', 'DG025', 'DG027', 'DG032_2', 'DG038_1', 'DG038_2', 'DG040_1', 'DG040_2', 'DG046_1']
Explanation: List of recordings
End of explanation
clinical_details = {}
current_weights = {}
blood_gases = {}
pCO2s = {}
sedation = {}
vent_modes = {}
vent_modes_selected = {} # only important mode parameters are kept in this one
vent_settings = {}
vent_settings_selected = {} # only important mode parameters are kept in this one
slow_measurements = {} # 'slow measurements' are the ventilator parameters obtained with 1/sec frequency
slow_measurements_hfov = {} # this only contains those parts of 'slow measurements' which do have a DCO2 reading
# and therefore are hfov ventilation periods
# Analysing DCO2s over "interval" minutes stopping "offset" minutes prior to the time of the blood gas
interval = 10 # 10 minutes
offset = 2 # 2 minutes
# color and marker set for visualization
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'magenta', 'maroon', 'navy', 'salmon', 'silver', 'tomato',
'violet', 'orchid', 'peru', 'wheat', 'lime', 'limegreen']
markers = [".", "+", ",", "x", "o", "D", "d", "8", "s", "p", "*", "h", 0, 4, "<", "3",
1, 5, ">", "4", 2, 6, "^", "2", 3, 7, "v", "1"]
Explanation: Initialize directories, analysis settings, and visualization markers
End of explanation
for recording in recordings:
clinical_details[recording] = pd.read_excel('%s/%s' % (cwd, 'data_grabber_patient_data.xlsx'),
sheetname = recording[:5])
# If there are multiple recordings for the same patient this imports the weight relevant for the given recording
for recording in recordings:
w = 1 if len(recording) == 5 else int(recording[-1])
current_weights[recording] = clinical_details[recording].ix[4, w]/1000
Explanation: Import clinical details
End of explanation
def data_loader(lst):
'''
lst -> DataFrame
- Takes a list of csv files (given as filenames with absolute paths) and import them as a dataframes
- Combines the 'Date' and 'Time' columns into a Datetime index while keeping the original columns
- If there are more files it concatenates them into a single dataframe.
- It returns the concatenated data as one DataFrame.
'''
data = []
for i, filename in enumerate(lst):
data.append(pd.read_csv(lst[i], keep_date_col = 'True', parse_dates = [['Date', 'Time']]))
data = pd.concat(data)
data.index = data.Date_Time
return data
Explanation: Define function to import data
End of explanation
def slow_setting_finder(lst):
'''
list -> list
Takes a list of filenames and returns those ones that end with '_slow_Setting.csv'
'''
return [n for n in lst if n.endswith('_slow_Setting.csv')]
def slow_text_finder(lst):
'''
list -> list
Takes a list of filenames and returns those ones that end with '_slow_Text.csv'
'''
return [n for n in lst if n.endswith('_slow_Text.csv')]
for recording in recordings:
flist = os.listdir('%s/%s' % (cwd, recording))
files = slow_setting_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (cwd, recording, filename) for filename in files]
vent_settings[recording] = data_loader(fnames)
# remove less important ventilator settings to simplify the table
for recording in recordings:
a = vent_settings[recording][vent_settings[recording].Id != 'FiO2']
a = a[a.Id != 'Ø tube']
a = a[a.Id != 'Tapn']
a = a[a.Id != 'ΔPsupp']
a = a[a.Id != 'Tube Æ']
a = a[a.Id != 'RRsigh']
a = a[a.Id != 'Psigh']
a = a[a.Id != 'Slopesigh']
a = a[a.Unit != 'L']
a = a[a.Id != 'I (I:E)']
a = a[a.Id != 'E (I:E)']
a = a[a.Id != 'MVlow delay']
a = a[a.Id != 'MVhigh delay']
a.drop_duplicates(["Rel.Time [s]", "Name"], inplace = True)
vent_settings_selected[recording] = a
for recording in recordings:
flist = os.listdir('%s/%s' % (cwd, recording))
files = slow_text_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (cwd, recording, filename) for filename in files]
vent_modes[recording] = data_loader(fnames)
# remove less important ventilator mode settings to simplify the table
for recording in recordings:
a = vent_modes[recording][vent_modes[recording].Id != 'Device is in neonatal mode']
a = a[a.Id != 'Device is in neonatal mode']
a = a[a.Id != 'Selected CO2 unit is mmHg']
a = a[a.Id != "Selected language (variable text string transmitted with this code, see 'MEDIBUS.X, Rules and Standards for Implementation' for further information)"]
a = a[a.Id != 'Device configured for intubated patient ventilation']
a = a[a.Id != 'Active humid heated']
a = a[a.Id != 'Audio pause active']
a = a[a.Id != 'Active humid unheated']
a = a[a.Id != 'Suction maneuver active']
a.drop_duplicates(["Rel.Time [s]", "Id"], inplace = True)
vent_modes_selected[recording] = a
Explanation: Import ventilator modes and settings
End of explanation
def slow_measurement_finder(lst):
'''
list -> list
Takes a list of filenames and returns those ones that end with 'Measurement.csv''
'''
return [n for n in lst if n.endswith('_slow_Measurement.csv')]
for recording in recordings:
flist = os.listdir('%s/%s' % (cwd, recording))
files = slow_measurement_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (cwd, recording, filename) for filename in files]
slow_measurements[recording] = data_loader(fnames)
Explanation: Import DCO2 data
End of explanation
# resampling to adjust the time stamps to full seconds
for recording in recordings:
slow_measurements[recording] = slow_measurements[recording].resample('1S').mean()
# Remove rows which have no DCO2 data - this keeps HFOV periods only
for recording in recordings:
try:
slow_measurements_hfov[recording] = slow_measurements[recording].dropna(subset = ['5001|DCO2 [10*mL^2/s]'])
except KeyError:
print('%s cannot be converted' % recording )
for recording in recordings:
# Create a column in the DataFrame with the DCO2 data. The original values need to be multiplied by 10
# as in the the downloaded data they are expressed as 1/10th of the DCO2 readings (see original column labels)
slow_measurements_hfov[recording]['DCO2'] = \
DataFrame(slow_measurements_hfov[recording]['5001|DCO2 [10*mL^2/s]'] * 10)
# this normalizes the DCO2 to the body weight. DCO2 = f x VT x VT. If VT is expressed as mL/kg then DCO2_corr
# should be 1/sec x mL/kg x mL/kg = mL^2 / kg^2 / sec
slow_measurements_hfov[recording]['DCO2_corr'] = \
DataFrame(slow_measurements_hfov[recording]['5001|DCO2 [10*mL^2/s]'] * 10 / (current_weights[recording] ** 2))
# This column normalizes the VThf to body weight
slow_measurements_hfov[recording]['VThf_kg'] = \
DataFrame(slow_measurements_hfov[recording]['5001|VThf [mL]'] / current_weights[recording])
    # This column squares the weight-normalized VThf (VThf per kg, squared)
slow_measurements_hfov[recording]['VThf_kg_2'] = slow_measurements_hfov[recording]['VThf_kg']**2
Explanation: Clean up DCO2 data
End of explanation
for recording in recordings:
print('%s is being pickled' % recording)
slow_measurements_hfov[recording].to_pickle('%s/%s_slow_measurements_hfov.p' % (dir_write, recording))
# importing from pickles
for recording in recordings:
slow_measurements_hfov[recording] = pd.read_pickle('%s/%s_slow_measurements_hfov.p' % (dir_write, recording))
Explanation: Write out and import processed HFOV data to binary "pickle" file
End of explanation
HFOV_periods = {}
for recording in recordings:
start = slow_measurements_hfov[recording].index[0]
end = slow_measurements_hfov[recording].index[-1]
HFOV_periods[recording] = [start, end]
# How long are the HFOV periods (in seconds) ?
hfov_durations = {}
for recording in recordings:
hfov_durations[recording] = HFOV_periods[recording][1] - HFOV_periods[recording][0]
Explanation: Calculate HFOV periods
End of explanation
def stats_calculator(obj, col):
'''
in: obj (Series or DataFrame), col (str)
    out: tuple of size 9
    Calculates 9 descriptive statistics (count, mean, standard deviation, median, mad, min, 25th centile, 75th centile, max) for the data in the 'col' column of a series or dataframe
'''
a = obj[col]
return (a.size, a.mean(), a.std(), a.median(), a.mad(),
a.min(), a.quantile(0.25), a.quantile(0.75), a.max())
# Descriptive statistics on uncorrected DCO2 data
DCO2_stats_all = []
for recording in recordings:
stats = (stats_calculator(slow_measurements_hfov[recording], 'DCO2'))
frame = DataFrame([stats], columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad',
'min', '25pc', '75pc', 'max', ], index = [recording])
DCO2_stats_all.append(frame)
DCO2_stats_all = pd.concat(DCO2_stats_all)
DCO2_stats_all['hours'] = DCO2_stats_all['number_of_seconds'] / 3600
# Descriptive statistics on weight-squared-corrected DCO2 data
DCO2_corr_stats_all = []
for recording in recordings:
stats = (stats_calculator(slow_measurements_hfov[recording], 'DCO2_corr'))
frame = DataFrame([stats], columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad',
'min', '25pc', '75pc', 'max', ], index = [recording])
DCO2_corr_stats_all.append(frame)
DCO2_corr_stats_all = pd.concat(DCO2_corr_stats_all)
Explanation: Calculate descriptive statistics on DCO2 data over the whole duration of the recordings and write them to file
End of explanation
for recording in recordings:
blood_gases[recording] = pd.read_excel('%s/%s' % (cwd, 'data_grabber_gases.xlsx'),
sheetname = recording[:5], header = None)
blood_gases[recording] = DataFrame(blood_gases[recording].T)
blood_gases[recording].columns = blood_gases[recording].ix[0]
blood_gases[recording] = blood_gases[recording][1:]
blood_gases[recording].index = [blood_gases[recording]['Date:'], blood_gases[recording]['Time:']]
pCO2s = {}
for recording in recordings:
pCO2s[recording] = blood_gases[recording][['pCO2, POC', 'Blood specimen type, POC']]
Explanation: Import blood gases
End of explanation
# Change the index of pCO2s into single index format
for recording in recordings:
# print('processing CO2 data for %s' % recording)
time_list_all = []
for i in range(len(pCO2s[recording])):
day = str(pCO2s[recording].index[i][0])[:10]
time = str(pCO2s[recording].index[i][1])
date_time = day + ' ' + time
time_list_all.append(date_time)
pCO2s[recording].index = time_list_all
# Convert the indices of the pCO2s DataFrames to datetime index
for recording in recordings:
try:
pCO2s[recording].index = pd.to_datetime(pCO2s[recording].index)
except:
print('%s cannot be converted' % recording)
Explanation: Change the index of PCO2s into single index format
End of explanation
gas_list = {}
gas_list['DG005_1'] = ['2015-10-13 15:30:00', '2015-10-13 17:38:00', '2015-10-13 21:33:00', '2015-10-13 23:05:00',
'2015-10-14 01:29:00', '2015-10-14 03:56:00', '2015-10-14 07:02:00', '2015-10-14 11:12:00', '2015-10-14 13:10:00',
'2015-10-14 14:07:00', '2015-10-14 14:31:00', '2015-10-14 16:32:00', '2015-10-14 19:08:00', '2015-10-14 22:42:00',
'2015-10-15 00:01:00', '2015-10-15 02:44:00', '2015-10-15 03:57:00', '2015-10-15 05:21:00', '2015-10-15 07:45:00',
'2015-10-15 11:56:00' ]
gas_list['DG005_2'] = ['2015-10-24 12:33:00', '2015-10-24 13:47:00', ]
gas_list['DG005_3'] = ['2015-11-04 04:09:00', '2015-11-04 07:24:00', ]
gas_list['DG006_1'] = [ '2015-10-15 17:45:00', '2015-10-15 20:12:00', '2015-10-15 21:12:00', '2015-10-15 22:30:00',
'2015-10-16 01:13:00', '2015-10-16 03:28:00', '2015-10-16 05:07:00', '2015-10-16 07:17:00',
'2015-10-16 10:59:00', '2015-10-16 15:34:00']
gas_list['DG009'] = [ '2015-10-26 17:54:00', '2015-10-26 20:08:00', '2015-10-26 21:33:00', '2015-10-27 01:08:00',
'2015-10-27 06:34:00', '2015-10-27 11:56:00', '2015-10-27 13:47:00', '2015-10-27 21:06:00',
'2015-10-28 02:49:00', '2015-10-28 07:04:00', '2015-10-28 09:44:00']
gas_list['DG016'] = [ '2015-12-07 14:35:00', '2015-12-07 16:51:00']
gas_list['DG017'] = ['2015-12-08 12:58:00', '2015-12-08 16:44:00', '2015-12-08 19:33:00',
'2015-12-08 22:51:00', '2015-12-08 23:27:00', '2015-12-08 23:34:00',
'2015-12-09 02:06:00', '2015-12-09 03:48:00', '2015-12-09 05:41:00',
'2015-12-09 07:49:00', '2015-12-09 07:56:00', '2015-12-09 09:12:00',
'2015-12-09 09:19:00', '2015-12-09 12:27:00', '2015-12-09 16:37:00',
'2015-12-09 17:39:00', '2015-12-09 20:50:00', '2015-12-10 01:04:00',
'2015-12-10 03:35:00', '2015-12-10 03:37:00', '2015-12-10 04:33:00',
'2015-12-10 10:41:00', '2015-12-10 10:48:00', '2015-12-10 12:29:00',
'2015-12-10 14:20:00', '2015-12-10 17:02:00', '2015-12-10 22:02:00',
'2015-12-11 03:04:00', '2015-12-11 03:12:00', '2015-12-11 06:16:00',
'2015-12-11 06:25:00', '2015-12-11 06:59:00', '2015-12-11 10:21:00',
'2015-12-11 14:16:00']
gas_list['DG018_1'] = ['2015-12-13 01:27:00',
'2015-12-13 03:43:00', '2015-12-13 05:20:00', '2015-12-13 07:05:00',
'2015-12-13 08:46:00', '2015-12-13 10:55:00', '2015-12-13 11:40:00',
'2015-12-13 12:50:00', '2015-12-13 14:50:00', '2015-12-13 16:50:00',
'2015-12-13 19:40:00', '2015-12-13 22:22:00', '2015-12-14 00:19:00',
'2015-12-14 01:33:00', '2015-12-14 04:37:00', '2015-12-14 06:55:00',
'2015-12-14 09:59:00', '2015-12-14 11:23:00', '2015-12-14 13:07:00']
gas_list['DG020'] = [ '2015-12-29 11:52:00', '2015-12-29 15:36:00', '2015-12-29 19:13:00', '2015-12-29 22:23:00',
'2015-12-30 02:10:00', '2015-12-30 03:54:00', '2015-12-30 05:43:00', '2015-12-30 07:46:00',
'2015-12-30 09:06:00', '2015-12-30 13:07:00']
gas_list['DG022'] = ['2016-01-02 12:43:00', '2016-01-02 15:58:00', '2016-01-02 19:46:00',
'2016-01-02 21:41:00', '2016-01-02 22:43:00', '2016-01-03 00:01:00',
'2016-01-03 01:15:00', '2016-01-03 02:14:00', '2016-01-03 03:05:00',
'2016-01-03 03:59:00', '2016-01-03 05:34:00', '2016-01-03 07:37:00',
'2016-01-03 13:28:00', '2016-01-03 18:54:00', '2016-01-03 22:44:00',
'2016-01-05 02:50:00', '2016-01-05 07:09:00', '2016-01-05 14:08:00',
'2016-01-05 23:31:00']
gas_list['DG025'] = [ '2016-01-21 23:46:00',
'2016-01-22 03:31:00', '2016-01-22 06:27:00', '2016-01-22 09:06:00',
'2016-01-22 11:09:00', '2016-01-22 12:35:00', '2016-01-22 14:14:00',
'2016-01-22 18:03:00', '2016-01-22 22:12:00', '2016-01-23 01:24:00',
'2016-01-23 04:24:00', '2016-01-23 08:46:00', '2016-01-23 10:17:00',
'2016-01-23 14:47:00', '2016-01-23 19:22:00', '2016-01-23 21:41:00',
'2016-01-24 00:45:00', '2016-01-24 07:50:00', '2016-01-24 10:59:00',]
gas_list['DG027'] = ['2016-01-29 10:48:00', '2016-01-29 11:56:00', '2016-01-29 14:34:00']
gas_list['DG032_2'] = ['2016-03-22 13:22:00',
'2016-03-22 21:07:00', '2016-03-23 02:17:00', '2016-03-23 11:19:00',
'2016-03-23 18:46:00', '2016-03-23 21:33:00', '2016-03-24 00:15:00',
'2016-03-24 02:13:00', '2016-03-24 06:15:00', '2016-03-24 09:08:00',
'2016-03-24 13:20:00', '2016-03-24 18:19:00', '2016-03-25 01:41:00',
'2016-03-25 03:29:00', '2016-03-25 10:46:00', '2016-03-25 17:44:00',
'2016-03-26 02:54:00', '2016-03-26 03:34:00', '2016-03-26 04:15:00',
'2016-03-26 05:46:00', '2016-03-26 09:02:00']
gas_list['DG038_1'] = ['2016-05-06 10:34:00', '2016-05-06 14:25:00',
'2016-05-06 17:58:00', '2016-05-06 19:34:00', '2016-05-06 21:22:00',
'2016-05-06 23:36:00', '2016-05-07 03:11:00', '2016-05-07 05:45:00',
'2016-05-07 10:13:00', '2016-05-07 12:45:00', '2016-05-07 16:33:00',
'2016-05-07 19:01:00', '2016-05-08 00:19:00', '2016-05-08 03:54:00',
'2016-05-08 09:13:00', '2016-05-08 13:41:00', '2016-05-08 19:30:00',
'2016-05-08 22:35:00', '2016-05-09 03:53:00', '2016-05-09 10:19:00',
'2016-05-09 11:59:00', '2016-05-09 18:45:00', '2016-05-09 21:53:00',
'2016-05-10 03:06:00', '2016-05-10 07:38:00', '2016-05-10 12:08:00',
'2016-05-10 13:51:00', '2016-05-10 13:59:00', '2016-05-10 17:35:00',
'2016-05-10 23:06:00', '2016-05-11 05:13:00', '2016-05-11 10:03:00',]
gas_list['DG038_2'] =['2016-05-17 18:01:00',
'2016-05-18 01:14:00', '2016-05-18 07:04:00', '2016-05-18 11:49:00',
'2016-05-18 17:54:00', '2016-05-19 03:08:00', '2016-05-19 12:57:00',
'2016-05-19 17:57:00', '2016-05-20 03:34:00', '2016-05-20 09:09:00',
'2016-05-20 17:20:00', '2016-05-21 00:46:00', '2016-05-21 01:57:00',
'2016-05-21 05:34:00', '2016-05-21 09:35:00', '2016-05-21 14:07:00',
'2016-05-21 18:18:00', '2016-05-21 21:35:00', '2016-05-22 02:25:00',
'2016-05-22 09:43:00']
gas_list['DG040_1'] = ['2016-06-09 16:24:00',
'2016-06-09 16:57:00', '2016-06-09 19:15:00', '2016-06-09 21:51:00',
'2016-06-09 23:12:00', '2016-06-09 23:21:00', '2016-06-09 23:32:00',
'2016-06-10 01:17:00', '2016-06-10 03:28:00', '2016-06-10 07:36:00',
'2016-06-10 11:34:00', '2016-06-10 11:42:00', '2016-06-10 18:03:00',
'2016-06-10 19:27:00', '2016-06-10 19:41:00', '2016-06-10 22:04:00',
'2016-06-11 02:53:00', '2016-06-11 11:17:00', '2016-06-11 15:45:00',
'2016-06-11 16:47:00']
gas_list['DG040_2'] = ['2016-06-21 14:17:00', '2016-06-21 18:55:00',
'2016-06-21 23:24:00', '2016-06-22 07:03:00', '2016-06-22 09:55:00',
'2016-06-22 13:36:00', '2016-06-22 13:47:00', '2016-06-22 17:59:00',
'2016-06-22 22:16:00', '2016-06-23 00:36:00', '2016-06-23 05:34:00',
'2016-06-23 09:41:00', '2016-06-23 14:09:00', '2016-06-23 19:05:00',
'2016-06-23 23:00:00', '2016-06-24 04:57:00', '2016-06-24 09:25:00',
'2016-06-24 11:27:00', '2016-06-24 13:33:00', '2016-06-24 19:39:00',
'2016-06-24 22:51:00', '2016-06-24 01:17:00', '2016-06-24 07:24:00']
gas_list['DG046_1'] = ['2016-07-10 20:40:00',
'2016-07-10 21:45:00', '2016-07-10 23:12:00', '2016-07-11 00:26:00',
'2016-07-11 03:11:00', '2016-07-11 05:43:00', '2016-07-11 07:17:00',
'2016-07-11 09:04:00', '2016-07-11 16:58:00', '2016-07-11 18:18:00',
'2016-07-11 21:44:00', '2016-07-12 01:08:00', '2016-07-12 06:55:00',
'2016-07-12 10:02:00', '2016-07-12 15:09:00']
# Only keep those pCO2 data when there is HFOV recording
for recording in recordings:
start = gas_list[recording][0]
end = gas_list[recording][-1]
pCO2s[recording] = pCO2s[recording].ix[start:end]
Explanation: Create a dictionary containing the time of blood gases taken during HFOV recording
End of explanation
def DCO2_calculator(recording, column, time, interval, offset, ):
'''
in:
-recording: recording name (string)
-column: column to calculate the statistics on, e.g. DCO2 or DCO2_corr
-time: time of blood gas (string)
-interval: time in minutes (int)
-offset: time in minutes (int)
out:
tuple of 9 values:
- number of DCO2 measurements over the period
- mean
- st deviation
- median
- mad (mean absolute deviation)
- minimum
- 25th centile
- 75th centile
- maximum
    Calculates statistics about DCO2 values before a blood gas. The interval is 'interval' minutes long
and it ends 'offset' minutes before the blood gas
'''
#time is given in nanoseconds: 1 min = 60000000000 nanoseconds
end = pd.to_datetime(time) - pd.to_timedelta(offset * 60000000000)
start = end - pd.to_timedelta(interval * 60000000000)
# 1 sec (1000000000 nsec) needs to be taken away otherwise the period will be 901 seconds
# as it includes the last second
data = slow_measurements_hfov[recording][start : end - pd.to_timedelta(1000000000)]
stats = stats_calculator(data, column)
return stats
# Create a dictionary of DCO2 calculations
DCO2_stats_gases = {}
DCO2_stats_gases[interval, offset] = {}
columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad', 'min', '25pc', '75pc', 'max', ]
for recording in recordings:
lst = []
for gas in gas_list[recording]:
lst.append(DCO2_calculator(recording,'DCO2', gas, interval, offset))
stats = DataFrame(lst, columns = columns, index = gas_list[recording])
DCO2_stats_gases[interval, offset][recording] = stats
# add pCO2 data
DCO2_stats_gases[interval, offset][recording] = \
DCO2_stats_gases[interval, offset][recording].join(pCO2s[recording])
# Dropping the rows from the DataFrame that do not have DCO2 or pCO2 values
# Some rows do not have DCO2 values because the baby received conventional ventilation
# during this period (flanked by HFOV periods)
# Some rows do not have pCO2 data because the blood gas was insufficient
DCO2_stats_gases[interval, offset][recording].dropna(axis = 0, how = 'any', subset = ['mean', 'pCO2, POC']
, inplace = True)
writer = pd.ExcelWriter('%s/%s_%d_%d.xlsx' % (dir_write, 'DCO2_stats_gases', interval, offset) )
for recording in recordings:
DCO2_stats_gases[interval, offset][recording].to_excel(writer, recording)
writer.save()
# Create a dictionary of weight-squared corrected DCO2 calculations
DCO2_corr_stats_gases = {}
DCO2_corr_stats_gases[interval, offset] = {}
columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad', 'min', '25pc', '75pc', 'max', ]
for recording in recordings:
lst = []
for gas in gas_list[recording]:
lst.append(DCO2_calculator(recording,'DCO2_corr', gas, interval, offset))
stats = DataFrame(lst, columns = columns, index = gas_list[recording])
DCO2_corr_stats_gases[interval, offset][recording] = stats
# add pCO2 data
DCO2_corr_stats_gases[interval, offset][recording] = \
DCO2_corr_stats_gases[interval, offset][recording].join(pCO2s[recording])
# Dropping the rows from the DataFrame that do not have DCO2 or pCO2 values
# Some rows do not have DCO2 values because the baby received conventional ventilation
# during this period (flanked by HFOV periods)
# Some rows do not have pCO2 data because the blood gas was insufficient
DCO2_corr_stats_gases[interval, offset][recording].dropna(axis = 0, how = 'any', subset = ['mean', 'pCO2, POC']
, inplace = True)
writer = pd.ExcelWriter('%s/%s_%d_%d.xlsx' % (dir_write, 'DCO2_corr_stats_gases', interval, offset) )
for recording in recordings:
DCO2_corr_stats_gases[interval, offset][recording].to_excel(writer, recording)
writer.save()
Explanation: Calculate DCO2s and weight-corrected DCO2s for 10 minute periods before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries and write it to multisheet Excel files.
End of explanation
DCO2_stats_gases_all = {}
lst = []
for recording in recordings:
DCO2_stats_gases[interval, offset][recording]['recording'] = recording
lst.append(DCO2_stats_gases[interval, offset][recording])
DCO2_stats_gases_all[interval, offset] = pd.concat(lst)
DCO2_stats_gases_all[interval, offset];
Explanation: Combine all DCO2 and DCO2_corr data into single dataframes (one for each) and write them to excel files
standard DCO2
End of explanation
DCO2_corr_stats_gases_all = {}
lst = []
for recording in recordings:
DCO2_corr_stats_gases[interval, offset][recording]['recording'] = recording
lst.append(DCO2_corr_stats_gases[interval, offset][recording])
DCO2_corr_stats_gases_all[interval, offset] = pd.concat(lst)
DCO2_corr_stats_gases_all[interval, offset];
Explanation: weight_corrected DCO2
End of explanation
writer = pd.ExcelWriter('%s/%s_%d_%d.xlsx' % (dir_write, 'DCO2_stats_all_gases', interval, offset) )
DCO2_stats_gases_all[interval, offset].to_excel(writer, 'DCO2')
writer.save()
writer = pd.ExcelWriter('%s/%s_%d_%d.xlsx' % (dir_write, 'DCO2_corr_stats_all_gases', interval, offset) )
DCO2_corr_stats_gases_all[interval, offset].to_excel(writer, 'DCO2_corr')
writer.save()
DCO2_stats_gases_all[interval, offset].to_pickle('%s/%s_%d_%d.p'
% (dir_write, 'DCO2_stats_all_gases', interval, offset))
Explanation: Write cumulative data to excel files and binary pickle files
End of explanation
# Create a dictionary of DCO2 calculations
VThf_stats_gases = {}
VThf_stats_gases[interval, offset] = {}
columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad', 'min', '25pc', '75pc', 'max', ]
for recording in recordings:
lst = []
for gas in gas_list[recording]:
lst.append(DCO2_calculator(recording,'VThf_kg', gas, interval, offset))
stats = DataFrame(lst, columns = columns, index = gas_list[recording])
VThf_stats_gases[interval, offset][recording] = stats
# add pCO2 data
VThf_stats_gases[interval, offset][recording] = \
VThf_stats_gases[interval, offset][recording].join(pCO2s[recording])
# Dropping the rows from the DataFrame that do not have DCO2 or pCO2 values
# Some rows do not have DCO2 values because the baby received conventional ventilation
# during this period (flanked by HFOV periods)
# Some rows do not have pCO2 data because the blood gas was insufficient
VThf_stats_gases[interval, offset][recording].dropna(axis = 0, how = 'any', subset = ['mean', 'pCO2, POC']
, inplace = True)
Explanation: Calculate weight-corrected high frequency tidal volumes (VThfs) before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries
End of explanation
VThf_stats_gases_all = {}
lst = []
for recording in recordings:
VThf_stats_gases[interval, offset][recording]['recording'] = recording
lst.append(VThf_stats_gases[interval, offset][recording])
VThf_stats_gases_all[interval, offset] = pd.concat(lst)
Explanation: Combine all VThfs data into single dataframe
End of explanation
# Create a dictionary of squared weight-corrected VThf calculations
VThf_sq_stats_gases = {}
VThf_sq_stats_gases[interval, offset] = {}
columns = ['number_of_seconds', 'mean', 'stdev', 'median', 'mad', 'min', '25pc', '75pc', 'max', ]
for recording in recordings:
lst = []
for gas in gas_list[recording]:
lst.append(DCO2_calculator(recording,'VThf_kg_2', gas, interval, offset))
stats = DataFrame(lst, columns = columns, index = gas_list[recording])
VThf_sq_stats_gases[interval, offset][recording] = stats
# add pCO2 data
VThf_sq_stats_gases[interval, offset][recording] = \
VThf_sq_stats_gases[interval, offset][recording].join(pCO2s[recording])
# Dropping the rows from the DataFrame that do not have DCO2 or pCO2 values
# Some rows do not have DCO2 values because the baby received conventional ventilation
# during this period (flanked by HFOV periods)
# Some rows do not have pCO2 data because the blood gas was insufficient
VThf_sq_stats_gases[interval, offset][recording].dropna(axis = 0, how = 'any', subset = ['mean', 'pCO2, POC']
, inplace = True)
Explanation: Calculate weight-square-corrected VThfs before the blood gases, combine them with the pCO2 readings, collect them in a dictionary of dictionaries
End of explanation
VThf_sq_stats_gases_all = {}
lst = []
for recording in recordings:
VThf_sq_stats_gases[interval, offset][recording]['recording'] = recording
lst.append(VThf_sq_stats_gases[interval, offset][recording])
VThf_sq_stats_gases_all[interval, offset] = pd.concat(lst)
Explanation: Combine all VThf-squared data into single dataframe
End of explanation
def parameter_calculator(recording, parameter, time, interval, offset, ):
'''
in:
-recording: recording name (string)
-parameter: parameter to calculate the mean on, e.g. DCO2 or DCO2_corr
-time: time of blood gas (string)
-interval: time in minutes (int)
-offset: time in minutes (int)
    out:
    - mean value of 'parameter' over the interval (float)
    Calculates the mean value of the parameter before a blood gas.
    The interval is 'interval' minutes long and it ends 'offset' minutes before the blood gas
'''
# time is given in nanoseconds: 1 min = 60000000000 nanoseconds
end = pd.to_datetime(time) - pd.to_timedelta(offset * 60000000000)
start = end - pd.to_timedelta(interval * 60000000000)
# 1 sec (1000000000 nsec) needs to be taken away otherwise the period will be 901 seconds
# as it includes the last second
data = slow_measurements_hfov[recording][start : end - pd.to_timedelta(1000000000)]
return data[parameter].mean()
# Create a dictionary of Vthf, DCO2 calculations together with the leak data
DCO2_stats_gases_leak = {}
DCO2_stats_gases_leak[interval, offset] = {}
for recording in recordings:
columns = ['DCO2', 'DCO2_corr', '5001|% leak [%]', '5001|MVleak [L/min]', '5001|MVi [L/min]',
'5001|MVe [L/min]']
par_dict = {}
for gas in gas_list[recording]:
lst = []
lst.append(parameter_calculator(recording,'DCO2', gas, interval, offset))
lst.append(parameter_calculator(recording,'DCO2_corr', gas, interval, offset))
lst.append(parameter_calculator(recording,'5001|% leak [%]', gas, interval, offset))
lst.append(parameter_calculator(recording,'5001|MVleak [L/min]', gas, interval, offset))
lst.append(parameter_calculator(recording,'5001|MVi [L/min]', gas, interval, offset))
lst.append(parameter_calculator(recording,'5001|MVe [L/min]', gas, interval, offset))
par_dict[gas] = lst
DCO2_stats_gases_leak[interval, offset][recording] = \
DataFrame(par_dict).T
DCO2_stats_gases_leak[interval, offset][recording].columns = columns
# add pCO2 data
DCO2_stats_gases_leak[interval, offset][recording] = \
DCO2_stats_gases_leak[interval, offset][recording].join(pCO2s[recording])
# Dropping the rows from the DataFrame that do not have pCO2 values
# Some rows do not have pCO2 data because the blood gas was insufficient
DCO2_stats_gases_leak[interval, offset][recording].dropna(axis = 0, how = 'any', subset = ['pCO2, POC']
, inplace = True)
Explanation: Create DataFrames with leak data together with the mean DCO2, DCO2_corr and with PCO2 data
End of explanation
DCO2_stats_gases_leak_all = {}
lst = []
for recording in recordings:
DCO2_stats_gases_leak[interval, offset][recording]['recording'] = recording
lst.append(DCO2_stats_gases_leak[interval, offset][recording])
DCO2_stats_gases_leak_all[interval, offset] = pd.concat(lst)
DCO2_stats_gases_leak_all[interval, offset].columns = \
['DCO2', 'DCO2_corr', 'leak%', 'MV_leak', 'MVi', 'MVe', 'pCO2', 'specimen', 'recording']
# Mean leak more that 5% before the blood gas
len(DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] > 5]);
# Mean leak more that 10% before the blood gas
len(DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] > 10]);
# Mean leak more that 25% before the blood gas
len(DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] > 25]);
# Mean leak more that 50% before the blood gas
len(DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] > 50]);
Explanation: Combine all leak data into single dataframes (one for each) and write them to excel files
End of explanation
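A sketch of writing the combined leak table out, following the same ExcelWriter pattern used for the other cumulative tables above; the file and sheet names are illustrative.
writer = pd.ExcelWriter('%s/%s_%d_%d.xlsx' % (dir_write, 'DCO2_stats_gases_leak_all', interval, offset))
DCO2_stats_gases_leak_all[interval, offset].to_excel(writer, 'leak')
writer.save()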
def correl(x, y):
'''
input: two numeric arrays of the same size
returns: a tuple of
1. Pearson's correlation coefficient: r
    2. low and high 95% confidence intervals of r (two values)
3. Coefficient of determination: r^2
4. p-value of correlation
'''
assert len(x) == len(y)
r, p = pearsonr(x, y)
f = 0.5*np.log((1+r)/(1-r))
se = 1/np.sqrt(len(x)-3)
ucl = f + 1.96 * se
lcl = f - 1.96 * se
lcl = (np.exp(2*lcl) - 1) / (np.exp(2*lcl) + 1)
ucl = (np.exp(2*ucl) - 1) / (np.exp(2*ucl) + 1)
return r , lcl, ucl , r*r, p
Explanation: Define functions to visualise data
Define function to calculate correlation
End of explanation
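A quick sanity check of correl() on synthetic data (illustrative only): strongly correlated inputs should give r close to 1 with a small p-value.
check_x = np.arange(1, 21, dtype=float)
check_y = 2 * check_x + np.random.normal(0, 1, size=len(check_x))
correl(check_x, check_y)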
def visualise_DCO2_corr(rec, i, xlimoffset = 20, ylimoffset = 2, textboxoffset_x = 0.55, textboxoffset_y = 0.9):
'''
input:
- rec = list of recordings (list)
- xlimoffset = how much longer is the X-axis than the highest X value (int)
- ylimoffset = how much longer is the y-axis than the highest y value (int)
- textboxoffset_x = vertical position of the textbox compared to X-lim, fraction (float)
- textboxoffset_y = horizontal position of the textbox compared to y-lim, fraction (float)
    Draws a pCO2 vs DCO2_corr scatter plot for the recording currently selected from 'rec'
    (the global 'recording' set by the calling loop).
    A regression line is also drawn.
    It also calculates Pearson's correlation coefficient with confidence intervals and p-value.
Puts these data on the graph.
'''
total = DCO2_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
x = sample['mean']
y = sample['pCO2, POC']
if len(x) == 0:
return
ax = fig.add_subplot(len(rec),1,i+1);
plt.scatter(x , y, color = 'black', marker = markers[0], s = 60)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$)', fontsize = 14)
plt.title(r'%s: Weight-corrected DCO$ \rm _2$' % recording , fontsize = 14)
xlim = ax.get_xlim()[1]
ylim = ax.get_ylim()[1]
plt.xlim([0, xlim + xlimoffset])
plt.ylim([0, ylim + ylimoffset])
# Polynomial Coefficients
coeffs = np.polyfit(x, y, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(x,l(x),'r--')
    # Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(x,y)
p = round(p, 4)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np=%.4f' % (result[0], result[1], r, lcl, ucl, p)
plt.text(xlim * textboxoffset_x, ylim * textboxoffset_y, text, color = 'black', style='normal', fontsize=15,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
text2 = 'y=%.4fx+(%.4f) r=%.4f (%.4f , %.4f) p=%.4f' % (result[0], result[1], r, lcl, ucl, p)
print(text2)
fig.savefig('%s/%s_DCO2_corr.pdf' % (dir_write, recording), dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'pdf',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
def visualise_DCO2_corr_art(rec, i, xlimoffset = 20, ylimoffset = 2, textboxoffset_x = 0.55, textboxoffset_y = 0.9):
'''
input:
- rec = list of recordings (list)
- xlimoffset = how much longer is the X-axis than the highest X value (int)
- ylimoffset = how much longer is the y-axis than the highest y value (int)
- textboxoffset_x = vertical position of the textbox compared to X-lim, fraction (float)
- textboxoffset_y = horizontal position of the textbox compared to y-lim, fraction (float)
    Draws a pCO2 vs DCO2_corr scatter plot for the recording currently selected from 'rec'
    (the global 'recording' set by the calling loop). THIS FUNCTION ONLY CONSIDERS ARTERIAL BLOOD GASES.
    A regression line is also drawn.
    It also calculates Pearson's correlation coefficient with confidence intervals and p-value.
Puts these data on the graph.
'''
total = DCO2_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
sample_art = sample.ix[sample['Blood specimen type, POC'] == 'Arteri...']
x = sample_art['mean']
y = sample_art['pCO2, POC']
if len(x) == 0:
return
ax = fig.add_subplot(len(rec),1,i+1);
plt.scatter(x , y, color = 'black', marker = markers[0], s = 60)
plt.ylabel("pCO2 (kPa)", fontsize = 16)
plt.xlabel("DCO2 (ml^2/kg^2.sec)", fontsize = 16)
plt.title("%s: Weight-corrected DCO2 - arterial pCO2" % recording , fontsize = 18)
xlim = ax.get_xlim()[1]
ylim = ax.get_ylim()[1]
plt.xlim([0, xlim + xlimoffset])
plt.ylim([0, ylim + ylimoffset])
# Polynomial Coefficients
coeffs = np.polyfit(x, y, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(x,l(x),'r--')
    # Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(x,y)
p = round(p, 3)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np=%.3f' % (result[0], result[1], r, lcl, ucl, p)
plt.text(xlim * textboxoffset_x, ylim * textboxoffset_y, text, color = 'black', style='normal', fontsize=15,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/%s_DCO2_corr_art.pdf' % (dir_write, recording), dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'pdf',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
Explanation: Define functions to visualize individual recordings and to save them to files
End of explanation
# Select only recordings where at least 10 blood gases are available
select_recs = [key for key in pCO2s if len(pCO2s[key]) >= 10]
select_recs = sorted(select_recs)
select_recs;
recordings = select_recs[0:1]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 6, textboxoffset_x = 0.50, textboxoffset_y = 1.1)
plt.close()
recordings = select_recs[1:2]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 4, textboxoffset_x = 0.66, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[2:3]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.55, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[3:4]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.5, textboxoffset_y = 1.1)
plt.close()
recordings = select_recs[4:5]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.55, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[5:6]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.5, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[6:7]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.45, textboxoffset_y = 1)
plt.close()
recordings = select_recs[7:8]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 1.2, textboxoffset_y = 1.3)
plt.close()
recordings = select_recs[8:9]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.55, textboxoffset_y = 1.1)
plt.close()
recordings = select_recs[9:10]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.55, textboxoffset_y = 1.1)
plt.close()
recordings = select_recs[10:11]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.6, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[11:12]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.6, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[12:13]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.75, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs[13:]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr(recordings, i, xlimoffset = 20, ylimoffset = 5, textboxoffset_x = 0.55, textboxoffset_y = 1.1)
plt.close()
Explanation: Visualise data from individual recordings where at least ten data points are available
All blood gases
These graphs are shown in Supplementary Figure 3 of the paper.
End of explanation
# Select only recordings where at least 10 ARTERIAL blood gases are available
select_recs_art = [key for key in pCO2s if
len(pCO2s[key].ix[pCO2s[key]['Blood specimen type, POC'] == 'Arteri...']) >= 10]
select_recs_art = sorted(select_recs_art)
select_recs_art;
recordings = select_recs_art[0:1]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 0.65, textboxoffset_y = 1.3)
plt.close()
recordings = select_recs_art[1:2]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 0.5, textboxoffset_y = 1.2)
plt.close()
recordings = select_recs_art[2:3]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 0.45, textboxoffset_y = 1)
plt.close()
recordings = select_recs_art[3:4]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 1.3, textboxoffset_y = 1.25)
plt.close()
recordings = select_recs_art[4:5]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 0.6, textboxoffset_y = 1.3)
plt.close()
recordings = select_recs_art[5:6]
fig = plt.figure()
fig.set_size_inches(8,6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
visualise_DCO2_corr_art(recordings, i, xlimoffset = 20, ylimoffset = 5,
textboxoffset_x = 0.6, textboxoffset_y = 1.2)
plt.close()
Explanation: Arterial gases only
End of explanation
val = range(0, 180, 5)
true_pos = []
false_pos = []
true_neg = []
false_neg = []
a = DCO2_stats_gases_leak_all[10,2]
for i in val:
pos = a[a['DCO2_corr'] >= i]
neg = a[a['DCO2_corr'] < i]
tp = len(pos[pos['pCO2'] <= 8])
fp = len(pos[pos['pCO2'] > 8])
tn = len(neg[neg['pCO2'] > 8])
fn = len(neg[neg['pCO2'] <= 8])
true_pos.append(tp)
false_pos.append(fp)
true_neg.append(tn)
false_neg.append(fn)
DCO2_test = DataFrame({'tp': true_pos, 'fp': false_pos, 'tn': true_neg, 'fn': false_neg}, index = val )
DCO2_test['sensitivity'] = round(DCO2_test.tp / (DCO2_test.tp + DCO2_test.fn), 3)
DCO2_test['specificity'] = round(DCO2_test.tn / (DCO2_test.tn + DCO2_test.fp), 3)
DCO2_test['pos_pred_value'] = round(DCO2_test.tp / (DCO2_test.tp + DCO2_test.fp), 3)
DCO2_test['neg_pred_value'] = round(DCO2_test.tn / (DCO2_test.tn + DCO2_test.fn), 3)
DCO2_test['Youden'] = round(DCO2_test.sensitivity + DCO2_test.specificity - 1, 3)
DCO2_test[DCO2_test.Youden == DCO2_test.Youden.max()];
DCO2_test['1-specificity'] = 1 - DCO2_test['specificity']
DCO2_test.sort_values('1-specificity', inplace = True)
DCO2_test;
DCO2_test['1-specificity'].iloc[30];
DCO2_test['sensitivity'].iloc[30];
AUC = 0 # Area under the ROC curve
for i in range(len(DCO2_test['1-specificity'])-2):
c_high = ((DCO2_test['1-specificity'].iloc[i+1] - DCO2_test['1-specificity'].iloc[i]) *
DCO2_test['sensitivity'].iloc[i+1])
c_low = ((DCO2_test['1-specificity'].iloc[i+1] - DCO2_test['1-specificity'].iloc[i]) *
DCO2_test['sensitivity'].iloc[i])
c = (c_high + c_low) / 2
AUC += c
Explanation: Create ROC curve
End of explanation
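As a cross-check of the trapezoidal AUC calculation above, the same area could be computed with scikit-learn, assuming that library is available (it is not otherwise used in this notebook):
from sklearn.metrics import roc_auc_score
# the 'positive' outcome in the table above is pCO2 <= 8 kPa, predicted by higher DCO2_corr
auc_check = roc_auc_score((a['pCO2'] <= 8).astype(int), a['DCO2_corr'])
print(AUC, auc_check)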
s = DCO2_corr_stats_gases_all[interval, offset]
s = s[s['mean'] > 60]
len(s);
s[s['pCO2, POC'] > 8];
s['recording'].unique();
len(s['recording'].unique());
Explanation: Select blood gases with DCO2_corr values over 60 mL2/kg2/sec
All gases
End of explanation
s = DCO2_corr_stats_gases_all[interval, offset]
s = s.ix[s['Blood specimen type, POC'] == 'Arteri...']
s = s[s['mean'] > 60]
len(s);
s[s['pCO2, POC'] > 8];
s['recording'].unique();
len(s['recording'].unique());
Explanation: Arterial blood gases
End of explanation
recordings = [ 'DG005_1', 'DG005_2', 'DG005_3', 'DG006_1', 'DG009', 'DG016', 'DG017', 'DG018_1',
'DG020', 'DG022', 'DG025', 'DG027', 'DG032_2', 'DG038_1', 'DG038_2', 'DG040_1', 'DG040_2', 'DG046_1']
Explanation: Generate figures for the manuscript
End of explanation
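The colors and markers sequences indexed in the figure cells that follow are defined earlier in the notebook; purely as an illustration (these are not the original definitions), comparable per-recording sequences could be built like this:
from itertools import cycle, islice
n = len(recordings)
example_colors = list(islice(cycle(plt.rcParams['axes.prop_cycle'].by_key()['color']), n))   # assumed palette
example_markers = list(islice(cycle(['o', 's', '^', 'v', 'D', '*', 'x', '+', '<', '>']), n))  # assumed marker set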
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
total = DCO2_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
x = sample['mean']
y = sample['pCO2, POC']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i] , marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$ (mL$\rm^2$/sec)', fontsize = 14)
plt.tick_params(axis='both', which='major', labelsize=14)
plt.xlim([0,600])
plt.title(r'Uncorrected DCO$ \rm _2$ - all gases' , fontsize = 14);
a = DCO2_stats_gases_all[interval, offset]['mean']
b = DCO2_stats_gases_all[interval, offset]['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)' % (result[0], result[1], r, lcl, ucl)
plt.text(275, b.max()-2, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'black', 'alpha':1, 'pad':10});
fig.savefig('%s/Figure_1_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Figures in the main article
Figure 1
End of explanation
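The correl helper called in the figure cells returns Pearson's r with its confidence limits, the coefficient of determination and the p value; its actual definition appears earlier in the notebook. A minimal sketch of such a function, assuming a Fisher z-transform 95% interval, would be:
from scipy import stats
def correl_sketch(a, b, alpha=0.05):
    # illustrative only: Pearson's r with an approximate Fisher z confidence interval
    r, p = stats.pearsonr(a, b)
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(a) - 3)
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return r, np.tanh(z - zcrit * se), np.tanh(z + zcrit * se), r ** 2, p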
x = DCO2_stats_gases_all[interval, offset]['mean']
y = DCO2_corr_stats_gases_all[interval, offset]['mean']
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
bp = plt.boxplot([x, y] )
plt.setp(bp['boxes'], color='black')
plt.setp(bp['whiskers'], color='black')
plt.setp(bp['medians'], color='red')
plt.setp(bp['fliers'], color='black', marker='+')
plt.ylabel("Units", fontsize = 14)
plt.xlabel("", fontsize = 14)
plt.xticks([1,2], [r'DCO$ \rm _2$ (mL$\rm^2$/sec)', r'DCO$ \rm _2$ (mL$\rm^2$/sec/kg$^2$)'], size = 14)
plt.tick_params(axis='both', which='major', labelsize=14)
plt.title(r'Boxplots of all DCO$ \rm _2$ data' , fontsize = 14);
# calculate p values
t_stats, p = ttest_rel(x, y)
ax1.text(ax1.get_xlim()[1]-1.2, 550, 'p<0.001', color = 'red', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'None', 'alpha':1, 'pad':10});
fig.savefig('%s/Figure_2_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Figure 2
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
for i, recording in enumerate(recordings):
total = DCO2_corr_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
x = sample['mean']
y = sample['pCO2, POC']
fig.add_subplot(1,1,1);
ax = plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.axvline(x= 50 , linewidth=1, color = 'black')
plt.axhline(y= 8, linewidth=1, color = 'black')
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$) - all gases', fontsize = 14)
plt.tick_params(axis='both', which='major', labelsize=14)
plt.xlim([0,200])
plt.title(r'Weight-corrected DCO$ \rm _2$ - all gases' , fontsize = 14)
a = DCO2_corr_stats_gases_all[interval, offset]['mean']
b = DCO2_corr_stats_gases_all[interval, offset]['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a), '--', color = 'red', linewidth = 2)
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np<0.001' % (result[0], result[1], r, lcl, ucl)
plt.text(a.max()*0.47, b.max() * 0.85, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'black', 'alpha':1, 'pad':10})
fig.savefig('%s/Figure_3_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Figure 3
End of explanation
# ROC curve
# A corrected DCO2 > 50 mL2/sec/kg2 predicts a pCO2 < 8 kPa with a sensitivity of 0.390 and a specificity of 0.825
# (Youden score = 0.215)
x = [0, 1]; y = [0, 1]
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.add_subplot(1,1,1)
text = r'DCO$ \rm _2$corr = 50'
text2 = 'AUC = %.3f' % AUC
plt.plot(1 - DCO2_test.specificity , DCO2_test.sensitivity, c = 'black')
ax = plt.plot(x, y, c = 'black', linestyle = '--', )
plt.vlines(1 - 0.824561, 1 - 0.824561, 0.390, color='k', linewidth = 3)
plt.text(0.05, 0.60, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'black', 'alpha':1, 'pad':10});
plt.text(0.6, 0.3, text2, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'black', 'alpha':1, 'pad':10});
plt.title('ROC curve', size = 14)
plt.ylabel('Sensitivity', fontsize = 14)
plt.xlabel('1 - Specificity', fontsize = 14)
plt.grid()
fig.savefig('%s/Figure_4.tiff' % dir_write , dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Figure 4
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
subset = DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] < 10]
for i, recording in enumerate(recordings):
total = subset
sample = total.ix[total['recording'] == recording]
x = sample['DCO2_corr']
y = sample['pCO2']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$)', fontsize = 14)
plt.xlim([0,200])
plt.title(r'Weight-corrected DCO$ \rm _2$ - leak < 10%' , fontsize = 14)
a = total['DCO2_corr']
b = total['pCO2']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'r--')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np<0.001' % (result[0], result[1], r, lcl, ucl)
plt.text(a.max()*0.52, b.max()*0.84, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/Figure_5_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Figure 5
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
fig.suptitle('E-Figure 1', size = 14)
for i, recording in enumerate(recordings):
total = DCO2_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
sample_art = sample.ix[sample['Blood specimen type, POC'] == 'Arteri...']
x = sample_art['mean']
y = sample_art['pCO2, POC']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$ (mL$\rm^2$/sec)', fontsize = 14)
plt.tick_params(axis='both', which='major', labelsize=14)
plt.xlim([0,600])
plt.title(r'Uncorrected DCO$ \rm _2$ - arterial gases only' , fontsize = 14);
sample2 = total.ix[total['Blood specimen type, POC'] == 'Arteri...']
a = sample2['mean']
b = sample2['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)' % (result[0], result[1], r, lcl, ucl)
plt.text(a.max()*0.48, b.max()-2, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'black', 'alpha':1, 'pad':10});
fig.savefig('%s/E_Figure_1_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Supplementary figures
Supplementary Figure 1
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.suptitle('E-Figure 2', size = 14)
for i, recording in enumerate(recordings):
total = DCO2_corr_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
sample_art = sample.ix[sample['Blood specimen type, POC'] == 'Arteri...']
x = sample_art['mean']
y = sample_art['pCO2, POC']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.axvline(x= 60 , linewidth=1, color = 'black')
plt.axhline(y= 8, linewidth=1, color = 'black')
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$)', fontsize = 14)
plt.tick_params(axis='both', which='major', labelsize=14)
plt.xlim([0,200])
plt.title(r'Weight-corrected DCO$ \rm _2$ - arterial gases only' , fontsize = 14)
sample2 = total.ix[total['Blood specimen type, POC'] == 'Arteri...']
a = sample2['mean']
b = sample2['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a), '--', color = 'red', linewidth = 2)
# Calculate pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np=%.3f' % (result[0], result[1], r, lcl, ucl, p)
plt.text(a.max()*0.48, b.max()*0.83, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10});
fig.savefig('%s/E_figure_2_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Supplementary Figure 2
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.suptitle('E-Figure 4', size = 14)
subset = DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] < 10]
for i, recording in enumerate(recordings):
total = subset
sample = total.ix[total['recording'] == recording]
sample_art = sample.ix[subset['specimen'] == 'Arteri...']
x = sample_art['DCO2_corr']
y = sample_art['pCO2']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$)', fontsize = 14)
plt.xlim([0,200])
plt.title(r'Weight-corrected DCO$ \rm _2$ - leak<10%' , fontsize = 14)
sample2 = total.ix[total['specimen'] == 'Arteri...']
a = sample2['DCO2_corr']
b = sample2['pCO2']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np=%.3f' % (result[0], result[1], r, lcl, ucl, p)
plt.text(a.max()*0.52, b.max()*0.84, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/E_figure_4_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: For Supplementary Figure 3 of the paper please see the functions creating the individual graphs above.
Supplementary Figure 4
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.suptitle('E-Figure 5', size = 14)
subset = DCO2_stats_gases_leak_all[10,2][DCO2_stats_gases_leak_all[10,2]['leak%'] >= 10]
for i, recording in enumerate(recordings):
total = subset
sample = total.ix[total['recording'] == recording]
sample_art = sample.ix[subset['specimen'] == 'Arteri...']
x = sample_art['DCO2_corr']
y = sample_art['pCO2']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'DCO$ \rm _2$_corr (mL$\rm^2$/sec/kg$^2$)', fontsize = 14)
plt.xlim([0,200])
plt.title(r'Weight-corrected DCO$ \rm _2$ - leak>10%' , fontsize = 14)
sample2 = total.ix[total['specimen'] == 'Arteri...']
a = sample2['DCO2_corr']
b = sample2['pCO2']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np=%.3f' % (result[0], result[1], r, lcl, ucl, p)
plt.text(a.max()*0.52, b.max()*0.84, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/E_figure_5_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Supplementary Figure 5
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.suptitle('E-Figure 6', size = 14)
for i, recording in enumerate(recordings):
total = VThf_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
x = sample['mean']
y = sample['pCO2, POC']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'VThf (mL/kg)', fontsize = 14)
plt.xlim([0,5])
# plt.title(r'Tidal volume' , fontsize = 14)
a = VThf_stats_gases_all[interval, offset]['mean']
b = VThf_stats_gases_all[interval, offset]['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np<0.001' % (result[0], result[1], r, lcl, ucl)
plt.text(a.max()*0.52, b.max()*0.84, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/E_figure_6_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Supplementary Figure 6
End of explanation
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
fig.suptitle('E-Figure 7', size = 14)
for i, recording in enumerate(recordings):
total = VThf_sq_stats_gases_all[interval, offset]
sample = total.ix[total['recording'] == recording]
x = sample['mean']
y = sample['pCO2, POC']
fig.add_subplot(1,1,1);
plt.scatter(x , y, color = colors[i], marker = markers[i], s = 25)
plt.ylabel(r'pCO$ \rm _2$ (kPa)', fontsize = 14)
plt.xlabel(r'VThf$^2$ (mL$^2$/kg$^2$)', fontsize = 14)
a = VThf_sq_stats_gases_all[interval, offset]['mean']
b = VThf_sq_stats_gases_all[interval, offset]['pCO2, POC']
# Polynomial Coefficients
coeffs = np.polyfit(a, b, deg = 1)
result = coeffs.tolist()
# Fit a trendline
l = np.poly1d(coeffs)
plt.plot(a,l(a),'--', color = 'red')
# Calculate Pearson's correlation coefficient with confidence intervals, coefficient of determination and p value
r , lcl, ucl , r2, p = correl(a,b)
# print the equation on the graph area
text = 'y=%.4fx+(%.4f)\nr=%.4f (%.4f , %.4f)\np<0.001' % (result[0], result[1], r, lcl, ucl)
plt.text(a.max()*0.43, b.max()*0.83, text, color = 'black', style='normal', fontsize=14,
bbox={'facecolor':'white', 'edgecolor':'red', 'alpha':1, 'pad':10})
fig.savefig('%s/E_Figure_7_color.tiff' % dir_write, dpi = 300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format = 'tiff',
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
plt.close()
Explanation: Supplementary Figure 7
End of explanation |
12,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Lorenz63 model implemented in FABM
The equations read
Step1: Import pyfabm - the python module that contains the Fortran based FABM
Step2: Configuration
The model configuration is done via the YAML formatted file.
Step3: Model increment
Step4: Time axis and model integration
Step5: Plot the results | Python Code:
import numpy
import scipy.integrate
Explanation: The Lorenz63 model implemented in FABM
The equations read:
$ \frac{dx}{dt} = \sigma ( y - x )$
$ \frac{dy}{dt} = x ( \rho - z ) - y$
$ \frac{dz}{dt} = x y - \beta z$
For further information see
Import standard python packages and pyfabm
End of explanation
import pyfabm
#pyfabm.get_version()
Explanation: Import pyfabm - the python module that contains the Fortran based FABM
End of explanation
yaml_file = 'fabm-bb-lorenz63.yaml'
model = pyfabm.Model(yaml_file)
model.findDependency('bottom_depth').value = 1.
model.checkReady(stop=True)
Explanation: Configuration
The model configuration is done via the YAML formatted file.
End of explanation
def dy(y,t0):
model.state[:] = y
return model.getRates()
Explanation: Model increment
End of explanation
t = numpy.arange(0.0, 40.0, 0.01)
y = scipy.integrate.odeint(dy,model.state,t)
Explanation: Time axis and model integration
End of explanation
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(y[:,0], y[:,1], y[:,2])
plt.show()
Explanation: Plot the results
End of explanation |
12,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Validation Playground
Watch a short tutorial video or read the written tutorial
This notebook assumes that you created at least one expectation suite in your project.
Here you will learn how to validate data in a SQL database against an expectation suite.
We'd love it if you reach out for help on the Great Expectations Slack Channel
Step1: 1. Get a DataContext
This represents your project that you just created using great_expectations init.
Step2: 2. Choose an Expectation Suite
List expectation suites that you created in your project
Step3: 3. Load a batch of data you want to validate
To learn more about get_batch, see this tutorial
Step5: 4. Validate the batch with Validation Operators
Validation Operators provide a convenient way to bundle the validation of
multiple expectation suites and the actions that should be taken after validation.
When deploying Great Expectations in a real data pipeline, you will typically discover these needs
Step6: 5. View the Validation Results in Data Docs
Let's now build and look at your Data Docs. These will now include an data quality report built from the ValidationResults you just created that helps you communicate about your data with both machines and humans.
Read more about Data Docs in the tutorial | Python Code:
import json
import great_expectations as ge
import great_expectations.jupyter_ux
from great_expectations.datasource.types import BatchKwargs
import datetime
Explanation: Validation Playground
Watch a short tutorial video or read the written tutorial
This notebook assumes that you created at least one expectation suite in your project.
Here you will learn how to validate data in a SQL database against an expectation suite.
We'd love it if you reach out for help on the Great Expectations Slack Channel
End of explanation
context = ge.data_context.DataContext()
Explanation: 1. Get a DataContext
This represents your project that you just created using great_expectations init.
End of explanation
context.list_expectation_suite_names()
expectation_suite_name = # TODO: set to a name from the list above
Explanation: 2. Choose an Expectation Suite
List expectation suites that you created in your project
End of explanation
# list datasources of the type SqlAlchemyDatasource in your project
[datasource['name'] for datasource in context.list_datasources() if datasource['class_name'] == 'SqlAlchemyDatasource']
datasource_name = # TODO: set to a datasource name from above
# If you would like to validate an entire table or view in your database's default schema:
batch_kwargs = {'table': "YOUR_TABLE", 'datasource': datasource_name}
# If you would like to validate an entire table or view from a non-default schema in your database:
batch_kwargs = {'table': "YOUR_TABLE", "schema": "YOUR_SCHEMA", 'datasource': datasource_name}
# If you would like to validate the result set of a query:
# batch_kwargs = {'query': 'SELECT YOUR_ROWS FROM YOUR_TABLE', 'datasource': datasource_name}
batch = context.get_batch(batch_kwargs, expectation_suite_name)
batch.head()
Explanation: 3. Load a batch of data you want to validate
To learn more about get_batch, see this tutorial
End of explanation
# This is an example of invoking a validation operator that is configured by default in the great_expectations.yml file
Create a run_id. The run_id must be of type RunIdentifier, with optional run_name and run_time instantiation
arguments (or a dictionary with these keys). The run_name can be any string (this could come from your pipeline
runner, e.g. Airflow run id). The run_time can be either a dateutil parsable string or a datetime object.
Note - any provided datetime will be assumed to be a UTC time. If no instantiation arguments are given, run_name will
be None and run_time will default to the current UTC datetime.
run_id = {
"run_name": "some_string_that_uniquely_identifies_this_run", # insert your own run_name here
"run_time": datetime.datetime.now(datetime.timezone.utc)
}
results = context.run_validation_operator(
"action_list_operator",
assets_to_validate=[batch],
run_id=run_id)
Explanation: 4. Validate the batch with Validation Operators
Validation Operators provide a convenient way to bundle the validation of
multiple expectation suites and the actions that should be taken after validation.
When deploying Great Expectations in a real data pipeline, you will typically discover these needs:
validating a group of batches that are logically related
validating a batch against several expectation suites such as using a tiered pattern like warning and failure
doing something with the validation results (e.g., saving them for a later review, sending notifications in case of failures, etc.).
Read more about Validation Operators in the tutorial
End of explanation
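A typical follow-up to the last bullet above is to branch on the overall outcome of the validation run; assuming the returned operator result exposes a boolean success flag (treat the exact attribute as an assumption about the installed Great Expectations version), that could look like:
# the attribute below is an assumption about the installed version's result object
if not results.success:
    print("Validation failed - inspect the Data Docs for details")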
context.open_data_docs()
Explanation: 5. View the Validation Results in Data Docs
Let's now build and look at your Data Docs. These will now include a data quality report built from the ValidationResults you just created that helps you communicate about your data with both machines and humans.
Read more about Data Docs in the tutorial
End of explanation |
12,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization with Matplotlib
Learning Objectives
Step1: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
Step2: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
Step3: Here is a list of the single character color strings
Step4: To change the plot's limits, use xlim and ylim
Step5: You can change the ticks along a given axis by using xticks, yticks and tick_params
Step6: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
Step7: Multiple series
Multiple calls to a plotting function will all target the current Axes
Step8: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure
Step9: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner
Step10: The subplots function also makes it easy to pass arguments to Figure and to share axes
Step11: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. Fro more information see | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization with Matplotlib
Learning Objectives: Learn how to make basic plots using Matplotlib's pylab API and how to use the Matplotlib documentation.
This notebook focuses only on the Matplotlib API, rather than the broader question of how you can use this API to make effective and beautiful visualizations.
Imports
The following imports should be used in all of your notebooks where Matplotlib is used:
End of explanation
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.xlabel('Time')
plt.ylabel('Signal')
plt.title('My Plot'); # suppress text output
Explanation: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
:-----------------|:----------------------------------------------------------
figure | Creates a new Figure
gca | Get the current Axes instance
savefig | Save the current Figure to a file
sca | Set the current Axes instance
subplot | Create a new subplot Axes for the current Figure
subplots | Create a new Figure and a grid of subplots Axes
Plotting Functions
Once you have created a Figure and one or more Axes objects, you can use the following function to put data onto that Axes.
Function | Description
:-----------------|:--------------------------------------------
bar | Make a bar plot
barh | Make a horizontal bar plot
boxplot | Make a box and whisker plot
contour | Plot contours
contourf | Plot filled contours
hist | Plot a histogram
hist2d | Make a 2D histogram plot
imshow | Display an image on the axes
matshow | Display an array as a matrix
pcolor | Create a pseudocolor plot of a 2-D array
pcolormesh | Plot a quadrilateral mesh
plot | Plot lines and/or markers
plot_date | Plot with data with dates
polar | Make a polar plot
scatter | Make a scatter plot of x vs y
Plot modifiers
You can then use the following functions to modify your visualization.
Function | Description
:-----------------|:---------------------------------------------------------------------
annotate | Create an annotation: a piece of text referring to a data point
box | Turn the Axes box on or off
clabel | Label a contour plot
colorbar | Add a colorbar to a plot
grid | Turn the Axes grids on or off
legend | Place a legend on the current Axes
loglog | Make a plot with log scaling on both the x and y axis
semilogx | Make a plot with log scaling on the x axis
semilogy | Make a plot with log scaling on the y axis
subplots_adjust | Tune the subplot layout
tick_params | Change the appearance of ticks and tick labels
ticklabel_format| Change the ScalarFormatter used by default for linear axes
tight_layout | Automatically adjust subplot parameters to give specified padding
text | Add text to the axes
title | Set a title of the current axes
xkcd | Turns on XKCD sketch-style drawing mode
xlabel | Set the x axis label of the current axis
xlim | Get or set the x limits of the current axes
xticks | Get or set the x-limits of the current tick locations and labels
ylabel | Set the y axis label of the current axis
ylim | Get or set the y-limits of the current axes
yticks | Get or set the y-limits of the current tick locations and labels
Basic plotting
For now, we will work with basic line plots (plt.plot) to show how the Matplotlib pylab plotting API works. In this case, we don't create a Figure so Matplotlib does that automatically.
End of explanation
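Before moving on, here is a quick illustration of the object-management functions listed above (figure, gca and savefig); the filename is arbitrary:
fig = plt.figure(figsize=(6, 4))   # create a new Figure explicitly
ax = plt.gca()                     # the current Axes of that Figure
ax.plot(t, np.cos(t))
plt.title('Explicit Figure and Axes')
fig.savefig('explicit_axes.png')   # write the current Figure to a file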
f = plt.figure(figsize=(9,6)) # 9" x 6", default is 8" x 5.5"
plt.plot(t, np.sin(t), 'r.');
plt.xlabel('x')
plt.ylabel('y')
Explanation: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
End of explanation
from matplotlib import lines
lines.lineStyles.keys()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
Explanation: Here is a list of the single character color strings:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
The following will show all of the line and marker styles:
End of explanation
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(-1.0, 11.0)
plt.ylim(-1.0, 1.0)
Explanation: To change the plot's limits, use xlim and ylim:
End of explanation
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(0.0, 10.0)
plt.ylim(-1.0, 1.0)
plt.xticks([0,5,10], ['zero','five','10'])
plt.tick_params(axis='y', direction='inout', length=10) #modifies parameters of actual tick marks
Explanation: You can change the ticks along a given axis by using xticks, yticks and tick_params:
End of explanation
plt.plot(np.random.rand(100), 'b-')
plt.grid(True)
plt.box(False)
Explanation: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
End of explanation
plt.plot(t, np.sin(t), label='sin(t)')
plt.plot(t, np.cos(t), label='cos(t)')
plt.xlabel('t')
plt.ylabel('Signal(t)')
plt.ylim(-1.5, 1.5)
plt.xlim(right=12.0)
plt.legend()
Explanation: Multiple series
Multiple calls to a plotting function will all target the current Axes:
End of explanation
plt.subplot(2,1,1) # 2 rows x 1 col, plot 1
plt.plot(t, np.exp(0.1*t))
plt.ylabel('Exponential')
plt.subplot(2,1,2) # 2 rows x 1 col, plot 2
plt.plot(t, t**2)
plt.ylabel('Quadratic')
plt.xlabel('x')
plt.tight_layout()
Explanation: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure:
End of explanation
f, ax = plt.subplots(2, 2)
for i in range(2):
    for j in range(2):
        plt.sca(ax[i,j])
        plt.plot(np.random.rand(20))
        plt.xlabel('x')
        plt.ylabel('y')
plt.tight_layout()
Explanation: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:
End of explanation
f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
if i==1:
plt.xlabel('x')
if j==0:
plt.ylabel('y')
plt.tight_layout()
Explanation: The subplots function also makes it easy to pass arguments to Figure and to share axes:
End of explanation
plt.plot(t, np.sin(t), marker='o', color='darkblue',
linestyle='--', alpha=0.3, markersize=10)
Explanation: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. For more information see:
Controlling line properties
Specifying colors
End of explanation |
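For reference, the same properties can also be changed after a line has been created by using plt.setp on the returned Line2D object:
line, = plt.plot(t, np.sin(t))
plt.setp(line, color='darkred', linewidth=2, linestyle=':')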
12,792 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I've read several posts about how to convert Pandas columns to float using pd.to_numeric as well as applymap(locale.atof). | Problem:
import pandas as pd
s = pd.Series(['2,144.78', '2,036.62', '1,916.60', '1,809.40', '1,711.97', '6,667.22', '5,373.59', '4,071.00', '3,050.20', '-0.06', '-1.88', '', '-0.13', '', '-0.14', '0.07', '0', '0'],
index=['2016-10-31', '2016-07-31', '2016-04-30', '2016-01-31', '2015-10-31', '2016-01-31', '2015-01-31', '2014-01-31', '2013-01-31', '2016-09-30', '2016-06-30', '2016-03-31', '2015-12-31', '2015-09-30', '2015-12-31', '2014-12-31', '2013-12-31', '2012-12-31'])
def g(s):
return pd.to_numeric(s.str.replace(',',''), errors='coerce')
result = g(s.copy()) |
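A quick check of the conversion on the example Series above: the thousands separators are stripped and, with errors='coerce', the empty strings become NaN:
print(result.dtype)        # float64
print(result.isna().sum()) # the empty strings in s are now NaN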
12,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by the minimum-norm variants implemented in MNE-Python
Step1: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
Step2: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step3: Next let's use the default noise normalization, dSPM
Step4: And sLORETA
Step5: And finally eLORETA
Step6: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
Step7: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step8: Next let's use the default noise normalization, dSPM
Step9: sLORETA
Step10: And finally eLORETA | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data (just MEG here for speed, though we could use MEG+EEG)
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Right Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# crop for speed in these examples
evoked.crop(0.05, 0.15)
Explanation: Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by the minimum-norm variants implemented in MNE-Python:
MNE, dSPM, sLORETA, and eLORETA.
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
Explanation: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
kwargs = dict(initial_time=0.08, hemi='lh', subjects_dir=subjects_dir,
size=(600, 600), clim=dict(kind='percent', lims=[90, 95, 99]),
smoothing_steps=7)
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: And sLORETA:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
del inv
Explanation: And finally eLORETA:
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
del fwd
Explanation: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: sLORETA:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True,
method_params=dict(eps=1e-4)) # larger eps just for speed
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
Explanation: And finally eLORETA:
End of explanation |
12,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h,n_x)*0.01
b1 = np.zeros((n_h,1))
W2 = np.random.randn(n_y,n_h)*0.01
b2 = np.zeros((n_y,1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
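A quick way to confirm that the returned shapes line up with the sizes passed in:
for key, value in parameters.items():
    print(key, value.shape)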
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1])*0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l],1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
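A tiny numeric check of the broadcasting rule described above (the values are arbitrary; only the shapes matter):
W_demo = np.random.randn(3, 3)
X_demo = np.random.randn(3, 3)
b_demo = np.random.randn(3, 1)
print((np.dot(W_demo, X_demo) + b_demo).shape)   # (3, 3): b_demo is broadcast across the columns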
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W,A)+b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
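# Quick shape check for formula (4) using the linear_forward() implemented
# above (illustrative sizes only): if A is (n_prev, m) and W is (n_curr, n_prev),
# then Z = W A + b is (n_curr, m).
A_check = np.random.randn(3, 2)
W_check = np.random.randn(1, 3)
b_check = np.random.randn(1, 1)
Z_check, _ = linear_forward(A_check, W_check, b_check)
print(Z_check.shape)   # (1, 2)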
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev,W,b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev,W,b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
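# The sigmoid() and relu() helpers are provided by the course utilities; the
# sketches below use hypothetical names (so they do not shadow the provided
# ones) and only illustrate the behavior the notebook relies on: each returns
# the activation A together with a cache holding Z for the backward pass.
def sigmoid_sketch(Z):
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu_sketch(Z):
    A = np.maximum(0, Z)
    return A, Z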
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], activation='relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], activation='sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -np.sum(np.dot(Y,np.log(AL).T)+np.dot((1-Y),np.log(1-AL).T))/m
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
End of explanation
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = np.dot(dZ,A_prev.T)/m
db = np.sum(dZ,axis=1,keepdims=True)/m
dA_prev = np.dot(W.T,dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
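# sigmoid_backward() and relu_backward() are likewise provided helpers; the
# hypothetical sketches below only illustrate formula (11), dZ = dA * g'(Z),
# where the cache holds Z from the forward pass.
def sigmoid_backward_sketch(dA, cache):
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)          # sigmoid'(Z) = s * (1 - s)

def relu_backward_sketch(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0                   # gradient is zero where the unit was inactive
    return dZ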
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
#dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dA_prev_temp, current_cache, "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - grads['dW'+str(l+1)]*learning_rate
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - grads['db'+str(l+1)]*learning_rate
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation |
12,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing Scenario Discovery in Python
The purpose of example is to demonstrate how one can do scenario discovery in python. I will demonstrate how we can perform both PRIM in an interactive way, as well as briefly show how to use CART, which is also available in the exploratory modeling workbench. There is ample literature on both CART and PRIM and their relative merits for use in scenario discovery. So I won't be discussing that here in any detail.
In order to demonstrate the use of the exploratory modeling workbench for scenario discovery, I am using a published example. I am using the data used in the original article by Ben Bryant and Rob Lempert where they first introduced 2010. Ben Bryant kindly made this data available and allowed me to share it. The data comes as a csv file. We can import the data easily using pandas. columns 2 up to and including 10 contain the experimental design, while the classification is presented in column 15
This example is a slightly updated version of a blog post on https
Step1: the exploratory modeling workbench comes with a seperate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running.
Step2: Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimium coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box.
Step3: Let's investigate this first box is some detail. A first thing to look at is the trade off between coverage and density. The box has a convenience function for this called show_tradeoff.
Step4: Since we are doing this analysis in a notebook, we can take advantage of the interactivity that the browser offers. A relatively recent addition to the python ecosystem is the library altair. Altair can be used to create interactive plots for use in a browser. Altair is an optional dependency for the workbench. If available, we can create the following visual.
Step5: Here we can interactively explore the boxes associated with each point in the density coverage trade-off. It also offers mouse overs for the various points on the trade off curve. Given the id of each point, we can also use the workbench to manually inpect the peeling trajectory. Following Bryant & Lempert, we inspect box 21.
Step6: If one where to do a detailed comparison with the results reported in the original article, one would see small numerical differences. These differences arise out of subtle differences in implementation. The most important difference is that the exploratory modeling workbench uses a custom objective function inside prim which is different from the one used in the scenario discovery toolkit. Other differences have to do with details about the hill climbing optimization that is used in prim, and in particular how ties are handled in selected the next step. The differences between the two implementations are only numerical, and don't affect the overarching conclusions drawn from the analysis.
Let's select this 21 box, and get a more detailed view of what the box looks like. Following Bryant et al., we can use scatter plots for this.
Step7: Because the last restriction is not significant, we can choose to drop this restriction from the box.
Step8: We have now found a first box that explains over 75% of the cases of interest. Let's see if we can find a second box that explains the remainder of the cases.
Step9: As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overal results from interactively fitting PRIM to the data. For this, we can use to convenience functions that transform the stats and boxes to pandas data frames.
Step10: CART
The way of interacting with CART is quite similar to how we setup the prim analysis. We import cart from the analysis package. We instantiate the algorithm, and next fit CART to the data. This is done via the build_tree method.
Step11: Now that we have trained CART on the data, we can investigate its results. Just like PRIM, we can use stats_to_dataframe and boxes_to_dataframe to get an overview.
Step12: Alternatively, we might want to look at the classification tree directly. For this, we can use the show_tree method. | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv("./data/bryant et al 2010 data.csv", index_col=False)
x = data.iloc[:, 2:11]
y = data.iloc[:, 15].values
Explanation: Performing Scenario Discovery in Python
The purpose of this example is to demonstrate how one can do scenario discovery in Python. I will demonstrate how we can perform PRIM in an interactive way, and also briefly show how to use CART, which is likewise available in the exploratory modeling workbench. There is ample literature on both CART and PRIM and their relative merits for use in scenario discovery, so I won't be discussing that here in any detail.
In order to demonstrate the use of the exploratory modeling workbench for scenario discovery, I am using a published example: the data used in the original article by Ben Bryant and Rob Lempert, where they first introduced scenario discovery in 2010. Ben Bryant kindly made this data available and allowed me to share it. The data comes as a csv file that we can import easily using pandas. Columns 2 up to and including 10 contain the experimental design, while the classification is presented in column 15.
This example is a slightly updated version of a blog post on https://waterprogramming.wordpress.com/2015/08/05/scenario-discovery-in-python/
End of explanation
from ema_workbench.analysis import prim
from ema_workbench.util import ema_logging
ema_logging.log_to_stderr(ema_logging.INFO);
Explanation: The exploratory modeling workbench comes with a separate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running.
End of explanation
prim_alg = prim.Prim(x, y, threshold=0.8, peel_alpha=0.1)
box1 = prim_alg.find_box()
Explanation: Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimium coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box.
End of explanation
box1.show_tradeoff()
plt.show()
Explanation: Let's investigate this first box in some detail. A first thing to look at is the trade-off between coverage and density. The box has a convenience function for this called show_tradeoff.
End of explanation
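# For reference, the two measures traded off along this curve are
#   coverage = cases of interest inside the box / all cases of interest
#   density  = cases of interest inside the box / all cases inside the box
# A rough illustration with a single hypothetical restriction (the workbench
# computes these for every box on the peeling trajectory):
in_box = x['Cellulosic cost'] < x['Cellulosic cost'].median()
interest = pd.Series(y, dtype=bool)
print((interest & in_box).sum() / interest.sum(),    # coverage
      (interest & in_box).sum() / in_box.sum())      # density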
box1.inspect_tradeoff()
Explanation: Since we are doing this analysis in a notebook, we can take advantage of the interactivity that the browser offers. A relatively recent addition to the python ecosystem is the library altair. Altair can be used to create interactive plots for use in a browser. Altair is an optional dependency for the workbench. If available, we can create the following visual.
End of explanation
box1.resample(21)
box1.inspect(21)
box1.inspect(21, style="graph")
plt.show()
Explanation: Here we can interactively explore the boxes associated with each point in the density–coverage trade-off. It also offers mouse-overs for the various points on the trade-off curve. Given the id of each point, we can also use the workbench to manually inspect the peeling trajectory. Following Bryant & Lempert, we inspect box 21.
End of explanation
box1.select(21)
fig = box1.show_pairs_scatter(21)
plt.show()
Explanation: If one were to do a detailed comparison with the results reported in the original article, one would see small numerical differences. These differences arise out of subtle differences in implementation. The most important difference is that the exploratory modeling workbench uses a custom objective function inside prim which is different from the one used in the scenario discovery toolkit. Other differences have to do with details about the hill climbing optimization that is used in prim, and in particular how ties are handled in selecting the next step. The differences between the two implementations are only numerical, and don't affect the overarching conclusions drawn from the analysis.
Let's select box 21, and get a more detailed view of what the box looks like. Following Bryant et al., we can use scatter plots for this.
End of explanation
box1.drop_restriction("Cellulosic cost")
box1.inspect(style="graph")
plt.show()
Explanation: Because the last restriction is not significant, we can choose to drop this restriction from the box.
End of explanation
box2 = prim_alg.find_box()
Explanation: We have now found a first box that explains over 75% of the cases of interest. Let's see if we can find a second box that explains the remainder of the cases.
End of explanation
prim_alg.stats_to_dataframe()
prim_alg.boxes_to_dataframe()
Explanation: As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overall results from interactively fitting PRIM to the data. For this, we can use two convenience functions that transform the stats and boxes to pandas data frames.
End of explanation
from ema_workbench.analysis import cart
cart_alg = cart.CART(x, y, 0.05)
cart_alg.build_tree()
Explanation: CART
The way of interacting with CART is quite similar to how we set up the prim analysis. We import cart from the analysis package. We instantiate the algorithm, and next fit CART to the data. This is done via the build_tree method.
End of explanation
cart_alg.stats_to_dataframe()
cart_alg.boxes_to_dataframe()
Explanation: Now that we have trained CART on the data, we can investigate its results. Just like PRIM, we can use stats_to_dataframe and boxes_to_dataframe to get an overview.
End of explanation
fig = cart_alg.show_tree()
fig.set_size_inches((18, 12))
plt.show()
Explanation: Alternatively, we might want to look at the classification tree directly. For this, we can use the show_tree method.
End of explanation |
12,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análisis de los datos obtenidos
Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 13 de Agosto del 2015
Los datos del experimento
Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
Step2: Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como segunda aproximación, vamos a modificar los incrementos en los que el diámetro se encuentra entre $1.80mm$ y $1.70 mm$, en ambos sentidos. (casos 3 a 6)
Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento
Step3: Filtrado de datos
Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.
Step4: Representación de X/Y
Step5: Analizamos datos del ratio
Step6: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
#Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
#Show the version used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the csv file with the sample data
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
#Store in a list the file columns we are going to work with
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
#Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Using IPython to analyze and display the data collected during production. An expert controller is implemented. The data analyzed are from 13 August 2015.
The experiment data:
* Start time: 10:30
* End time: 11:00
* Filament extruded: 447 cm
* $T: 150ºC$
* $V_{min}$ tractora (puller): $1.5 mm/s$
* $V_{max}$ tractora (puller): $3.4 mm/s$
* The speed increments in the expert-system rules differ between cases:
* In cases 3 and 5 the increment is kept at +2.
* In cases 4 and 6 the increment is reduced to -1.
A sketch of this kind of banded rule is given right after this cell.
End of explanation
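# Illustrative sketch of the kind of banded rule described above. The case
# numbering, band boundaries and increment values here are assumptions for
# illustration only -- not the controller actually used on the machine.
def incremento_velocidad(diametro):
    if diametro >= 1.80:        # clearly too thick -> larger positive increment
        return +2
    elif diametro >= 1.75:      # slightly thick (inside the 1.70-1.80 band)
        return +1
    elif diametro >= 1.70:      # slightly thin (inside the band)
        return -1
    else:                       # clearly too thin -> larger negative increment
        return -2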
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: We plot both diameters and the puller (tractora) speed in the same figure
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: With this second approach the data have been stabilized. We will now try to lower that percentage. As a further refinement, we will modify the increments used when the diameter lies between $1.80mm$ and $1.70 mm$, in both directions (cases 3 to 6).
Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
Samples where $d_x$ or $d_y$ falls below $0.9$ are assumed to be sensor errors, so we filter them out and keep only readings with both diameters $>= 0.9$.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: X/Y representation
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyze the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We count the number of times the quality limits are crossed.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
12,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
課題1
1900年から2200年までのうるう年を教えてくれるプログラムを書いてください。
ただし、うるう年は下記のルールで決定されるものとします。
西暦が4で割り切れる年は閏年である。
ただし、100で割り切れる年は閏年ではない。
ただし、400で割り切れる年は閏年である。
課題2
任意の西暦年を渡すと、うるう年か否かを判定してくれる関数を、def機能を使って実装してください。
※課題2を先に解いて、そのプログラムを課題1に用いる
Step1: 課題2への回答として1900年から2001年までのうるう年を探してみる。
Step2: 少し書き換えて課題1を解いてみる。
Step3: 課題3
お笑いアンケートで観客の生まれた年も聞いてみた結果、以下のデータが集められました。
名前 誕生年
田中花子 1980
鈴木一郎 2000
山田太郎 1989
佐藤恵子 1992
石井あや 1978
うるう年に生まれた観客の平均評価点と、うるう年以外に生まれた観客の平均評価点を求めるプログラムを書いてください。(小数点以下も出力してください。)
例えば、次のような形で聴衆のデータが与えられたとする。 | Python Code:
def leap_year(y):
if y < 0:
print("Please input positive number.")
return
if y%4 == 0:
if y%400 == 0:
print("%d is leap year."%y)
return
elif y%100 == 0:
print("%d is NOT leap year."%y)
return
print("%d is leap year."%y)
return
else:
print("%d is NOT leap year."%y)
return
Explanation: Assignment 1
Write a program that reports the leap years from 1900 to 2200.
Leap years are determined by the following rules:
A year divisible by 4 is a leap year.
However, a year divisible by 100 is not a leap year.
However, a year divisible by 400 is a leap year.
Assignment 2
Using def, implement a function that takes an arbitrary year and determines whether it is a leap year.
Note: Assignment 2 is solved first and that program is then used for Assignment 1.
End of explanation
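# For comparison, the standard library already provides this check, so the
# hand-written rules above can be cross-checked against calendar.isleap:
import calendar
print([year for year in range(1900, 2001) if calendar.isleap(year)][:5])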
for i in range(1900,2001):
leap_year(i)
Explanation: As an answer to Assignment 2, let's look for the leap years from 1900 to 2001.
End of explanation
def leap_year2(y):
if y < 0:
return 0
if y%4 == 0:
if y%400 == 0:
return 1
elif y%100 == 0:
return 0
return 1
else:
return 0
leap_years = [leap_year2(i) for i in range(1900,2201)]
print(sum(leap_years))
Explanation: With a small rewrite, let's solve Assignment 1.
End of explanation
audience = {"tanaka":(1980, 1), "suzuki":(2000, 3), "yamada":(1989, 2), "sato":(1992, 5), "ishii":(1978, 5)}
def ave_by_ly(a):
x = a.values()
num_ly = 0
point_ly = 0
num_not_ly = 0
point_not_ly = 0
for y, p in x:
if y < 0:
print("Please input positive number.")
return
        if y%4 == 0:
            if y%400 == 0:
                # divisible by 400 -> leap year
                num_ly += 1
                point_ly += p
            elif y%100 == 0:
                # divisible by 100 but not by 400 -> not a leap year
                num_not_ly += 1
                point_not_ly += p
            else:
                # divisible by 4 only -> leap year
                num_ly += 1
                point_ly += p
        else:
            num_not_ly += 1
            point_not_ly += p
print(y, p)
ave_ly = point_ly/num_ly
ave_not_ly = point_not_ly/num_not_ly
print("The average point of those who were born in leap year is {ave_ly}".format(**locals()))
print("The average point of those who were NOT born in leap year is {ave_not_ly}".format(**locals()))
return
ave_by_ly(audience)
Explanation: Assignment 3
In the comedy survey we also asked the audience members for their birth year, and the following data were collected.
Name Birth year
田中花子 1980
鈴木一郎 2000
山田太郎 1989
佐藤恵子 1992
石井あや 1978
Write a program that computes the average rating given by audience members born in a leap year and the average rating given by those not born in a leap year (print the decimal part as well).
For example, suppose the audience data are given in the following form.
End of explanation |
12,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../images/qiskit-heading.gif" alt="Note
Step1: First we set up an empty program for one qubit.
Step2: We don't want to do anything to the qubit, so we'll skip straight to reading it out.
Step3: Now we'll tell the local simulator to execute this entirely trivial program.
Step4: And then print out the result. Since qubits are initialized as 0, and we did nothing to our qubit before readout, we'll just get the result 0 many times.
Step5: Now let's try it on the least busy real device. This will have a few samples which output 1 due to noise, but most of the samples should be for an output of 0. | Python Code:
import qiskit
Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Doing nothing with Qiskit Terra
We are going to use Qiskit to do nothing.
End of explanation
qr = qiskit.QuantumRegister(1)
cr = qiskit.ClassicalRegister(1)
program = qiskit.QuantumCircuit(qr, cr)
Explanation: First we set up an empty program for one qubit.
End of explanation
program.measure(qr,cr)
Explanation: We don't want to do anything to the qubit, so we'll skip straight to reading it out.
End of explanation
job = qiskit.execute( program, qiskit.Aer.get_backend('qasm_simulator') )
Explanation: Now we'll tell the local simulator to execute this entirely trivial program.
End of explanation
print( job.result().get_counts() )
Explanation: And then print out the result. Since qubits are initialized as 0, and we did nothing to our qubit before readout, we'll just get the result 0 many times.
End of explanation
qiskit.IBMQ.load_accounts()
backend = qiskit.backends.ibmq.least_busy(qiskit.IBMQ.backends(simulator=False))
print("We'll use the least busy device:",backend.name())
job = qiskit.execute( program, backend )
print( job.result().get_counts() )
Explanation: Now let's try it on the least busy real device. This will have a few samples which output 1 due to noise, but most of the samples should be for an output of 0.
End of explanation |
12,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Get the data path
Download the dataset for this tutorial.
Step4: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https
Step5: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec.
Step6: Step 3. Customize the TensorFlow model.
Step7: Step 4. Evaluate the model.
Step8: Step 5. Export as a TensorFlow Lite model.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation.
Step9: You can also download the model using the left sidebar in Colab.
After executing the 5 steps above, you can further use the TensorFlow Lite model file and label file in on-device applications like in a text classification reference app.
The following sections walk through the example step by step to show more detail.
Choose a model_spec that Represents a Model for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and [BERT-Base]((https
Step10: Load Input Data Specific to an On-device ML App
The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark . It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes
Step11: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format
Step12: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.
Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data.
Step13: Examine the detailed model structure.
Step14: Evaluate the Customized Model
Evaluate the result of the model and get the loss and accuracy of the model.
Evaluate the loss and accuracy in the test data.
Step15: Export as a TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Save the text labels in a label file and vocabulary in a vocab file. The default TFLite filename is model.tflite, the default label filename is label.txt and the default vocab filename is vocab.
Step16: The TensorFlow Lite model file can be used in the text classification reference app by adding model.tflite to the assets directory. Do not forget to also change the filenames in the code.
You can evalute the tflite model with evaluate_tflite method.
Step17: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps
Step18: Get the preprocessed data.
Step19: Train the new model.
Step20: You can also adjust the MobileBERT model.
The model parameters you can adjust are
Step21: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs
Step22: Evaluate the newly retrained model with 20 training epochs.
Step23: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tflite-model-maker
Explanation: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories.The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews.
Prerequisites
Install the required packages
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker import TextClassifierDataLoader
Explanation: Import the required packages.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Get the data path
Download the dataset for this tutorial.
End of explanation
spec = model_spec.get('mobilebert_classifier')
Explanation: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.
End-to-End Workflow
This workflow consists of five steps as outlined below:
Step 1. Choose a model specification that represents a text classification model.
This tutorial uses MobileBERT as an example.
End of explanation
train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'dev.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=False)
Explanation: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec.
End of explanation
model = text_classifier.create(train_data, model_spec=spec)
Explanation: Step 3. Customize the TensorFlow model.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Step 4. Evaluate the model.
End of explanation
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
model.export(export_dir='mobilebert/', quantization_config=config)
Explanation: Step 5. Export as a TensorFlow Lite model.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation.
End of explanation
spec = model_spec.get('average_word_vec')
Explanation: You can also download the model using the left sidebar in Colab.
After executing the 5 steps above, you can further use the TensorFlow Lite model file and label file in on-device applications like in a text classification reference app.
The following sections walk through the example step by step to show more detail.
Choose a model_spec that Represents a Model for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications.
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks.
averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation.
This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Load Input Data Specific to an On-device ML App
The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark . It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes: positive and negative movie reviews.
Download the archived version of the dataset and extract it.
End of explanation
train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=True)
test_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'dev.tsv')),
text_column='sentence',
label_column='label',
model_spec=spec,
delimiter='\t',
is_training=False)
Explanation: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format:
sentence | label
--- | ---
it 's a charming and often affecting journey . | 1
unflinchingly bleak and desperate | 0
A positive review is labeled 1 and a negative review is labeled 0.
Use the TextClassifierDataLoader.from_csv method to load the data.
End of explanation
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to load.
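For instance, loading reviews laid out as one subfolder per class (e.g. pos/ and neg/ text files) might look roughly like the sketch below; the folder path is hypothetical and argument names can differ slightly between library versions:
python
folder_data = TextClassifierDataLoader.from_folder(
    'path/to/train',                  # hypothetical folder with one subdirectory per class
    model_spec=spec,
    class_labels=['pos', 'neg'],      # subfolder names to load as labels
    is_training=True)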
Customize the TensorFlow Model
Create a custom text classifier model based on the loaded data.
End of explanation
model.summary()
Explanation: Examine the detailed model structure.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Evaluate the Customized Model
Evaluate the result of the model and get the loss and accuracy of the model.
Evaluate the loss and accuracy in the test data.
End of explanation
model.export(export_dir='average_word_vec/')
Explanation: Export as a TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Save the text labels in a label file and vocabulary in a vocab file. The default TFLite filename is model.tflite, the default label filename is label.txt and the default vocab filename is vocab.
End of explanation
model.evaluate_tflite('average_word_vec/model.tflite', test_data)
Explanation: The TensorFlow Lite model file can be used in the text classification reference app by adding model.tflite to the assets directory. Do not forget to also change the filenames in the code.
You can evalute the tflite model with evaluate_tflite method.
End of explanation
new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32)
Explanation: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps:
Creates the model for the text classifier according to model_spec.
Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
This section covers advanced usage topics like adjusting the model and the training hyperparameters.
Adjust the model
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecModelSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
End of explanation
new_train_data = TextClassifierDataLoader.from_csv(
filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
delimiter='\t',
is_training=True)
Explanation: Get the preprocessed data.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec)
Explanation: Train the new model.
End of explanation
new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
Explanation: You can also adjust the MobileBERT model.
The model parameters you can adjust are:
seq_len: Length of the sequence to feed into the model.
initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean that specifies whether the pre-trained layer is trainable.
The training pipeline parameters you can adjust are:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The dropout rate.
learning_rate: The initial learning rate for the Adam optimizer.
tpu: TPU address to connect to.
For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.
End of explanation
model = text_classifier.create(train_data, model_spec=spec, epochs=20)
Explanation: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs: more epochs could achieve better accuracy, but may lead to overfitting.
batch_size: the number of samples to use in one training step.
For example, you can train with more epochs.
End of explanation
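# batch_size can be tuned in the same create() call; illustrative values only:
# model = text_classifier.create(train_data, model_spec=spec, epochs=20, batch_size=32)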
loss, accuracy = model.evaluate(test_data)
Explanation: Evaluate the newly retrained model with 20 training epochs.
End of explanation
spec = model_spec.get('bert_classifier')
Explanation: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier.
End of explanation |