Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
10,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST MLP
You should already have gone through the GettingStartedSequentialModels notebook -- if not you'll be lost here!
Step1: We're going to use some examples from https
Step2: Typically it's good practice to specify your parameters together
Step3: Now get the data.
It's nicely split up between training and testing data which we'll see can be useful.
We'll also see that this data treats the images as matrices (row is an observation, column is a pixel).
However, the input data doesn't need to be a matrix.
Step4: The tutorial then makes a few changes to the data.
First, reshape it -- to make sure that the rows and columns are what we expect them to be.
Then, divide by 255 so that the values go from 0 to 1.
Such scaling is typically a good idea.
It also casts the $X$ values to float32, which you don't have to worry about too much, but it makes computation a bit faster (at the expense of non-critical numerical detail).
Step5: As before we use the to_categorical() function
Step6: Now define our model
Step7: What is a "dropout layer"?
See Quora
Step8: Now let's run our model.
Note that by giving it a name (history = model.fit(...)) we'll be able to access some of its outputs.
We also use the validation_data argument to make it print out the model performance on validation data (which is not used for fitting the model/calculating the back-propagation).
The verbose=1 makes the model talk to us as it fits -- put 0 to make it run silently
Step9: Now we can score our model | Python Code:
import numpy as np
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
Explanation: MNIST MLP
You should already have gone through the GettingStartedSequentialModels notebook -- if not you'll be lost here!
End of explanation
import keras
from keras.datasets import mnist # load up the training data!
from keras.models import Sequential # our model
from keras.layers import Dense, Dropout # Dropout layers?!
from keras.optimizers import RMSprop # our optimizer
Explanation: We're going to use some examples from https://github.com/fchollet/keras/tree/master/examples.
There are tons more and you should check them out!
We'll use these examples to learn about some different sorts of layers, and strategies for our activation functions, loss functions, optimizers, etc.
Simple Deep NN on the MNIST Dataset
This example is from https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py.
It's a good one to start with because it's not much more complex than what we have seen, but uses real data!
End of explanation
batch_size = 128
num_classes = 10
epochs = 10 # this is too low
Explanation: Typically it's good practice to specify your parameters together
End of explanation
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
Explanation: Now get the data.
It's nicely split up between training and testing data which we'll see can be useful.
We'll also see that this data treats the images as matrices (row is an observation, column is a pixel).
However, the input data doesn't need to be a matrix.
End of explanation
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
Explanation: The tutorial then makes a few changes to the data.
First, reshape it -- to make sure that the rows and columns are what we expect them to be.
Then, divide by 255 so that the values go from 0 to 1.
Such scaling is typically a good idea.
It also casts the $X$ values to float32, which you don't have to worry about too much, but it makes computation a bit faster (at the expense of non-critical numerical detail).
End of explanation
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Explanation: As before we use the to_categorical() function
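For a quick illustration (a small sketch reusing the function and num_classes defined above), to_categorical() simply one-hot encodes integer labels:
# Sketch: to_categorical() turns integer class labels into one-hot vectors
print(keras.utils.to_categorical([0, 1, 2], num_classes))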
End of explanation
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax')) # remember y has 10 categories!
# comment this line if you don't have graphviz installed
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Now define our model
End of explanation
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
Explanation: What is a "dropout layer"?
See Quora:
Using “dropout”, you randomly deactivate certain units (neurons) in a layer with a certain probability $p$. So, if you set half of the activations of a layer to zero, the neural network won’t be able to rely on particular activations in a given feed-forward pass during training. As a consequence, the neural network will learn different, redundant representations; the network can’t rely on the particular neurons and the combination (or interaction) of these to be present. Another nice side effect is that training will be faster.
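As a rough illustration (a hedged NumPy sketch, not how Keras implements dropout internally), you can picture dropout as multiplying the activations by a random 0/1 mask and rescaling:
# Hypothetical NumPy sketch of (inverted) dropout -- separate from the Keras model above
rng = np.random.RandomState(0)
activations = np.array([0.5, 1.2, 0.3, 2.0])
p = 0.2  # drop probability, as in Dropout(0.2)
mask = rng.binomial(1, 1 - p, size=activations.shape)
print(activations * mask / (1 - p))  # rescaled so the expected activation is unchanged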
We can use the summary() method to look at our model instead of the plot -- this will work on your computer.
End of explanation
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
Explanation: Now let's run our model.
Note that by giving it a name (history = model.fit(...)) we'll be able to access some of its outputs.
We also use the validation_data argument to make it print out the model performance on validation data (which is not used for fitting the model/calculating the back-propagation).
The verbose=1 makes the model talk to us as it fits -- put 0 to make it run silently
End of explanation
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Now we can score our model
End of explanation |
10,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Bootcamp Final Project
Step1: Here we have displayed the most basic statistics for each of the MVP candidates, such as points, assists, steals, and rebounds per game. As we can see, Westbrook had some of the highest totals in these categories. Westbrook was the second player in NBA history to average double-digit numbers in points, rebounds, and assists. Many believe that this fact alone should award him the title of MVP. However, it is important to note that players who are renowned for their defense, such as Kawhi Leonard, aren't usually the leaders in these categories, so these statistics can paint an incomplete picture of how good a player is.
Step2: Player efficiency rating (PER) is a statistic meant to capture all aspects of a player's game to give a measure of overall performance. It is adjusted for pace and minutes and the league average is always 15.0 for comparison. Russell Westbrook leads all MVP candidates with a PER of 30.6. All candidates just about meet or surpass the historical MVP average of 27.42. However, PER is a little flawed as it is much more heavily weighted to offensive statistics. It only takes into account blocks and steals on the defensive side. This favors Westbrook and Harden, who put up stronger offensive numbers than Leonard and James. On the other hand, Westbrook and Harden are not known for being great defenders, while James, and especially Kawhi Leonard (who is a two-time Defensive Player of the Year winner), are two of the top defenders in the NBA.
Step3: According to Basketball Reference, Value over Replacement Player (VORP) provides an "estimate of each player's overall contribution to the team, measured vs. what a theoretical 'replacement player' would provide", where the 'replacement player' is defined as a player with a box plus/minus of -2. By this metric, Russell Westbrook contributes the most to his team, with a VORP of 12.4. Westbrook and James Harden are the only candidates with a VORP above the historical MVP average of 7.62.
Step4: Win shares is a measure of wins a player produces for his team. This statistic is calculated by taking into account how many wins a player has contributed to their team based off of their offensive play, as well as their defensive play.
We can see that this past season, none of the MVP candidates generated as many wins for their teams as the average MVP has, which is 16.13 games. James Harden was the closest with a win share value of just over 15. To understand how meaningful this statistic is, we also have to keep in mind the production that the rest of the MVP candidate's team is putting up. We have to ask: if a player is putting up great statistics, but so are other players on the same team, how much impact is that one player really having?
Step5: Here we try to compare the defensive production of each of the MVP candidates. Defensive Win Share is calculated by looking at how a player's respective defensive production translates to wins for a team. A player's estimated points allowed per 100 possessions, marginal defense added, as well as points added in a win are all taken into account to calculate this number. Because points added in a win is used in this calculation, even though it is supposed to be a defensive statistic, there is still some offensive bias. So players that score more points and win more games could get higher values for this statistic. Despite these possible flaws, we see that Leonard and Westbrook lead the way with DWS of 4.7 and 4.6, respectively. All players still fall short of the historical MVP average of 5.1.
Step6: Win Shares/48 Minutes is another statistic used to measure the wins attributed to a certain player. This statistic is slightly different because, instead of just taking into account how many games the team actually wins over the course of a season, this stat attempts to control for the actual minutes played by the player. Here we see that Kawhi Leonard, and not Harden, has the highest WS/48. We believe that this is because Leonard plays significantly fewer minutes than the other candidates. Leonard is the only player whose WS/48 of .264 surpasses the MVP average of .261.
Step7: Usage percentage is a measure of the percentage of team possessions a player uses per game. A higher percentage means a player handles the ball more per game. High usage percentages by one player can often lead to decreased overall efficiency for the team, as it means the offense is run more through one player. In this case, Russell Westbrook's usage percentage is considerably higher than the other candidates' and is the highest usage percentage in NBA history by about 3%. The other candidates are much closer to the historical average MVP usage percentage of 29.77%. | Python Code:
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics
import datetime as dt # date tools, used to note current date
import requests
from bs4 import BeautifulSoup
import urllib.request
from matplotlib.offsetbox import OffsetImage
%matplotlib inline
#per game statistics for MVP candidates
url = 'http://www.basketball-reference.com/play-index/pcm_finder.fcgi?request=1&sum=0&player_id1_hint=James+Harden&player_id1_select=James+Harden&player_id1=hardeja01&y1=2017&player_id2_hint=LeBron+James&player_id2_select=LeBron+James&y2=2017&player_id2=jamesle01&player_id3_hint=Kawhi+Leonard&player_id3_select=Kawhi+Leonard&y3=2017&player_id3=leonaka01&player_id4_hint=Russell+Westbrook&player_id4_select=Russell+Westbrook&y4=2017&player_id4=westbru01'
cl = requests.get(url)
soup = BeautifulSoup(cl.content, 'html.parser')
column_headers = [th.getText() for th in
soup.findAll('tr')[0].findAll('th')]
data_rows = soup.findAll('tr')[1:]
player_data = [[td.getText() for td in data_rows[i].findAll('td')]
for i in range(len(data_rows))]
df = pd.DataFrame(player_data, columns=column_headers[1:])
df = df.set_index('Player')
df = df.sort_index(ascending = True)
#getting advanced statistics for MVP candidates
url1 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&per_minute_base=36&per_poss_base=100&type=advanced&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=2017&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&c6mult=1.0&order_by=ws'
bl = requests.get(url1)
soup1 = BeautifulSoup(bl.content, 'html.parser')
column_headers_adv = [th.getText() for th in
soup1.findAll('tr')[1].findAll('th')]
data_rows_adv = soup1.findAll('tr')[2:8]
player_data_adv = [[td.getText() for td in data_rows_adv[i].findAll('td')]
for i in range(len(data_rows_adv))]
df_adv = pd.DataFrame(player_data_adv, columns=column_headers_adv[1:])
df_adv = df_adv.set_index('Player')
#drop other players from list
df_adv = df_adv.drop(['Rudy Gobert', 'Jimmy Butler'])
#sort players alphabetically
df_adv = df_adv.sort_index(ascending = True)
#drop duplicate and unnecessary columns
df_adv = df_adv.drop(['Season', 'Age', 'Tm', 'Lg', 'G', 'GS', 'MP'], axis=1)
#combined table of per game and advanced statistics
MVP = pd.concat([df, df_adv], axis=1)
MVP
#convert to proper dtypes
MVP = MVP.apply(pd.to_numeric, errors='ignore')
#get per game statistics for MVP winners since 1980
url2 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&type=per_game&per_minute_base=36&per_poss_base=100&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=1981&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&award=mvp&c6mult=1.0&order_by=season'
al = requests.get(url2)
soup2 = BeautifulSoup(al.content, 'html.parser')
column_headers_past = [th.getText() for th in
soup2.findAll('tr')[1].findAll('th')]
data_rows_past = soup2.findAll('tr')[2:]
player_data_past = [[td.getText() for td in data_rows_past[i].findAll('td')]
for i in range(len(data_rows_past))]
df_past = pd.DataFrame(player_data_past, columns=column_headers_past[1:])
df_past = df_past.set_index('Player')
df_past = df_past.drop(['Tm', 'Lg'], axis=1)
#drop row of null values, which was used to separate decades on the Basketball Reference website
df_past = df_past.dropna(axis=0)
#get advanced statistics for MVP winners since 1980
url3 = 'http://www.basketball-reference.com/play-index/psl_finder.cgi?request=1&match=single&per_minute_base=36&per_poss_base=100&type=advanced&season_start=1&season_end=-1&lg_id=NBA&age_min=0&age_max=99&is_playoffs=N&height_min=0&height_max=99&year_min=1981&year_max=2017&birth_country_is=Y&as_comp=gt&pos_is_g=Y&pos_is_gf=Y&pos_is_f=Y&pos_is_fg=Y&pos_is_fc=Y&pos_is_c=Y&pos_is_cf=Y&force%3Apos_is=1&award=mvp&c6mult=1.0&order_by=season'
dl = requests.get(url3)
soup3 = BeautifulSoup(dl.content, 'html.parser')
column_headers_past_adv = [th.getText() for th in
soup3.findAll('tr')[1].findAll('th')]
data_rows_past_adv = soup3.findAll('tr')[2:]
player_data_past_adv = [[td.getText() for td in data_rows_past_adv[i].findAll('td')]
for i in range(len(data_rows_past_adv))]
df_past_adv = pd.DataFrame(player_data_past_adv, columns=column_headers_past_adv[1:])
df_past_adv = df_past_adv.set_index('Player')
#drop duplicate and unnecessary columns
df_past_adv = df_past_adv.drop(['Age', 'Tm', 'Lg', 'Season', 'G', 'GS', 'MP'], axis=1)
#drop row of null values
df_past_adv = df_past_adv.dropna(axis=0)
historical = pd.concat([df_past, df_past_adv], axis=1)
historical
#convert to proper data types
historical = historical.apply(pd.to_numeric, errors='ignore')
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (12,12), sharex=True, sharey=False)
MVP['PTS'].plot.bar(ax=axes[0,0], color = ['b', 'b', 'b', 'r']); axes[0,0].set_title('Points per Game')
MVP['eFG%'].plot.bar(ax=axes[1,0], color = ['b', 'b', 'r', 'b']); axes[1,0].set_title('Effective Field Goal Percentage')
MVP['AST'].plot.bar(ax=axes[0,1], color = ['r', 'b', 'b', 'b']); axes[0,1].set_title('Assists per Game')
MVP['TRB'].plot.bar(ax=axes[1,1], color = ['b', 'b', 'b', 'r']); axes[1,1].set_title('Rebounds per Game')
Explanation: Data Bootcamp Final Project: Who deserves to be NBA MVP?
Vincent Booth & Sam Praveen
Background:
The topic of which NBA player is most deserving of the MVP is always a contentious one. Fans will endlessly debate which of their favorite players has had a more tangible impact on their respective teams as well as who has the better stat lines. We recognize that statistics such as points, assists, and rebounds do not tell enough of a story or give enough information to definitively decide who is more deserving of the award.
Process:
For simplicity's sake, we will focus on four players who we believe have earned the right to be in the conversation for MVP: James Harden, LeBron James, Kawhi Leonard, and Russell Westbrook. We will use the advanced statistics that we gather from sources such as espn.com and basketball-reference.com to compare the performance of these four players to see who is in fact more valuable. In addition, we will go back to 1980 and take the stats for each MVP from that season onwards and try to look for any patterns or trends that will be able to serve as predictors for who will win the award this season.
James Harden
James Harden is a guard for the Houston Rockets. He joined the Rockets in 2012 and has been a leader for the team ever since. He is known as a prolific scorer, able to score in the paint and from three-point range alike. He led the Rockets to a 3rd place finish in the Western Conference with a record of 55-27, and they are currently in the conference semi-finals against the San Antonio Spurs.
Kawhi Leonard
Kawhi Leonard is a forward for the San Antonio Spurs. He was drafted by San Antonio in 2011 and broke out as a star after the 2014 Finals, in which he led his team to victory over the LeBron James-led Miami Heat. Since then Kawhi has been known for being a very complete, consistent, and humble player - a reflection of the team and coach he plays for. Leonard led the Spurs to a 2nd place finish in the West with a record of 61-21 and is currently in the conference semis against the Houston Rockets.
LeBron James
LeBron James is a forward for the Cleveland Cavaliers. He was drafted by the Cavaliers in 2003, and after a stint with the Miami Heat, returned to Cleveland in 2014. James excels in nearly every aspect of the game, as he has already put together a Hall of Fame career and put his name in the conversation for greatest player of all time, although there is some debate about that. James led the Cavaliers to a 2nd place finish in the Eastern Conference, and is currently in the conference semis against the Toronto Raptors.
Russell Westbrook
Russell Westbrook is a guard for the Oklahoma City Thunder. He was drafted by OKC in 2008, and became the sole leader of the team this season when Kevin Durant left for the Golden State Warriors. Westbrook is known for his athleticism as well as his passion on the court. Westbrook has taken an expanded role on the court for the Thunder this year, and led them in almost every statistical category. He led OKC to a 6th place finish in the West with a record of 47-35. They were eliminated in the first round in 5 games by the Houston Rockets.
End of explanation
import seaborn as sns
fig, ax = plt.subplots()
MVP['PER'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('PER')
ax.set_xlabel('')
ax.axhline(historical['PER'].mean(), color = 'k', linestyle = '--', alpha = .4)
Explanation: Here we have displayed the most basic statistics for each of the MVP candidates, such as points, assists, steals, and rebounds per game. As we can see, Westbrook had some of the highest totals in these categories. Westbrook was the second player in NBA history to average double-digit numbers in points, rebounds, and assists. Many believe that this fact alone should award him the title of MVP. However, it is important to note that players who are renowned for their defense, such as Kawhi Leonard, aren't usually the leaders in these categories, so these statistics can paint an incomplete picture of how good a player is.
End of explanation
fig, ax = plt.subplots()
MVP['VORP'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('Value Over Replacement Player')
ax.set_xlabel('')
ax.axhline(historical['VORP'].mean(), color = 'k', linestyle = '--', alpha = .4)
Explanation: Player efficiency rating (PER) is a statistic meant to capture all aspects of a player's game to give a measure of overall performance. It is adjusted for pace and minutes and the league average is always 15.0 for comparison. Russell Westbrook leads all MVP candidates with a PER of 30.6. All candidates just about meet or surpass the historical MVP average of 27.42. However, PER is a little flawed as it is much more heavily weighted to offensive statistics. It only takes into account blocks and steals on the defensive side. This favors Westbrook and Harden, who put up stronger offensive numbers than Leonard and James. On the other hand, Westbrook and Harden are not known for being great defenders, while James, and especially Kawhi Leonard (who is a two-time Defensive Player of the Year winner), are two of the top defenders in the NBA.
End of explanation
fig, ax = plt.subplots()
MVP['WS'].plot(ax=ax, kind = 'bar', color = ['r', 'b', 'b', 'b'])
ax.set_ylabel('Win Shares')
ax.set_xlabel('')
ax.axhline(historical['WS'].mean(), color = 'k', linestyle = '--', alpha = .4)
Explanation: According to Basketball Reference, Value over Replacement Player (VORP) provides an "estimate of each player's overall contribution to the team, measured vs. what a theoretical 'replacement player' would provide", where the 'replacement player' is defined as a player with a box plus/minus of -2. By this metric, Russell Westbrook contributes the most to his team, with a VORP of 12.4. Westbrook and James Harden are the only candidates with a VORP above the historical MVP average of 7.62.
End of explanation
fig, ax = plt.subplots()
MVP['DWS'].plot(ax=ax, kind = 'bar', color = ['b', 'r', 'b', 'b'])
ax.set_ylabel('Defensive Win Share')
ax.set_xlabel('')
ax.axhline(historical['DWS'].mean(), color = 'k', linestyle = '--', alpha = .4)
Explanation: Win shares is a measure of wins a player produces for his team. This statistic is calculated by taking into account how many wins a player has contributed to their team based off of their offensive play, as well as their defensive play.
We can see that this past season, none of the MVP candidates generated as many wins for their teams as the average MVP has, which is 16.13 games. James Harden was the closest with a win share value of just over 15. To understand how meaningful this statistic is, we also have to keep in mind the production that the rest of the MVP candidate's team is putting up. We have to ask: if a player is putting up great statistics, but so are other players on the same team, how much impact is that one player really having?
End of explanation
fig, ax = plt.subplots()
MVP['WS/48'].plot(ax=ax, kind = 'bar', color = ['b', 'r', 'b', 'b'])
ax.set_ylabel('Win Share/48 Minutes')
ax.set_xlabel('')
ax.axhline(historical['WS/48'].mean(), color = 'k', linestyle = '--', alpha = .4)
print(historical['WS/48'].mean())
Explanation: Here we try to compare the defensive production of each of the MVP candidates. Defensive Win Share is calculated by looking at how a player's respective defensive production translates to wins for a team. A player's estimated points allowed per 100 possessions, marginal defense added, as well as points added in a win are all taken into account to calculate this number. Because points added in a win is used in this calculation, even though it is supposed to be a defensive statistic, there is still some offensive bias. So players that score more points and win more games could get higher values for this statistic. Despite these possible flaws, we see that Leonard and Westbrook lead the way with DWS of 4.7 and 4.6, respectively. All players still fall short of the historical MVP average of 5.1.
End of explanation
fig, ax = plt.subplots()
MVP['USG%'].plot(ax=ax, kind = 'bar', color = ['b', 'b', 'b', 'r'])
ax.set_ylabel('Usage Percentage')
ax.set_xlabel('')
ax.axhline(historical['USG%'].mean(), color = 'k', linestyle = '--', alpha = .4)
print(historical['USG%'].mean())
Explanation: Win Shares/48 Minutes is another statistic used to measure the wins attributed to a certain player. This statistic is slightly different because, instead of just taking into account how many games the team actually wins over the course of a season, this stat attempts to control for the actual minutes played by the player. Here we see that Kawhi Leonard, and not Harden, has the highest WS/48. We believe that this is because Leonard plays significantly fewer minutes than the other candidates. Leonard is the only player whose WS/48 of .264 surpasses the MVP average of .261.
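As a rough worked example with hypothetical numbers (not taken from the tables above), two players with the same win shares but different minutes end up with different WS/48:
# Hypothetical numbers, purely illustrative of the WS/48 scaling
ws_a, minutes_a = 13.0, 2500  # fewer minutes played
ws_b, minutes_b = 13.0, 2950  # more minutes played
print(ws_a / minutes_a * 48, ws_b / minutes_b * 48)  # ~0.250 vs ~0.212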
End of explanation
url4 ='http://www.basketball-reference.com/play-index/tsl_finder.cgi?request=1&match=single&type=team_totals&lg_id=NBA&year_min=2017&year_max=2017&order_by=wins'
e1 = requests.get(url4)
soup4 = BeautifulSoup(e1.content, 'html.parser')
column_headers_past_adv = [th.getText() for th in
soup4.findAll('tr')[1].findAll('th')]
data_rows_past_adv = soup4.findAll('tr')[2:]
column_headers_team = [th.getText() for th in
soup4.findAll('tr')[1].findAll('th')]
data_rows_team = soup4.findAll('tr')[3:12]
team_wins = [[td.getText() for td in data_rows_team[i].findAll('td')]
for i in range(len(data_rows_team))]
df_team = pd.DataFrame(team_wins, columns=column_headers_team[1:])
df_team = df_team.set_index('Tm')
df_team =df_team.drop(['TOR*','UTA*','LAC*','WAS*'])
Team =df_team
Team
Team['W']['SAS*']
Hou_wins = int((Team['W']['HOU*']))
Harden_Wins = int(MVP['WS']['James Harden'])
Harden_winpct = Harden_Wins/Hou_wins
Harden_nonwin = 1 - Harden_winpct
SAS_wins = int((Team['W']['SAS*']))
Leo_Wins = int(MVP['WS']['Kawhi Leonard'])
Leo_winpct = Leo_Wins/SAS_wins
Leo_nonwin = 1 - Leo_winpct
Cle_wins = int((Team['W']['CLE*']))
LeBron_Wins = int(MVP['WS']['LeBron James'])
LeBron_winpct = LeBron_Wins/Cle_wins
LeBron_nonwin = 1 - LeBron_winpct
OKC_wins = int((Team['W']['OKC*']))
Westbrook_Wins = int(MVP['WS']['Russell Westbrook'])
Westbrook_winpct = Westbrook_Wins/OKC_wins
Westbrook_nonwin = 1 - Westbrook_winpct
df1 = ([Harden_winpct, Leo_winpct, LeBron_winpct, Westbrook_winpct])
df2 = ([Harden_nonwin, Leo_nonwin, LeBron_nonwin, Westbrook_nonwin])
df3 = pd.DataFrame(df1)
df4 = pd.DataFrame(df2)
Win_Share_Per = pd.concat([df3, df4], axis =1)
Win_Share_Per.columns = ['% Wins Accounted For', 'Rest of Team']
Win_Share_Per = Win_Share_Per.T
Win_Share_Per.columns = ['James Harden', 'Kawhi Leonard', 'LeBron James', 'Russell Westbrook']
pic1 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/201935.png", "201935.png")
pic2 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/202695.png", "202695.png")
pic3 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/2544.png", "2544.png")
pic4 = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/201566.png", "201566.png")
pic5 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/2/28/Houston_Rockets.svg/410px-Houston_Rockets.svg.png", "410px-Houston_Rockets.svg.png")
pic6 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/a/a2/San_Antonio_Spurs.svg/512px-San_Antonio_Spurs.svg.png", "512px-San_Antonio_Spurs.svg.png")
pic7 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/f/f7/Cleveland_Cavaliers_2010.svg/295px-Cleveland_Cavaliers_2010.svg.png", "295px-Cleveland_Cavaliers_2010.svg.png")
pic8 = urllib.request.urlretrieve("https://upload.wikimedia.org/wikipedia/en/thumb/5/5d/Oklahoma_City_Thunder.svg/250px-Oklahoma_City_Thunder.svg.png", "250px-Oklahoma_City_Thunder.svg.png")
harden_pic = plt.imread(pic1[0])
leonard_pic = plt.imread(pic2[0])
james_pic = plt.imread(pic3[0])
westbrook_pic = plt.imread(pic4[0])
rockets_pic = plt.imread(pic5[0])
spurs_pic = plt.imread(pic6[0])
cavaliers_pic = plt.imread(pic7[0])
thunder_pic = plt.imread(pic8[0])
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (12,12))
Win_Share_Per['James Harden'].plot.pie(ax=axes[0,0], colors = ['r', 'yellow'])
Win_Share_Per['Kawhi Leonard'].plot.pie(ax=axes[0,1], colors = ['black', 'silver'])
Win_Share_Per['LeBron James'].plot.pie(ax=axes[1,0], colors = ['maroon', 'navy'])
Win_Share_Per['Russell Westbrook'].plot.pie(ax=axes[1,1], colors = ['blue', 'orangered'])
img1 = OffsetImage(harden_pic, zoom=0.4)
img1.set_offset((290,800))
a = axes[0,0].add_artist(img1)
a.set_zorder(10)
img2 = OffsetImage(leonard_pic, zoom=0.4)
img2.set_offset((800,800))
b= axes[0,1].add_artist(img2)
b.set_zorder(10)
img3 = OffsetImage(james_pic, zoom=0.4)
img3.set_offset((290,290))
c = axes[1,0].add_artist(img3)
c.set_zorder(10)
img4 = OffsetImage(westbrook_pic, zoom=0.4)
img4.set_offset((790,290))
d = axes[1,1].add_artist(img4)
d.set_zorder(10)
img5 = OffsetImage(rockets_pic, zoom=0.4)
img5.set_offset((150,620))
e = axes[1,1].add_artist(img5)
e.set_zorder(10)
img6 = OffsetImage(spurs_pic, zoom=0.3)
img6.set_offset((650,620))
f = axes[1,1].add_artist(img6)
f.set_zorder(10)
img7 = OffsetImage(cavaliers_pic, zoom=0.4)
img7.set_offset((150,130))
g = axes[1,1].add_artist(img7)
g.set_zorder(10)
img8 = OffsetImage(thunder_pic, zoom=0.4)
img8.set_offset((650,130))
h = axes[1,1].add_artist(img8)
h.set_zorder(10)
plt.show()
Explanation: Usage percentage is a measure of the percentage of team possessions a player uses per game. A higher percentage means a player handles the ball more per game. High usage percentages by one player can often lead to decreased overall efficiency for the team, as it means the offense is run more through one player. In this case, Russell Westbrook's usage percentage is considerably higher than the other candidates' and is the highest usage percentage in NBA history by about 3%. The other candidates are much closer to the historical average MVP usage percentage of 29.77%.
End of explanation |
10,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to PyTorch
PyTorch is a Python package for performing tensor computation, automatic differentiation, and dynamically defining neural networks. It makes it particularly easy to accelerate model training with a GPU. In recent years it has gained a large following in the NLP community.
Installing PyTorch
Instructions for installing PyTorch can be found on the home-page of their website
Step1: All of the basic arithmetic operations are supported.
Step2: Indexing/slicing also behaves the same.
Step3: Resizing and reshaping tensors is also quite simple
Step4: Changing a Tensor to and from an array is also quite simple
Step5: Moving Tensors to the GPU is also quite simple
Step6: Automatic Differentiation
https
Step7: If we look at the $x$ and $y$ values, you can see that the perfect values for our parameters are $m$=-1 and $b$=1
To obtain the gradient of the $L$ w.r.t $m$ and $b$ you need only run
Step10: Training Models
While automatic differentiation is in itself a useful feature, it can be quite tedious to keep track of all of the different parameters and gradients for more complicated models. In order to make life simple, PyTorch defines a torch.nn.Module class which handles all of these details for you. To paraphrase the PyTorch documentation, this is the base class for all neural network modules, and whenever you define a model it should be a subclass of this class.
There are two main functions you need to implement for a Module class
Step11: To train this model we need to pick an optimizer such as SGD, AdaDelta, ADAM, etc. There are many options in torch.optim. When initializing an optimizer, the first argument will be the collection of variables you want optimized. To obtain a list of all of the trainable parameters of a model you can call the nn.Module.parameters() method. For example, the following code initalizes a SGD optimizer for the model defined above
Step12: Training is done in a loop. The general structure is
Step13: Observe that the final parameters are what we expect
Step14: CASE STUDY
Step21: We will now be introducing three new components which are vital to training (NLP) models
Step22: Let's create a vocabulary with a small amount of words
Step25: Dataset
Next, we need a way to efficiently read in the data file and to process it into tensors. PyTorch provides an easy way to do this using the torch.utils.data.Dataset class. We will be creating our own class which inherits from this class.
Helpful link
Step26: Now let's create Dataset objects for our training and validation sets!
A key step here is creating the Vocabulary for these datasets.
We will use the list of words in the training set to intialize a Vocabulary object over the input words.
We will also use list of tags to intialize a Vocabulary over the tags.
Step27: Let's print out one data point of the tensorized data and see what it looks like
Step28: DataLoader
At this point our data is in a tensor, and we can create context windows using only PyTorch operations.
Now we need a way to generate batches of data for training and evaluation.
To do this, we will wrap our Dataset objects in a torch.utils.data.DataLoader object, which will automatically batch datapoints.
Step29: Now let's do one iteration over our training set to see what a batch looks like
Step30: Model
Now that we can read in the data, it is time to build our model.
We will build a very simple LSTM based tagger! Note that this is pretty similar to the code in simple_tagger.py in your homework, but with a lot of things hardcoded.
Useful links
Step31: Training
The training script essentially follows the same pattern that we used for the linear model above. However we have also added an evaluation step, and code for saving model checkpoints.
Step32: Loading Trained Models
Loading a pretrained model can be done easily. To learn more about saving/loading models see https
Step33: Feed in your own sentences! | Python Code:
import numpy as np
import torch
# Create a 3 x 2 array
np.ndarray((3, 2))
# Create a 3 x 2 Tensor
torch.Tensor(3, 2)
Explanation: Introduction to PyTorch
PyTorch is a Python package for performing tensor computation, automatic differentiation, and dynamically defining neural networks. It makes it particularly easy to accelerate model training with a GPU. In recent years it has gained a large following in the NLP community.
Installing PyTorch
Instructions for installing PyTorch can be found on the home-page of their website: http://pytorch.org/. The PyTorch developers recommended you use the conda package manager to install the library (in my experience pip works fine as well).
One thing to be aware of is that the package name will be different depending on whether or not you intend on using a GPU. If you do plan on using a GPU, then you will need to install CUDA and CUDNN before installing PyTorch. Detailed instructions can be found at NVIDIA's website: https://docs.nvidia.com/cuda/. The following versions of CUDA are supported: 7.5, 8, and 9.
PyTorch Basics
The PyTorch API is designed to very closely resemble NumPy. The central object for performing computation is the Tensor, which is PyTorch's version of NumPy's array.
End of explanation
a = torch.Tensor([1,2])
b = torch.Tensor([3,4])
print('a + b:', a + b)
print('a - b:', a - b)
print('a * b:', a * b)
print('a / b:', a / b)
Explanation: All of the basic arithmetic operations are supported.
End of explanation
a = torch.randint(0, 10, (4, 4))
print('a:', a, '\n')
# Slice using ranges
print('a[2:, :]', a[2:, :], '\n')
# Can count backwards using negative indices
print('a[:, -1]', a[:, -1])
Explanation: Indexing/slicing also behaves the same.
End of explanation
print('Turn tensor into a 1 dimensional array:')
a = torch.randint(0, 10, (3, 3))
print(f'Before size: {a.size()}')
print(a, '\n')
a = a.view(1, 9)
print(f'After size: {a.size()}')
print(a)
Explanation: Resizing and reshaping tensors is also quite simple
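As a small aside (a sketch), passing -1 to view() asks PyTorch to infer that dimension from the others:
# Sketch: -1 lets PyTorch infer one dimension
c = torch.randint(0, 10, (3, 3))
print(c.view(-1, 9).size())  # torch.Size([1, 9])
print(c.view(-1).size())     # torch.Size([9])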
End of explanation
# Tensor from array
arr = np.array([1,2])
torch.from_numpy(arr)
# Tensor to array
t = torch.Tensor([1, 2])
t.numpy()
Explanation: Changing a Tensor to and from an array is also quite simple:
End of explanation
t = torch.Tensor([1, 2]) # on CPU
if torch.cuda.is_available():
t = t.cuda() # on GPU
Explanation: Moving Tensors to the GPU is also quite simple:
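An equivalent, device-agnostic pattern (a sketch of the same idea) uses torch.device and .to():
# Sketch: device-agnostic version of the snippet above
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = torch.Tensor([1, 2]).to(device)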
End of explanation
# Data
x = torch.tensor([1., 2, 3, 4]) # requires_grad = False by default
y = torch.tensor([0., -1, -2, -3])
# Initialize parameters
m = torch.rand(1, requires_grad=True)
b = torch.rand(1, requires_grad=True)
# Define regression function
y_hat = m * x + b
print(y_hat)
# Define loss
loss = torch.mean(0.5 * (y - y_hat)**2)
loss.backward() # Backprop the gradients of the loss w.r.t other variables
Explanation: Automatic Differentiation
https://pytorch.org/tutorials/beginner/basics/autograd_tutorial.html
Derivatives and gradients are critical to a large number of machine learning algorithms. One of the key benefits of PyTorch is that these can be computed automatically.
We'll demonstrate this using the following example. Suppose we have some data $x$ and $y$, and want to fit a model:
$$ \hat{y} = mx + b $$
by minimizing the loss function:
$$ L(y, \hat{y}) = \frac{1}{2}(y - \hat{y})^2 $$
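As a sanity check (a sketch using the tensors defined above), the same gradients can be written out by hand from the chain rule; they should match the m.grad and b.grad values printed below:
# Analytic gradients of L = mean(0.5 * (y - y_hat)**2), for comparison with autograd
with torch.no_grad():
    dL_dm = torch.mean((y_hat - y) * x)
    dL_db = torch.mean(y_hat - y)
print(dL_dm.item(), dL_db.item())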
End of explanation
# Gradients
print('Gradients:')
print('dL/dm: %0.4f' % m.grad)
print('dL/db: %0.4f' % b.grad)
Explanation: If we look at the $x$ and $y$ values, you can see that the perfect values for our parameters are $m$=-1 and $b$=1
To obtain the gradient of the $L$ w.r.t $m$ and $b$ you need only run:
End of explanation
import torch.nn as nn
class LinearModel(nn.Module):
def __init__(self):
This method is called when you instantiate a new LinearModel object.
You should use it to define the parameters/layers of your model.
# Whenever you define a new nn.Module you should start the __init__()
# method with the following line. Remember to replace `LinearModel`
# with whatever you are calling your model.
super(LinearModel, self).__init__()
# Now we define the parameters used by the model.
self.m = torch.nn.Parameter(torch.rand(1))
self.b = torch.nn.Parameter(torch.rand(1))
def forward(self, x):
This method computes the output of the model.
Args:
x: The input data.
return self.m * x + self.b
# Initialize model
model = LinearModel()
# Example forward pass. Note that we use model(x) not model.forward(x) !!!
y_hat = model(x)
print(x, y_hat)
Explanation: Training Models
While automatic differentiation is in itself a useful feature, it can be quite tedious to keep track of all of the different parameters and gradients for more complicated models. In order to make life simple, PyTorch defines a torch.nn.Module class which handles all of these details for you. To paraphrase the PyTorch documentation, this is the base class for all neural network modules, and whenever you define a model it should be a subclass of this class.
There are two main functions you need to implement for a Module class:
- __init__(): Function first called when the object is initialized. Used to set up parameters, layers, etc.
- forward(): When the model is called, this forwards the inputs through the model.
Here is an example implementation of the simple linear model given above:
End of explanation
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
Explanation: To train this model we need to pick an optimizer such as SGD, AdaDelta, ADAM, etc. There are many options in torch.optim. When initializing an optimizer, the first argument will be the collection of variables you want optimized. To obtain a list of all of the trainable parameters of a model you can call the nn.Module.parameters() method. For example, the following code initalizes a SGD optimizer for the model defined above:
End of explanation
import time
for i in range(5001):
optimizer.zero_grad()
y_hat = model(x) # calling model() calls the forward function
loss = torch.mean(0.5 * (y - y_hat)**2)
loss.backward()
optimizer.step()
if i % 1000 == 0:
time.sleep(1) # DO NOT INCLUDE THIS IN YOUR CODE !!! Only for demo.
print(f'Iteration {i} - Loss: {loss.item():0.6f}')
Explanation: Training is done in a loop. The general structure is:
Clear the gradients.
Evaluate the model.
Calculate the loss.
Backpropagate.
Perform an optimization step.
(Once in a while) Print monitoring metrics.
For example, we can train our linear model by running:
End of explanation
print(model(x), y)
print('Final parameters:')
print('m: %0.2f' % model.m)
print('b: %0.2f' % model.b)
Explanation: Observe that the final parameters are what we expect:
End of explanation
print("First data point:")
with open('twitter_train.pos', 'r') as f:
for line in f:
line = line.strip()
print('\t', line)
if line == '':
break
Explanation: CASE STUDY: POS Tagging!
Now let's dive into an example that is more relevant to NLP and to your HW3: part-of-speech tagging! We will be building up code to the point where you will be able to process the POS data into tensors, then train a simple model on it.
The code we are building up to forms the basis of the code in the homework assignment.
To start, we'll need some data to train and evaluate on. First download the train and dev POS data twitter_train.pos and twitter_dev.pos into the same directory as this notebook.
End of explanation
class Vocabulary():
Object holding vocabulary and mappings
Args:
word_list: ``list`` A list of words. Words assumed to be unique.
add_unk_token: ``bool` Whether to create an token for unknown tokens.
def __init__(self, word_list, add_unk_token=False):
# create special tokens for padding and unknown words
self.pad_token = '<pad>'
self.unk_token = '<unk>' if add_unk_token else None
self.special_tokens = [self.pad_token]
if self.unk_token:
self.special_tokens += [self.unk_token]
self.word_list = word_list
# maps from the token ID to the token
self.id_to_token = self.word_list + self.special_tokens
# maps from the token to its token ID
self.token_to_id = {token: id for id, token in
enumerate(self.id_to_token)}
def __len__(self):
Returns size of vocabulary
return len(self.token_to_id)
@property
def pad_token_id(self):
return self.map_token_to_id(self.pad_token)
def map_token_to_id(self, token: str):
Maps a single token to its token ID
if token not in self.token_to_id:
token = self.unk_token
return self.token_to_id[token]
def map_id_to_token(self, id: int):
Maps a single token ID to its token
return self.id_to_token[id]
def map_tokens_to_ids(self, tokens: list, max_length: int = None):
Maps a list of tokens to a list of token IDs
# truncate extra tokens and pad to `max_length`
if max_length:
tokens = tokens[:max_length]
tokens = tokens + [self.pad_token]*(max_length-len(tokens))
return [self.map_token_to_id(token) for token in tokens]
def map_ids_to_tokens(self, ids: list, filter_padding=True):
Maps a list of token IDs to a list of token
tokens = [self.map_id_to_token(id) for id in ids]
if filter_padding:
tokens = [t for t in tokens if t != self.pad_token]
return tokens
Explanation: We will now be introducing three new components which are vital to training (NLP) models:
1. a Vocabulary object which converts from tokens/labels to integers. This part should also be able to handle padding so that batches can be easily created.
2. a Dataset object which takes in the data file and produces data tensors
3. a DataLoader object which takes data tensors from Dataset and batches them
Vocabulary
Next, we need to get our data into Python and in a form that is usable by PyTorch. For text data this typically entails building a Vocabulary of all of the words, then mapping words to integers corresponding to their place in the sorted vocabulary. This can be done as follows:
End of explanation
word_list = ['i', 'like', 'dogs', '!']
vocab = Vocabulary(word_list, add_unk_token=True)
print('map from the token "i" to its token ID, then back again')
token_id = vocab.map_token_to_id('i')
print(token_id)
print(vocab.map_id_to_token(token_id))
print('what about a token not in our vocabulary like "you"?')
token_id = vocab.map_token_to_id('you')
print(token_id)
print(vocab.map_id_to_token(token_id))
token_ids = vocab.map_tokens_to_ids(['i', 'like', 'dogs', '!'], max_length=10)
print("mapping a sequence of tokens: \'['i', 'like', 'dogs', '!']\'")
print(token_ids)
print(vocab.map_ids_to_tokens(token_ids, filter_padding=False))
Explanation: Let's create a vocabulary with a small amount of words
End of explanation
class TwitterPOSDataset(torch.utils.data.Dataset):
def __init__(self, data_path, max_length=30):
self._max_length = max_length
self._dataset = []
# read the dataset file, extracting tokens and tags
with open(data_path, 'r') as f:
tokens, tags = [], []
for line in f:
elements = line.strip().split('\t')
# empty line means end of sentence
if elements == [""]:
self._dataset.append({'tokens': tokens, 'tags': tags})
tokens, tags = [], []
else:
tokens.append(elements[0].lower())
tags.append(elements[1])
# intiailize an empty vocabulary
self.token_vocab = None
self.tag_vocab = None
def __len__(self):
return len(self._dataset)
def __getitem__(self, item: int):
# get the sample corresponding to the index
instance = self._dataset[item]
# check the vocabulary has been set
assert self.token_vocab is not None
assert self.tag_vocab is not None
# Convert inputs to tensors, then return
return self.tensorize(instance['tokens'], instance['tags'], self._max_length)
def tensorize(self, tokens, tags=None, max_length=None):
# map the tokens and tags into their ID form
token_ids = self.token_vocab.map_tokens_to_ids(tokens, max_length)
tensor_dict = {'token_ids': torch.LongTensor(token_ids)}
if tags:
tag_ids = self.tag_vocab.map_tokens_to_ids(tags, max_length)
tensor_dict['tag_ids'] = torch.LongTensor(tag_ids)
return tensor_dict
def get_tokens_list(self):
Returns set of tokens in dataset
tokens = [token for d in self._dataset for token in d['tokens']]
return sorted(set(tokens))
def get_tags_list(self):
Returns set of tags in dataset
tags = [tag for d in self._dataset for tag in d['tags']]
return sorted(set(tags))
def set_vocab(self, token_vocab: Vocabulary, tag_vocab: Vocabulary):
self.token_vocab = token_vocab
self.tag_vocab = tag_vocab
Explanation: Dataset
Next, we need a way to efficiently read in the data file and to process it into tensors. PyTorch provides an easy way to do this using the torch.utils.data.Dataset class. We will be creating our own class which inherits from this class.
Helpful link: https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
A custom Dataset class must implement three functions:
$init$: The init functions is run once when instantisting the Dataset object.
$len$: The len function returns the number of data points in our dataset.
$getitem$. The getitem function returns a sample from the dataset give the index of the sample. The output of this part should be a dictionary of (mostly) PyTorch tensors.
End of explanation
train_dataset = TwitterPOSDataset('twitter_train.pos')
dev_dataset = TwitterPOSDataset('twitter_dev.pos')
# Get list of tokens and tags seen in training set and use to create Vocabulary
token_list = train_dataset.get_tokens_list()
tag_list = train_dataset.get_tags_list()
token_vocab = Vocabulary(token_list, add_unk_token=True)
tag_vocab = Vocabulary(tag_list)
# Update the train/dev set with vocabulary. Notice we created the vocabulary using the training set
train_dataset.set_vocab(token_vocab, tag_vocab)
dev_dataset.set_vocab(token_vocab, tag_vocab)
print(f'Size of training set: {len(train_dataset)}')
print(f'Size of validation set: {len(dev_dataset)}')
Explanation: Now let's create Dataset objects for our training and validation sets!
A key step here is creating the Vocabulary for these datasets.
We will use the list of words in the training set to intialize a Vocabulary object over the input words.
We will also use list of tags to intialize a Vocabulary over the tags.
End of explanation
instance = train_dataset[2]
print(instance)
tokens = train_dataset.token_vocab.map_ids_to_tokens(instance['token_ids'])
tags = train_dataset.tag_vocab.map_ids_to_tokens(instance['tag_ids'])
print()
print(f'Tokens: {tokens}')
print(f'Tags: {tags}')
Explanation: Let's print out one data point of the tensorized data and see what it looks like
End of explanation
batch_size = 3
print(f'Setting batch_size to be {batch_size}')
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size)
dev_dataloader = torch.utils.data.DataLoader(dev_dataset, batch_size)
Explanation: DataLoader
At this point our data is in a tensor, and we can create context windows using only PyTorch operations.
Now we need a way to generate batches of data for training and evaluation.
To do this, we will wrap our Dataset objects in a torch.utils.data.DataLoader object, which will automatically batch datapoints.
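As an aside on the context-window remark above, one way to build windows with pure tensor ops (a hypothetical sketch, not used elsewhere in this notebook) is Tensor.unfold:
# Hypothetical sketch: sliding windows of size 3 over a sequence of token IDs
example_ids = torch.arange(8)
print(example_ids.unfold(0, 3, 1))  # shape (6, 3), each row is one window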
End of explanation
for batch in train_dataloader:
print(batch, '\n')
print(f'Size of tag_ids: {batch["tag_ids"].size()}')
break
Explanation: Now let's do one iteration over our training set to see what a batch looks like:
End of explanation
class SimpleTagger(torch.nn.Module):
def __init__(self, token_vocab, tag_vocab):
super(SimpleTagger, self).__init__()
self.token_vocab = token_vocab
self.tag_vocab = tag_vocab
self.num_tags = len(self.tag_vocab)
# Initialize random embeddings of size 50 for each word in your token vocabulary
self._embeddings = torch.nn.Embedding(len(token_vocab), 50)
# Initialize a single-layer bidirectional LSTM encoder
self._encoder = torch.nn.LSTM(input_size=50, hidden_size=25, num_layers=1, bidirectional=True)
# Initialize a Linear layer which projects from the hidden state size to the number of tags
self._tag_projection = torch.nn.Linear(in_features=50, out_features=len(self.tag_vocab))
# Loss will be a Cross Entropy Loss over the tags (except the padding token)
self.loss = torch.nn.CrossEntropyLoss(ignore_index=self.tag_vocab.pad_token_id)
def forward(self, token_ids, tag_ids=None):
# Create mask over all the positions where the input is padded
mask = token_ids != self.token_vocab.pad_token_id
# Embed Inputs
embeddings = self._embeddings(token_ids).permute(1, 0, 2)
# Feed embeddings through LSTM
encoder_outputs = self._encoder(embeddings)[0].permute(1, 0, 2)
# Project output of LSTM through linear layer to get logits
tag_logits = self._tag_projection(encoder_outputs)
# Get the maximum score for each position as the predicted tag
pred_tag_ids = torch.max(tag_logits, dim=-1)[1]
output_dict = {
'pred_tag_ids': pred_tag_ids,
'tag_logits': tag_logits,
'tag_probs': torch.nn.functional.softmax(tag_logits, dim=-1) # covert logits to probs
}
# Compute loss and accuracy if gold tags are provided
if tag_ids is not None:
loss = self.loss(tag_logits.view(-1, self.num_tags), tag_ids.view(-1))
output_dict['loss'] = loss
correct = pred_tag_ids == tag_ids # 1's in positions where pred matches gold
correct *= mask # zero out positions where mask is zero
output_dict['accuracy'] = torch.sum(correct)/torch.sum(mask)
return output_dict
Explanation: Model
Now that we can read in the data, it is time to build our model.
We will build a very simple LSTM based tagger! Note that this is pretty similar to the code in simple_tagger.py in your homework, but with a lot of things hardcoded.
Useful links:
- Embedding Layer: https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html
- LSTMs: https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html
- Linear Layer: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html?highlight=linear#torch.nn.Linear
End of explanation
from tqdm import tqdm
################################
# Setup
################################
# Create model
model = SimpleTagger(token_vocab=token_vocab, tag_vocab=tag_vocab)
if torch.cuda.is_available():
model = model.cuda()
# Initialize optimizer.
# Note: The learning rate is an important hyperparameters to tune
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
################################
# Training and Evaluation!
################################
num_epochs = 10
best_dev_loss = float('inf')
for epoch in range(num_epochs):
print('\nEpoch', epoch)
# Training loop
model.train() # THIS PART IS VERY IMPORTANT TO SET BEFORE TRAINING
train_loss = 0
train_acc = 0
for batch in train_dataloader:
batch_size = batch['token_ids'].size(0)
optimizer.zero_grad()
output_dict = model(**batch)
loss = output_dict['loss']
loss.backward()
optimizer.step()
train_loss += loss.item()*batch_size
accuracy = output_dict['accuracy']
train_acc += accuracy*batch_size
train_loss /= len(train_dataset)
train_acc /= len(train_dataset)
print(f'Train loss {train_loss} accuracy {train_acc}')
# Evaluation loop
model.eval() # THIS PART IS VERY IMPORTANT TO SET BEFORE EVALUATION
dev_loss = 0
dev_acc = 0
for batch in dev_dataloader:
batch_size = batch['token_ids'].size(0)
output_dict = model(**batch)
dev_loss += output_dict['loss'].item()*batch_size
dev_acc += output_dict['accuracy']*batch_size
dev_loss /= len(dev_dataset)
dev_acc /= len(dev_dataset)
print(f'Dev loss {dev_loss} accuracy {dev_acc}')
# Save best model
if dev_loss < best_dev_loss:
print('Best so far')
torch.save(model, 'model.pt')
best_dev_loss = dev_loss
Explanation: Training
The training script essentially follows the same pattern that we used for the linear model above. However we have also added an evaluation step, and code for saving model checkpoints.
End of explanation
model = torch.load('model.pt')
Explanation: Loading Trained Models
Loading a pretrained model can be done easily. To learn more about saving/loading models see https://pytorch.org/tutorials/beginner/saving_loading_models.html
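An alternative pattern (a sketch, assuming the SimpleTagger class and vocabularies defined above are in scope) is to save and load only the parameters via the state dict:
# Sketch: save/load parameters only, rather than pickling the whole model object
torch.save(model.state_dict(), 'model_state.pt')
model2 = SimpleTagger(token_vocab=token_vocab, tag_vocab=tag_vocab)
model2.load_state_dict(torch.load('model_state.pt'))
model2.eval()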
End of explanation
sentence = 'i want to eat a pizza .'.lower().split()
# convert sentence to tensor dictionar
tensor_dict = train_dataset.tensorize(sentence)
# unsqueeze first dimesion so batch size is 1
tensor_dict['token_ids'] = tensor_dict['token_ids'].unsqueeze(0)
print(tensor_dict)
# feed through model
output_dict = model(**tensor_dict)
# get predicted tag IDs
pred_tag_ids = output_dict['pred_tag_ids'].squeeze().tolist()
print(pred_tag_ids)
# convert tag IDs to tag names
print(model.tag_vocab.map_ids_to_tokens(pred_tag_ids))
Explanation: Feed in your own sentences!
End of explanation |
10,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unpacking a Sequence into Separate Variables
Problem
You have an N-element tuple or sequence that you would like to unpack into a collection of N variables.
Solution
Any sequence (or iterable) can be unpacked into variables using a simple assignment operation. The only requirement is that the number of variables and structure match the sequence.
Example 1
Step1: Example 2
Step2: Example 3
If there is a mismatch in the number of elements, you’ll get an error
Step3: Example 4
Unpacking actually works with any object that happens to be iterable, not just tuples or lists. This includes strings, files, iterators, and generators.
Step4: Example 5
Discard certain values | Python Code:
# Example 1
p = (4, 5)
x, y = p
print x
print y
Explanation: Unpacking a Sequence into Separate Variables
Problem
You have an N-element tuple or sequence that you would like to unpack into a collection of N variables.
Solution
Any sequence (or iterable) can be unpacked into variables using a simple assignment operation. The only requirement is that the number of variables and structure match the sequence.
Example 1
End of explanation
# Example 2
data = ['ACME', 50, 91.1, (2012, 12, 21)]
name, shares, price, date = data
print name
print date
name, shares, price, (year, mon, day) = data
print name
print year
print mon
print day
Explanation: Example 2
End of explanation
# Example 3
# error with mismatch in number of elements
p = (4, 5)
x, y, z = p
Explanation: Example 3
If there is a mismatch in the number of elements, you’ll get an error
End of explanation
# Example 4: string
s = 'Hello'
a, b, c, d, e = s
print a
print b
print e
Explanation: Example 4
Unpacking actually works with any object that happens to be iterable, not just tuples or lists. This includes strings, files, iterators, and generators.
End of explanation
# Example 5
# discard certain values
data = [ 'ACME', 50, 91.1, (2012, 12, 21) ]
_, shares, price, _ = data
print shares
print price
Explanation: Example 5
Discard certain values
End of explanation |
10,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pi Day
This notebook was originally created as a blog post by Raúl E. López Briega on Mi blog sobre Python. The content is under the BSD license.
<img alt="Pi Day" title="Pi Day" src="http
Step3: If we want to compute an approximation to the value of $\pi$, we could, for example, implement Archimedes' approximation as follows.
Step5: Finally, we could also implement the Gregory-Leibniz infinite series, which is really quite simple. | Python Code:
# Pi using the math module
import math
math.pi
# Pi using sympy; dps lets us vary the number of digits of Pi
# (note: on recent SymPy versions mpmath is a separate package, i.e. `from mpmath import mp`)
from sympy.mpmath import mp
mp.dps = 33 # number of digits
print(mp.pi)
Explanation: Pi Day
This notebook was originally created as a blog post by Raúl E. López Briega on Mi blog sobre Python. The content is under the BSD license.
<img alt="Pi Day" title="Pi Day" src="http://relopezbriega.github.io/images/pi-dia.jpg">
Today, March 14, we celebrate Pi Day ($\pi$). The celebration was the idea of the physicist Larry Shaw, who chose this date because of its resemblance to the two-digit value of Pi (in the United States date format, March 14 is written 3/14). This year in particular will be the most precise Pi Day of the century, since at 9:26:53 the date forms the number Pi to 9 digits of precision! (3/14/15 9:26:53). In honor of its day, I am dedicating this article to the number $\pi$.
What is the number $\pi$?
The number $\pi$ is one of the most famous in mathematics. It measures the ratio between the circumference of a circle and its diameter. No matter the size of the circle, this ratio is always the same and is represented by $\pi$. Properties of this kind, which remain unchanged while other attributes vary, are called constants. $\pi$ is one of the constants used most frequently in mathematics, physics and engineering.
<img alt="Pi Day" title="Pi Day" src="http://upload.wikimedia.org/wikipedia/commons/2/2a/Pi-unrolled-720.gif">
History of the number $\pi$
The first known reference to $\pi$ dates from approximately 1650 BC in the Ahmes Papyrus, a document containing basic mathematical problems, fractions, calculation of areas and volumes, progressions, proportional sharing, rules of three, linear equations and basic trigonometry. The value assigned to $\pi$ in that document is 28/34, approximately 3.1605.
One of the first approximations was made by Archimedes around 250 BC, who calculated that the value lay between 3 10/71 and 3 1/7 (3.1408 and 3.1452) and used the value 211875/67441, approximately 3.14163, for his studies.
The mathematician Leonhard Euler adopted the well-known symbol $\pi$ in 1737 in his work Introduction to infinitesimal calculus, and it instantly became the standard notation still used today.
What makes the number $\pi$ special?
What makes $\pi$ an interesting number is that it is irrational, that is, it cannot be expressed as a fraction of two integers. It is also transcendental, since it is not a <a target="_blank" href="http://es.wikipedia.org/wiki/Ra%C3%ADz_(matem%C3%A1ticas)">root</a> of any algebraic equation with integer coefficients, which means it cannot be expressed algebraically either.
Calculating the value of $\pi$
Although the number $\pi$ is easy to observe, computing it is one of the hardest problems in mathematics and has kept mathematicians busy for years. Up to 10 trillion decimal digits of $\pi$ are currently known, that is, 10,000,000,000,000.
Archimedes' approximation
One of the best-known methods for approximating the number $\pi$ is Archimedes' approximation, which consists of circumscribing and inscribing regular n-sided polygons around and inside circles and computing the perimeters of those polygons. Archimedes started with circumscribed and inscribed hexagons and kept doubling the number of sides until he reached polygons of 96 sides.
<img alt="Pi Day" title="Pi Day" src="http://relopezbriega.github.io/images/pi_geometric_inscribed_polygons.png" high=300 width=300>
The Leibniz series
Another fairly popular method for computing $\pi$ is the use of the Gregory-Leibniz infinite series.
This method consists of performing mathematical operations on infinite series of numbers until the series converges to the number $\pi$. Although it is not very efficient, it gets closer to the value of Pi with every iteration, accurately producing up to five thousand decimal places of Pi with 500,000 iterations. Its formula is very simple.
$$\pi=(4/1) - (4/3) + (4/5) - (4/7) + (4/9) - (4/11) + (4/13) - (4/15) ...$$
Calculating $\pi$ with Python
Since this blog is dedicated to Python, I obviously could not end this article without including different ways to compute $\pi$ using Python, which is well known to be a great fit for mathematics!
Since $\pi$ is a constant whose digits are already known at great length, the main Python math modules already include its value in a variable. For example, we can look at the value of $\pi$ by importing the math or sympy modules.
End of explanation
# Implementation of Archimedes' approximation
from decimal import Decimal, getcontext
def pi_archimedes(digitos):
Computes pi using Archimedes' approximation method
in n iterations.
def pi_archimedes_iter(n):
helper function used in each iteration
polygon_edge_length_squared = Decimal(2)
polygon_sides = 2
for i in range(n):
polygon_edge_length_squared = 2 - 2 * (1 - polygon_edge_length_squared / 4).sqrt()
polygon_sides *= 2
return polygon_sides * polygon_edge_length_squared.sqrt()
# iterate depending on the number of digits
old_result = None
for n in range(10*digitos):
# Compute with double precision
getcontext().prec = 2*digitos
result = pi_archimedes_iter(n)
# Return results in single precision.
getcontext().prec = digitos
result = +result # rounding of the result.
if result == old_result:
return result
old_result = result
# Archimedes' approximation with 33 digits
print(pi_archimedes(33))
Explanation: If we want to compute an approximation to the value of $\pi$, we could, for example, implement Archimedes' approximation as follows.
End of explanation
def pi_leibniz(precision):
Computes Pi using the Gregory-Leibniz infinite series
pi = 0
modificador = 1
for i in range(1, precision, 2):
pi += ((4 / i) * modificador)
modificador *= -1
return pi
# Pi with a precision of 10000000 iterations.
print(pi_leibniz(10000000))
Explanation: Finally, we could also implement the Gregory-Leibniz infinite series, which is really quite simple.
End of explanation |
10,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Similarity Authors.
Step1: TensorFlow Similarity Supervised Learning Hello World
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data preparation
We are going to load the MNIST dataset and restrict our training data to only N of the 10 classes (6 by default) to showcase how the model is able to find similar examples from classes unseen during training. The model's ability to generalize the matching to unseen classes, without retraining, is one of the main reason you would want to use metric learning.
WARNING
Step3: For a similarity model to learn efficiently, each batch must contains at least 2 examples of each class.
To make this easy, tf_similarity offers Samplers() that enable you to set both the number of classes and the minimum number of examples of each class per batch. Here we are creating a MultiShotMemorySampler() which allows you to sample an in-memory dataset and provides multiple examples per class.
TensorFlow Similarity provides various samplers to accomodate different requirements, including a SingleShotMemorySampler() for single-shot learning, a TFDatasetMultiShotMemorySampler() that integrate directly with the TensorFlow datasets catalogue, and a TFRecordDatasetSampler() that allows you to sample from very large datasets stored on disk as TFRecords shards.
Step4: Model setup
Model definition
SimilarityModel() models extend tensorflow.keras.model.Model with additional features and functionality that allow you to index and search for similar looking examples.
As visible in the model definition below, similarity models output a 64 dimensional float embedding using the MetricEmbedding() layers. This layer is a Dense layer with L2 normalization. Thanks to the loss, the model learns to minimize the distance between similar examples and maximize the distance between dissimilar examples. As a result, the distance between examples in the embedding space is meaningful; the smaller the distance the more similar the examples are.
Being able to use a distance as a meaningful proxy for how similar two examples are, is what enables the fast ANN (aproximate nearest neighbor) search. Using a sub-linear ANN search instead of a standard quadratic NN search is what allows deep similarity search to scale to millions of items. The built in memory index used in this notebook scales to a million indexed examples very easily... if you have enough RAM
Step5: Loss definition
Overall what makes Metric losses different from tradional losses is that
Step6: Compilation
Tensorflow similarity use an extended compile() method that allows you to optionally specify distance_metrics (metrics that are computed over the distance between the embeddings), and the distance to use for the indexer.
By default the compile() method tries to infer what type of distance you are using by looking at the first loss specified. If you use multiple losses, and the distance loss is not the first one, then you need to specify the distance function used as distance= parameter in the compile function.
Step7: Training
Similarity models are trained like normal models.
NOTE
Step8: Indexing
Indexing is where things get different from traditional classification models. Because the model learned to output an embedding that represent the example position within the learned metric space, we need a way to find which known example(s) are the closest to determine the class of the query example (aka nearest neighbors classication).
To do so, we are creating an index of known examples from all the classes present in the dataset. We do this by taking a total of 200 examples from the train dataset which amount to 20 examples per class and we use the index() method of the model to build the index.
we store the images (x_index) as data in the index (data=x_index) so that we can display them later. Here the images are small so its not an issue but in general, be careful while storing a lot of data in the index to avoid blewing up your memory. You might consider using a different Store() backend if you have to store and serve very large indexes.
Indexing more examples per class will help increase the accuracy/generalization, as having more variations improves the classifier "knowledge" of what variations to expect.
Reseting the index is not needed for the first run; however we always calling it to ensure we start the evaluation with a clean index in case of a partial re-run.
Step9: Querying
To "classify" examples, we need to lookup their k nearest neighbors in the index.
Here we going to query a single random example for each class from the test dataset using select_examples() and then find their nearest neighbors using the lookup() function.
NOTE By default the classes 8, 5, 0, and 4 were not seen during training, but we still get reasonable matches as visible in the image below.
Step10: Calibration
To be able to tell if an example matches a given class, we first need to calibrate() the model to find the optimal cut point. This cut point is the maximum distance below which returned neighbors are of the same class. Increasing the threshold improves the recall at the expense of the precision.
By default, the calibration uses the F-score classification metric to optimally balance out the precsion and recalll; however, you can speficy your own target and change the calibration metric to better suite your usecase.
Step11: Metrics ploting
Let's plot the performance metrics to see how they evolve as the distance threshold increases.
We clearly see an inflection point where the precision and recall intersect, however, this is not the optimal_cutpoint because the recall continues to increase faster than the precision decreases. Different usecases will have different performance profiles, which why each model needs to be calibrated.
Step12: Precision/Recall curve
We can see in the precision/recall curve below, that the curve is not smooth.
This is because the recall can improve independently of the precision causing a
seesaw pattern.
Additionally, the model does extremly well on known classes and less well on
the unseen ones, which contributes to the flat curve at the begining followed
by a sharp decline as the distance threshold increases and
examples are further away from the indexed examples.
Step13: Matching
The purpose of match() is to allow you to use your similarity models to make
classification predictions. It accomplishes this by finding the nearest neigbors
for a set of query examples and returning an infered label based on neighbors
labels and the matching strategy used (MatchNearest by default).
Note
Step14: confusion matrix
Now that we have a better sense of what the match() method does, let's scale up
to a few thousand samples per class and evaluate how good our model is at
predicting the correct classes.
As expected, while the model prediction performance is very good, its not
competitive with a classification model. However this lower accuracy comes with
the unique advantage that the model is able to classify classes
that were not seen during training.
NOTE tf.math.confusion_matrix doesn't support negative classes, so we are going to use class 10 as our unknown class. As mentioned earlier, unknown examples are
any testing example for which the closest neighbor distance is greater than the cutpoint threshold.
Step15: Index information
Following model.summary() you can get information about the index configuration and its performance using index_summary().
Step16: Saving and reloading
Saving and reloading the model works as you would expected
Step17: Reloading
Step18: Query reloaded model
Querying the reloaded model with its reload index works as expected | Python Code:
# @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Similarity Authors.
End of explanation
import gc
import os
import numpy as np
from matplotlib import pyplot as plt
from tabulate import tabulate
# INFO messages are not printed.
# This must be run before loading other modules.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
# install TF similarity if needed
try:
import tensorflow_similarity as tfsim # main package
except ModuleNotFoundError:
!pip install tensorflow_similarity
import tensorflow_similarity as tfsim
tfsim.utils.tf_cap_memory()
# Clear out any old model state.
gc.collect()
tf.keras.backend.clear_session()
print("TensorFlow:", tf.__version__)
print("TensorFlow Similarity", tfsim.__version__)
Explanation: TensorFlow Similarity Supervised Learning Hello World
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/similarity/blob/master/examples/supervised_hello_world.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/similarity/blob/master/examples/supervised_hello_world.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
TensorFlow Similarity is a python package focused on making similarity learning quick and easy.
Notebook goal
This notebook demonstrates how to use TensorFlow Similarity to train a SimilarityModel() on a fraction of the MNIST classes, and yet the model is able to index and retrieve similar looking images for all MNIST classes.
You are going to learn about the main features offered by the SimilarityModel() and will:
train() a similarity model on a sub-set of the 10 MNIST classes that will learn how to project digits within a cosine space
index() a few examples of each of the 10 classes present in the train dataset (e.g., 10 images per class) to make them searchable
lookup() a few test images to check that the trained model, despite having only a few examples of seen and unseen classes in its index, is able to efficiently retrieve similar looking examples for all classes.
calibrate() the model to estimate what is the best distance threshold to separate matching elements from elements belonging to other classes.
match() the test dataset to evaluate how well the calibrated model works for classification purposes.
Things to try
Along the way you can try the following things to improve the model performance:
- Adding more "seen" classes at training time.
- Use a larger embedding by increasing the size of the output.
- Add data augmentation pre-processing layers to the model.
- Include more examples in the index to give the models more points to choose from.
- Try a more challenging dataset, such as Fashion MNIST.
End of explanation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
Explanation: Data preparation
We are going to load the MNIST dataset and restrict our training data to only N of the 10 classes (6 by default) to showcase how the model is able to find similar examples from classes unseen during training. The model's ability to generalize the matching to unseen classes, without retraining, is one of the main reasons you would want to use metric learning.
WARNING: Tensorflow similarity expects y_train to be an IntTensor containing the class ids for each example instead of the standard categorical encoding traditionally used for multi-class classification.
End of explanation
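If your labels happened to be one-hot encoded instead, taking the argmax recovers the integer class ids this library expects. The snippet below is only an illustrative sketch; the MNIST labels loaded above are already integers.
import numpy as np

one_hot = np.eye(10)[y_train]              # fake one-hot labels, for illustration only
int_labels = np.argmax(one_hot, axis=-1)   # back to integer class ids
assert (int_labels == y_train).all()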
CLASSES = [2, 3, 1, 7, 9, 6, 8, 5, 0, 4]
NUM_CLASSES = 6 # @param {type: "slider", min: 1, max: 10}
CLASSES_PER_BATCH = NUM_CLASSES
EXAMPLES_PER_CLASS = 10 # @param {type:"integer"}
STEPS_PER_EPOCH = 1000 # @param {type:"integer"}
sampler = tfsim.samplers.MultiShotMemorySampler(
x_train,
y_train,
classes_per_batch=CLASSES_PER_BATCH,
examples_per_class_per_batch=EXAMPLES_PER_CLASS,
class_list=CLASSES[:NUM_CLASSES], # Only use the first 6 classes for training.
steps_per_epoch=STEPS_PER_EPOCH,
)
Explanation: For a similarity model to learn efficiently, each batch must contain at least 2 examples of each class.
To make this easy, tf_similarity offers Samplers() that enable you to set both the number of classes and the minimum number of examples of each class per batch. Here we are creating a MultiShotMemorySampler() which allows you to sample an in-memory dataset and provides multiple examples per class.
TensorFlow Similarity provides various samplers to accommodate different requirements, including a SingleShotMemorySampler() for single-shot learning, a TFDatasetMultiShotMemorySampler() that integrates directly with the TensorFlow datasets catalogue, and a TFRecordDatasetSampler() that allows you to sample from very large datasets stored on disk as TFRecords shards.
End of explanation
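As an optional sanity check, you can pull one batch from the sampler and count the labels to confirm that each class appears the expected number of times. This sketch assumes the sampler can be indexed like a Keras Sequence, which is how model.fit consumes it later on.
import numpy as np

batch_x, batch_y = sampler[0]                         # first generated batch
classes, counts = np.unique(np.array(batch_y), return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))   # e.g. {1: 10, 2: 10, 3: 10, ...}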
def get_model():
inputs = tf.keras.layers.Input(shape=(28, 28, 1))
x = tf.keras.layers.experimental.preprocessing.Rescaling(1 / 255)(inputs)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
# smaller embeddings will have faster lookup times while a larger embedding will improve the accuracy up to a point.
outputs = tfsim.layers.MetricEmbedding(64)(x)
return tfsim.models.SimilarityModel(inputs, outputs)
model = get_model()
model.summary()
Explanation: Model setup
Model definition
SimilarityModel() models extend tensorflow.keras.model.Model with additional features and functionality that allow you to index and search for similar looking examples.
As visible in the model definition below, similarity models output a 64 dimensional float embedding using the MetricEmbedding() layers. This layer is a Dense layer with L2 normalization. Thanks to the loss, the model learns to minimize the distance between similar examples and maximize the distance between dissimilar examples. As a result, the distance between examples in the embedding space is meaningful; the smaller the distance the more similar the examples are.
Being able to use a distance as a meaningful proxy for how similar two examples are is what enables the fast ANN (approximate nearest neighbor) search. Using a sub-linear ANN search instead of a standard quadratic NN search is what allows deep similarity search to scale to millions of items. The built-in memory index used in this notebook scales to a million indexed examples very easily... if you have enough RAM :)
End of explanation
distance = "cosine" # @param ["cosine", "L2", "L1"]{allow-input: false}
loss = tfsim.losses.MultiSimilarityLoss(distance=distance)
Explanation: Loss definition
Overall, what makes metric losses different from traditional losses is that:
- They expect different inputs. Instead of having the prediction equal the true values, they expect embeddings as y_preds and the id (as an int32) of the class as y_true.
- They require a distance. You need to specify which distance function to use to compute the distance between embeddings. cosine is usually a great starting point and the default.
In this example we are using the MultiSimilarityLoss(). This loss takes a weighted combination of all valid positive and negative pairs, making it one of the best losses you can use for similarity training.
End of explanation
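As a quick illustration of the calling convention (integer class ids as y_true, embeddings as y_pred), the loss can be evaluated directly on random data. This sketch is not part of the original tutorial and only shows the expected inputs.
import tensorflow as tf

demo_labels = tf.constant([0, 0, 1, 1], dtype=tf.int32)                    # class ids
demo_embeddings = tf.math.l2_normalize(tf.random.normal((4, 64)), axis=1)  # unit-norm embeddings
print(loss(demo_labels, demo_embeddings))                                  # scalar loss tensor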
LR = 0.000005 # @param {type:"number"}
model.compile(optimizer=tf.keras.optimizers.Adam(LR), loss=loss)
Explanation: Compilation
Tensorflow similarity uses an extended compile() method that allows you to optionally specify distance_metrics (metrics that are computed over the distance between the embeddings), and the distance to use for the indexer.
By default the compile() method tries to infer what type of distance you are using by looking at the first loss specified. If you use multiple losses, and the distance loss is not the first one, then you need to specify the distance function used as distance= parameter in the compile function.
End of explanation
EPOCHS = 10 # @param {type:"integer"}
history = model.fit(sampler, epochs=EPOCHS, validation_data=(x_test, y_test))
# expect loss: 0.14 / val_loss: 0.33
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.legend(["loss", "val_loss"])
plt.title(f"Loss: {loss.name} - LR: {LR}")
plt.show()
Explanation: Training
Similarity models are trained like normal models.
NOTE: don't expect the validation loss to decrease too much here because we only use a subset of the classes within the train data but include all classes in the validation data.
End of explanation
x_index, y_index = tfsim.samplers.select_examples(x_train, y_train, CLASSES, 20)
model.reset_index()
model.index(x_index, y_index, data=x_index)
Explanation: Indexing
Indexing is where things get different from traditional classification models. Because the model learned to output an embedding that represents the example position within the learned metric space, we need a way to find which known example(s) are the closest to determine the class of the query example (aka nearest neighbors classification).
To do so, we are creating an index of known examples from all the classes present in the dataset. We do this by taking a total of 200 examples from the train dataset which amount to 20 examples per class and we use the index() method of the model to build the index.
We store the images (x_index) as data in the index (data=x_index) so that we can display them later. Here the images are small so it's not an issue, but in general be careful about storing a lot of data in the index to avoid blowing up your memory. You might consider using a different Store() backend if you have to store and serve very large indexes.
Indexing more examples per class will help increase the accuracy/generalization, as having more variations improves the classifier "knowledge" of what variations to expect.
Resetting the index is not needed for the first run; however, we always call it to ensure we start the evaluation with a clean index in case of a partial re-run.
End of explanation
# re-run to test on other examples
num_neighbors = 5
# select
x_display, y_display = tfsim.samplers.select_examples(x_test, y_test, CLASSES, 1)
# lookup nearest neighbors in the index
nns = model.lookup(x_display, k=num_neighbors)
# display
for idx in np.argsort(y_display):
tfsim.visualization.viz_neigbors_imgs(x_display[idx], y_display[idx], nns[idx], fig_size=(16, 2), cmap="Greys")
Explanation: Querying
To "classify" examples, we need to lookup their k nearest neighbors in the index.
Here we are going to query a single random example for each class from the test dataset using select_examples() and then find their nearest neighbors using the lookup() function.
NOTE By default the classes 8, 5, 0, and 4 were not seen during training, but we still get reasonable matches as visible in the image below.
End of explanation
num_calibration_samples = 1000 # @param {type:"integer"}
calibration = model.calibrate(
x_train[:num_calibration_samples],
y_train[:num_calibration_samples],
extra_metrics=["precision", "recall", "binary_accuracy"],
verbose=1,
)
Explanation: Calibration
To be able to tell if an example matches a given class, we first need to calibrate() the model to find the optimal cut point. This cut point is the maximum distance below which returned neighbors are of the same class. Increasing the threshold improves the recall at the expense of the precision.
By default, the calibration uses the F-score classification metric to optimally balance out the precision and recall; however, you can specify your own target and change the calibration metric to better suit your use case.
End of explanation
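Beyond the thresholds plotted below, the returned calibration object is assumed to also expose the selected cutpoints; printing them is a quick way to see which distance the "optimal" cutpoint corresponds to.
# assumes `calibration.cutpoints` is a dict keyed by cutpoint name (e.g. "optimal")
for name, cutpoint in calibration.cutpoints.items():
    print(name, cutpoint)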
fig, ax = plt.subplots()
x = calibration.thresholds["distance"]
ax.plot(x, calibration.thresholds["precision"], label="precision")
ax.plot(x, calibration.thresholds["recall"], label="recall")
ax.plot(x, calibration.thresholds["f1"], label="f1 score")
ax.legend()
ax.set_title("Metric evolution as distance increase")
ax.set_xlabel("Distance")
plt.show()
Explanation: Metrics plotting
Let's plot the performance metrics to see how they evolve as the distance threshold increases.
We clearly see an inflection point where the precision and recall intersect; however, this is not the optimal_cutpoint because the recall continues to increase faster than the precision decreases. Different use cases will have different performance profiles, which is why each model needs to be calibrated.
End of explanation
fig, ax = plt.subplots()
ax.plot(calibration.thresholds["recall"], calibration.thresholds["precision"])
ax.set_title("Precision recall curve")
ax.set_xlabel("Recall")
ax.set_ylabel("Precision")
plt.show()
Explanation: Precision/Recall curve
We can see in the precision/recall curve below, that the curve is not smooth.
This is because the recall can improve independently of the precision causing a
seesaw pattern.
Additionally, the model does extremely well on known classes and less well on
the unseen ones, which contributes to the flat curve at the beginning followed
by a sharp decline as the distance threshold increases and
examples are further away from the indexed examples.
End of explanation
num_matches = 10 # @param {type:"integer"}
matches = model.match(x_test[:num_matches], cutpoint="optimal")
rows = []
for idx, match in enumerate(matches):
rows.append([match, y_test[idx], match == y_test[idx]])
print(tabulate(rows, headers=["Predicted", "Expected", "Correct"]))
Explanation: Matching
The purpose of match() is to allow you to use your similarity models to make
classification predictions. It accomplishes this by finding the nearest neighbors
for a set of query examples and returning an inferred label based on the neighbors'
labels and the matching strategy used (MatchNearest by default).
Note: unlike traditional models, the match() method potentially returns -1
when there are no indexed examples below the cutpoint threshold. The -1 class
should be treated as "unknown".
Matching in practice
Let's now match 10 examples to see how you can use the model's match() method
in practice.
End of explanation
# used to label in images in the viz_neighbors_imgs plots
# note we added an 11th class for unknown
labels = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "Unknown"]
num_examples_per_class = 1000
cutpoint = "optimal"
x_confusion, y_confusion = tfsim.samplers.select_examples(x_test, y_test, CLASSES, num_examples_per_class)
matches = model.match(x_confusion, cutpoint=cutpoint, no_match_label=10)
cm = tfsim.visualization.confusion_matrix(
matches,
y_confusion,
labels=labels,
title="Confusin matrix for cutpoint:%s" % cutpoint,
)
Explanation: confusion matrix
Now that we have a better sense of what the match() method does, let's scale up
to a few thousand samples per class and evaluate how good our model is at
predicting the correct classes.
As expected, while the model prediction performance is very good, it's not
competitive with a classification model. However this lower accuracy comes with
the unique advantage that the model is able to classify classes
that were not seen during training.
NOTE tf.math.confusion_matrix doesn't support negative classes, so we are going to use class 10 as our unknown class. As mentioned earlier, unknown examples are
any testing example for which the closest neighbor distance is greater than the cutpoint threshold.
End of explanation
model.index_summary()
Explanation: Index information
Following model.summary() you can get information about the index configuration and its performance using index_summary().
End of explanation
# save the model and the index
save_path = "models/hello_world" # @param {type:"string"}
model.save(save_path, save_index=True)
Explanation: Saving and reloading
Saving and reloading the model works as you would expect:
model.save(path, save_index=True): save the model and the index on disk. By default the index is compressed but this can be disabled by setting compressed=False
model = tf.keras.model.load_model(path, custom_objects={"SimilarityModel": tfsim.models.SimilarityModel}) reload the model.
NOTE: We need to pass SimilarityModel as a custom object to ensure that Keras knows about the index methods.
model.load_index(path) is required to reload the index.
model.save_index(path) and model.load_index(path) allow you to save/reload an index independently of saving/loading a model.
Saving
End of explanation
# reload the model
reloaded_model = tf.keras.models.load_model(
save_path,
custom_objects={"SimilarityModel": tfsim.models.SimilarityModel},
)
# reload the index
reloaded_model.load_index(save_path)
# check the index is back
reloaded_model.index_summary()
Explanation: Reloading
End of explanation
# re-run to test on other examples
num_neighbors = 5
# select
x_display, y_display = tfsim.samplers.select_examples(x_test, y_test, CLASSES, 1)
# lookup the nearest neighbors
nns = model.lookup(x_display, k=num_neighbors)
# display
for idx in np.argsort(y_display):
tfsim.visualization.viz_neigbors_imgs(x_display[idx], y_display[idx], nns[idx], fig_size=(16, 2), cmap="Greys")
Explanation: Query reloaded model
Querying the reloaded model with its reloaded index works as expected
End of explanation |
10,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational AutoEncoder
Author
Step2: Create a sampling layer
Step3: Build the encoder
Step4: Build the decoder
Step5: Define the VAE as a Model with a custom train_step
Step6: Train the VAE
Step7: Display a grid of sampled digits
Step8: Display how the latent space clusters different digit classes | Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Variational AutoEncoder
Author: fchollet<br>
Date created: 2020/05/03<br>
Last modified: 2020/05/03<br>
Description: Convolutional Variational AutoEncoder (VAE) trained on MNIST digits.
Setup
End of explanation
class Sampling(layers.Layer):
Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
Explanation: Create a sampling layer
End of explanation
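A quick, purely illustrative check of the layer: with zero mean and zero log-variance the samples should look like draws from a standard normal distribution.
import tensorflow as tf

demo_mean = tf.zeros((1000, 2))
demo_log_var = tf.zeros((1000, 2))            # log(var) = 0, so std = 1
samples = Sampling()([demo_mean, demo_log_var])
print(tf.reduce_mean(samples).numpy(), tf.math.reduce_std(samples).numpy())  # ~0.0 and ~1.0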
latent_dim = 2
encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", strides=2, padding="same")(encoder_inputs)
x = layers.Conv2D(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
Explanation: Build the encoder
End of explanation
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
Explanation: Build the decoder
End of explanation
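Before wiring the two halves together, an optional shape check confirms that a batch flows through the encoder and decoder as expected.
import tensorflow as tf

dummy = tf.random.uniform((8, 28, 28, 1))
_, _, z_demo = encoder(dummy)
reconstruction_demo = decoder(z_demo)
print(z_demo.shape, reconstruction_demo.shape)  # expected: (8, 2) and (8, 28, 28, 1)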
class VAE(keras.Model):
def __init__(self, encoder, decoder, **kwargs):
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(
name="reconstruction_loss"
)
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
with tf.GradientTape() as tape:
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
grads = tape.gradient(total_loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
Explanation: Define the VAE as a Model with a custom train_step
End of explanation
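Because only train_step is overridden, passing validation data to fit would not report validation metrics. If you need that, a test_step can be added to the class along the same lines; the sketch below simply mirrors the loss computation above without the gradient tape.
# a test_step that could be added to the VAE class defined above
def test_step(self, data):
    z_mean, z_log_var, z = self.encoder(data)
    reconstruction = self.decoder(z)
    reconstruction_loss = tf.reduce_mean(
        tf.reduce_sum(
            keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
        )
    )
    kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
    kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
    total_loss = reconstruction_loss + kl_loss
    return {
        "loss": total_loss,
        "reconstruction_loss": reconstruction_loss,
        "kl_loss": kl_loss,
    }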
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
mnist_digits = np.concatenate([x_train, x_test], axis=0)
mnist_digits = np.expand_dims(mnist_digits, -1).astype("float32") / 255
vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(mnist_digits, epochs=30, batch_size=128)
Explanation: Train the VAE
End of explanation
import matplotlib.pyplot as plt
def plot_latent_space(vae, n=30, figsize=15):
# display a n*n 2D manifold of digits
digit_size = 28
scale = 1.0
figure = np.zeros((digit_size * n, digit_size * n))
# linearly spaced coordinates corresponding to the 2D plot
# of digit classes in the latent space
grid_x = np.linspace(-scale, scale, n)
grid_y = np.linspace(-scale, scale, n)[::-1]
for i, yi in enumerate(grid_y):
for j, xi in enumerate(grid_x):
z_sample = np.array([[xi, yi]])
x_decoded = vae.decoder.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[
i * digit_size : (i + 1) * digit_size,
j * digit_size : (j + 1) * digit_size,
] = digit
plt.figure(figsize=(figsize, figsize))
start_range = digit_size // 2
end_range = n * digit_size + start_range
pixel_range = np.arange(start_range, end_range, digit_size)
sample_range_x = np.round(grid_x, 1)
sample_range_y = np.round(grid_y, 1)
plt.xticks(pixel_range, sample_range_x)
plt.yticks(pixel_range, sample_range_y)
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.imshow(figure, cmap="Greys_r")
plt.show()
plot_latent_space(vae)
Explanation: Display a grid of sampled digits
End of explanation
def plot_label_clusters(vae, data, labels):
# display a 2D plot of the digit classes in the latent space
z_mean, _, _ = vae.encoder.predict(data)
plt.figure(figsize=(12, 10))
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.show()
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train, -1).astype("float32") / 255
plot_label_clusters(vae, x_train, y_train)
Explanation: Display how the latent space clusters different digit classes
End of explanation |
10,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: CustomJob constants
Set constants unique to CustomJob training
Step13: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step14: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15
Step15: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step16: Tutorial
Now you are ready to start creating your own custom model and training for IMDB Movie Reviews.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
Step17: Train a model
There are two ways you can train a custom model using a container image
Step18: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type
Step19: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following
Step20: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step21: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step22: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step23: Train the model using a TrainingPipeline resource
Now start training of your custom training job using a training pipeline on Vertex. To train the your custom model, do the following steps
Step24: Create the training pipeline
Use this helper function create_pipeline, which takes the following parameter
Step25: Now save the unique identifier of the training pipeline you created.
Step26: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter
Step27: Deployment
Training the above model may take upwards of 20 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step28: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step29: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.
When you trained the model, you needed to set a fix input length for your text. For forward feeding batches, the padded_batch() property of the corresponding tf.dataset was set to pad each input sequence into the same shape for a batch.
For the test data, you also need to set the padded_batch() property accordingly.
Step30: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step31: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts
Step32: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters
Step33: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter
Step34: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps
Step35: Now get the unique identifier for the Endpoint resource you created.
Step36: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step37: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step38: Make a online prediction request
Now do a online prediction to your deployed model.
Prepare the request content
Since the dataset is a tf.dataset, which acts as a generator, we must use it as an iterator to access the data items in the test data. We do the following to get a single data item from the test data
Step39: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the following parameters
Step40: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters
Step41: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: Custom training text binary classification model with pipeline for online prediction with training pipeline
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_online_pipeline.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_online_pipeline.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom text binary classification model for online prediction, using a training pipeline.
Dataset
The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Create a TrainingPipeline resource.
Train a TensorFlow model with the TrainingPipeline resource.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
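As an alternative to the %env magic above, the key path can also be set directly from Python -- a minimal sketch with a placeholder path:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/your-service-account-key.json"  # placeholder path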
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
CUSTOM_TASK_GCS_PATH = (
"gs://google-cloud-aiplatform/schema/trainingjob/definition/custom_task_1.0.0.yaml"
)
Explanation: CustomJob constants
Set constants unique to CustomJob training:
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for IMDB Movie Reviews.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
End of explanation
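The cell above creates the model, pipeline, endpoint and prediction clients. If you also want a client for the Job service listed above (the cleanup section at the end refers to clients["job"]), it can be created the same way -- a small sketch:
clients["job"] = aip.JobServiceClient(client_options=client_options)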
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Explanation: Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
worker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_imdb.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the Python package specification:
-executor_image_uri: This is the Docker image which is configured for your custom training job.
-package_uris: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image.
-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task -- note that it is not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
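In the indirect case, the training script itself is responsible for picking up the location from the environment; the task.py shown later does this by defaulting --model-dir to os.getenv('AIP_MODEL_DIR'), which is equivalent to:
import os
model_dir = args.model_dir if args.model_dir else os.getenv("AIP_MODEL_DIR")  # args is the parsed argparse namespace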
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: IMDB Movie Reviews text binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for IMDB
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),  # the sigmoid output layer already yields probabilities, not logits
optimizer=tf.keras.optimizers.Adam(args.lr),
metrics=['accuracy'])
return model
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_rnn_model(encoder)
# Train the model
model.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:
Gets the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads IMDB Movie Reviews dataset from TF Datasets (tfds).
Builds a simple RNN model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
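If you prefer to build the archive from Python rather than shell commands, the standard-library tarfile module produces the same result (the gsutil upload step stays the same):
import tarfile
with tarfile.open("custom.tar.gz", "w:gz") as tar:
    tar.add("custom")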
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
MODEL_NAME = "custom_pipeline-" + TIMESTAMP
PIPELINE_DISPLAY_NAME = "custom-training-pipeline" + TIMESTAMP
training_task_inputs = json_format.ParseDict(
{"workerPoolSpecs": worker_pool_spec}, Value()
)
pipeline = {
"display_name": PIPELINE_DISPLAY_NAME,
"training_task_definition": CUSTOM_TASK_GCS_PATH,
"training_task_inputs": training_task_inputs,
"model_to_upload": {
"display_name": PIPELINE_DISPLAY_NAME + "-model",
"artifact_uri": MODEL_DIR,
"container_spec": {"image_uri": DEPLOY_IMAGE},
},
}
print(pipeline)
Explanation: Train the model using a TrainingPipeline resource
Now start training of your custom training job using a training pipeline on Vertex. To train the your custom model, do the following steps:
Create a Vertex TrainingPipeline resource for the Dataset resource.
Execute the pipeline to start the training.
Create a TrainingPipeline resource
You may ask, what do we use a pipeline for? We typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
The training_pipeline specification
First, you need to describe a pipeline specification. Let's look into the minimal requirements for constructing a training_pipeline specification for a custom job:
display_name: A human readable name for the pipeline job.
training_task_definition: The training task schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A dictionary describing the specification for the (uploaded) Vertex custom Model resource.
display_name: A human readable name for the Model resource.
artifact_uri: The Cloud Storage path where the model artifacts are stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the custom model will serve predictions.
End of explanation
def create_pipeline(training_pipeline):
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
response = create_pipeline(pipeline)
Explanation: Create the training pipeline
Use this helper function create_pipeline, which takes the following parameter:
training_pipeline: the full specification for the pipeline training job.
The helper function calls the pipeline client service's create_pipeline method, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: The full specification for the pipeline training job.
The helper function will return the Vertex fully qualified identifier assigned to the training pipeline, which is saved as pipeline.name.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the information for just this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which contains the predefined vocabulary encoder. The encoder converts words into numerical token ids using the same subword vocabulary that the custom training script used.
When you trained the model, you needed to set a fixed input length for your text. For forward-feeding batches, the padded_batch() method of the corresponding tf.dataset was used to pad each input sequence in a batch to the same shape.
For the test data, you also need to apply padded_batch() accordingly.
End of explanation
model.evaluate(test_dataset)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
Explanation: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., add headings, make JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF operations that require a dynamic (eager) graph. If you do, the serving function will fail to compile with an error indicating that an EagerTensor is not supported.
Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
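This tutorial uploads the model with its default serving signature, but as a hedged illustration of the two-part serving function described above, a custom signature could be attached before saving; preprocess_fn and the output path below are hypothetical placeholders, not objects defined in this notebook:
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
def serving_fn(bytes_inputs):
    # preprocessing: decode the tf.string payload into the shape/dtype the model expects
    decoded = preprocess_fn(bytes_inputs)
    # the model output could also be post-processed here before returning
    return model(decoded)

tf.saved_model.save(model, "gs://your-bucket/model-with-serving-fn", signatures={"serving_default": serving_fn})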
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model("imdb-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy)
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Endpoint service.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without an Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
ENDPOINT_NAME = "imdb_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed (and to scale down to), and set the maximum (MAX_NODES) number of compute instances to scale up to, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
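For example (illustrative values), auto scaling between one and four replicas would simply be:
MIN_NODES = 1
MAX_NODES = 4
while this tutorial keeps both at 1 for a single, fixed instance.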
DEPLOYED_NAME = "imdb_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percentages must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
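For example, if the endpoint already served an earlier model whose deployed model id were '1234567890' (an illustrative id), giving the new model 10% of the traffic would look like:
traffic_split = {"0": 10, "1234567890": 90}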
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
test_dataset = test_dataset.take(1)
for data in test_dataset:
print(data)
break
test_item = data[0].numpy()
Explanation: Make a online prediction request
Now do a online prediction to your deployed model.
Prepare the request content
Since the dataset is a tf.dataset, which acts as a generator, we must use it as an iterator to access the data items in the test data. We do the following to get a single data item from the test data:
Set the property for the number of batches to draw per iteration to one using the method take(1).
Iterate once through the test data -- i.e., we do a break within the for loop.
In the single iteration, we save the data item which is in the form of a tuple.
The data item will be the first element of the tuple, which you then will convert from an tensor to a numpy array -- data[0].numpy().
End of explanation
def predict_data(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: data.tolist()}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_data(test_item, endpoint_id, None)
Explanation: Send the prediction request
Ok, now you have a test data item. Use this helper function predict_data, which takes the following parameters:
data: The test data item is a 64 padded numpy 1D array.
endpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.
parameters_dict: Additional parameters for serving.
This function uses the prediction client service and calls the predict method with the following parameters:
endpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional parameters for serving.
To pass the test data to the prediction service, you must package it for transmission to the serving binary as follows:
1. Convert the data item from a 1D numpy array to a 1D Python list.
2. Convert the prediction request to a serialized Google protobuf (`json_format.ParseDict()`)
Each instance in the prediction request is a dictionary entry of the form:
{input_name: content}
input_name: the name of the input layer of the underlying model.
content: The data item as a 1D Python list.
Since the predict() service can take multiple data items (instances), you will send your single data item as a list of one data item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.
The response object returns a list, where each element in the list corresponds to the corresponding instance in the request. You will see in the output for each prediction:
predictions -- the predicted binary sentiment between 0 (negative) and 1 (positive).
End of explanation
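Since the model's output layer is a sigmoid, each returned prediction is a probability that can be thresholded into a label -- a rough sketch for the predictions loop of predict_data above, as the exact nesting of response.predictions can vary by serving container:
score = float(prediction[0])                      # assumes a single-element output per instance
label = "positive" if score >= 0.5 else "negative"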
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
10,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trajectory Recommendation - Test Evaluation Protocol
Step1: Run notebook ssvm.ipynb
Step2: Sanity check for evaluation protocol
70/30 split for trajectories conform to each query. | Python Code:
% matplotlib inline
import os, sys, time
import math, random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from joblib import Parallel, delayed
Explanation: Trajectory Recommendation - Test Evaluation Protocol
End of explanation
%run 'ssvm.ipynb'
check_protocol = True
Explanation: Run notebook ssvm.ipynb
End of explanation
traj_group_test = dict()
test_ratio = 0.3
for key in sorted(TRAJ_GROUP_DICT.keys()):
group = sorted(TRAJ_GROUP_DICT[key])
num = int(test_ratio * len(group))
if num > 0:
np.random.shuffle(group)
traj_group_test[key] = set(group[:num])
if check_protocol == True:
nnrand_dict = dict()
ssvm_dict = dict()
# train set
trajid_set_train = set(trajid_set_all)
for key in traj_group_test.keys():
trajid_set_train = trajid_set_train - traj_group_test[key]
# train ssvm
poi_info = calc_poi_info(list(trajid_set_train), traj_all, poi_all)
# build POI_ID <--> POI__INDEX mapping for POIs used to train CRF
# which means only POIs in traj such that len(traj) >= 2 are included
poi_set = set()
for x in trajid_set_train:
if len(traj_dict[x]) >= 2:
poi_set = poi_set | set(traj_dict[x])
poi_ix = sorted(poi_set)
poi_id_dict, poi_id_rdict = dict(), dict()
for idx, poi in enumerate(poi_ix):
poi_id_dict[poi] = idx
poi_id_rdict[idx] = poi
# generate training data
train_traj_list = [traj_dict[x] for x in trajid_set_train if len(traj_dict[x]) >= 2]
node_features_list = Parallel(n_jobs=N_JOBS)\
(delayed(calc_node_features)\
(tr[0], len(tr), poi_ix, poi_info, poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST) for tr in train_traj_list)
edge_features = calc_edge_features(list(trajid_set_train), poi_ix, traj_dict, poi_info)
assert(len(train_traj_list) == len(node_features_list))
X_train = [(node_features_list[x], edge_features.copy(), \
(poi_id_dict[train_traj_list[x][0]], len(train_traj_list[x]))) for x in range(len(train_traj_list))]
y_train = [np.array([poi_id_dict[x] for x in tr]) for tr in train_traj_list]
assert(len(X_train) == len(y_train))
# train
sm = MyModel()
verbose = 0 #5
ssvm = OneSlackSSVM(model=sm, C=SSVM_C, n_jobs=N_JOBS, verbose=verbose)
ssvm.fit(X_train, y_train, initialize=True)
print('SSVM training finished, start predicting.'); sys.stdout.flush()
# predict for each query
for query in sorted(traj_group_test.keys()):
ps, L = query
# start should be in training set
if ps not in poi_set: continue
assert(L <= poi_info.shape[0])
# prediction of ssvm
node_features = calc_node_features(ps, L, poi_ix, poi_info, poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST)
# normalise test features
unaries, pw = scale_features_linear(node_features, edge_features, node_max=sm.node_max, node_min=sm.node_min, \
edge_max=sm.edge_max, edge_min=sm.edge_min)
X_test = [(unaries, pw, (poi_id_dict[ps], L))]
# test
y_pred = ssvm.predict(X_test)
rec = [poi_id_rdict[x] for x in y_pred[0]] # map POIs back
rec1 = [ps] + rec[1:]
ssvm_dict[query] = rec1
# prediction of nearest neighbour
candidates_id = sorted(TRAJ_GROUP_DICT[query] - traj_group_test[query])
assert(len(candidates_id) > 0)
np.random.shuffle(candidates_id)
nnrand_dict[query] = traj_dict[candidates_id[0]]
if check_protocol == True:
F1_ssvm = []; pF1_ssvm = []; Tau_ssvm = []
F1_nn = []; pF1_nn = []; Tau_nn = []
for key in sorted(ssvm_dict.keys()):
assert(key in nnrand_dict)
F1, pF1, tau = evaluate(ssvm_dict[key], traj_group_test[key])
F1_ssvm.append(F1); pF1_ssvm.append(pF1); Tau_ssvm.append(tau)
F1, pF1, tau = evaluate(nnrand_dict[key], traj_group_test[key])
F1_nn.append(F1); pF1_nn.append(pF1); Tau_nn.append(tau)
print('SSVM: F1 (%.3f, %.3f), pairsF1 (%.3f, %.3f) Tau (%.3f, %.3f)' % \
(np.mean(F1_ssvm), np.std(F1_ssvm)/np.sqrt(len(F1_ssvm)), \
np.mean(pF1_ssvm), np.std(pF1_ssvm)/np.sqrt(len(pF1_ssvm)),
np.mean(Tau_ssvm), np.std(Tau_ssvm)/np.sqrt(len(Tau_ssvm))))
print('NNRAND: F1 (%.3f, %.3f), pairsF1 (%.3f, %.3f), Tau (%.3f, %.3f)' % \
(np.mean(F1_nn), np.std(F1_nn)/np.sqrt(len(F1_nn)), \
np.mean(pF1_nn), np.std(pF1_nn)/np.sqrt(len(pF1_nn)), \
np.mean(Tau_nn), np.std(Tau_nn)/np.sqrt(len(Tau_nn))))
Explanation: Sanity check for evaluation protocol
70/30 split of the trajectories that conform to each query.
End of explanation |
10,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We will first break out a training and testing set with an 80/20 split
Step1: Random Forests
Step2: A 94% accuracy with a 95% precision/recall/F1 score is quite high, but how does it compare to other methods?
Ada Boost
Step3: The scores are just slightly worse than the Random Forest
K-nearest neighbors
Step4: Very poor performance compared to the previous two. Also, so far it seems that the methods perform better at identifying non-spam than spam, as indicated by the lower precision, recall and F1 scores for the spam class
Support Vector Machines
Step5: Not great performance either
Naive Bayes
Step6: Also not great performance either
Decision Trees
Step7: Decision tree is decent, though not as good as the Random Forest
So the Random Forest is the best model for Spam detection, but most models seem to have poor precision when it comes to Spam and poor recall when it comes to non-spam
We can use all the models to create one voting model to try and increase the accuracy. | Python Code:
spamTesting, spamTrain = train_test_split(
data, test_size=0.8, random_state=5)
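# Note: train_test_split returns the (1 - test_size) portion first, so spamTesting holds 20% of the
# rows and spamTrain holds the remaining 80%, giving the 80/20 train/test split described above.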
len(spamTesting)
len(spamTrain)
Explanation: We will first break out a training and testing set with an 80/20 split
End of explanation
RndForClf = RandomForestClassifier(n_jobs=2, n_estimators=100, max_features="auto",random_state=620)
RndForPred = RndForClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], RndForPred)
print(classification_report(spamTesting[names[-1]], RndForPred,target_names=['Not Spam','Spam']))
Explanation: Random Forests
End of explanation
AdaBooClf = AdaBoostClassifier()
AdaBooPred = AdaBooClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], AdaBooPred)
print(classification_report(spamTesting[names[-1]], AdaBooPred,target_names=['Not Spam','Spam']))
Explanation: A 94% accuracy with a 95% precision/recall/F1 score is quite high, but how does it compare to other methods?
Ada Boost
End of explanation
KNNClf = KNeighborsClassifier(7)
KNNPred = KNNClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], KNNPred)
print(classification_report(spamTesting[names[-1]], KNNPred,target_names=['Not Spam','Spam']))
Explanation: The scores are just slightly worse than the Random Forest
K-nearest neighbors
End of explanation
#RBF
SVMRBFClf = SVC()
SVMRBFPred = SVMRBFClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], SVMRBFPred)
print(classification_report(spamTesting[names[-1]], SVMRBFPred,target_names=['Not Spam','Spam']))
Explanation: Very poor performance compared to the previous two. Also, so far it seems that the methods perform better at identifying non-spam than spam, as indicated by the lower precision, recall and F1 scores for the spam class
Support Vector Machines
End of explanation
GaussClf = GaussianNB()
GaussPred = GaussClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], GaussPred)
print(classification_report(spamTesting[names[-1]], GaussPred,target_names=['Not Spam','Spam']))
Explanation: Not great performance either
Naive Bayes
End of explanation
DecTreeClf = DecisionTreeClassifier(random_state=620)
DecTreePred = DecTreeClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], DecTreePred)
print(classification_report(spamTesting[names[-1]], DecTreePred,target_names=['Not Spam','Spam']))
Explanation: Also not a great performance
Decision Trees
End of explanation
VotingHardClf = VotingClassifier(estimators = [('RF',RndForClf),('Ada',AdaBooClf),('KNN',KNNClf),('SVNRBF',SVMRBFClf),('NB',GaussClf),('DecTree',DecTreeClf)],voting = 'hard')
VotingHardPred = VotingHardClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], VotingHardPred)
print(classification_report(spamTesting[names[-1]], VotingHardPred,target_names=['Not Spam','Spam']))
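# Possible extension (sketch only, not part of the original analysis): soft voting averages
# the predicted class probabilities instead of counting class votes. SVC needs
# probability=True to expose predict_proba, so a fresh instance is used for that slot.
VotingSoftClf = VotingClassifier(estimators = [('RF',RndForClf),('Ada',AdaBooClf),('KNN',KNNClf),('SVNRBF',SVC(probability=True)),('NB',GaussClf),('DecTree',DecTreeClf)],voting = 'soft')
VotingSoftPred = VotingSoftClf.fit(spamTrain[names[0:57]],spamTrain[names[-1]]).predict(spamTesting[names[0:57]])
accuracy_score(spamTesting[names[-1]], VotingSoftPred)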
Explanation: Decision tree is decent, though not as good as the Random Forest
So the Random Forest is the best model for spam detection, but most models seem to have poor precision when it comes to spam and poor recall when it comes to non-spam
We can use all the models to create one voting model to try and increase the accuracy.
End of explanation |
10,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chromosphere model
The chromosphere model, pytransit.ChromosphereModel, implements a transit over a thin-walled sphere, as described by Schlawin et al. (ApJL 722, 2010). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.ChromosphereModelCL.
Step1: Model initialization
The chromosphere model doesn't take any special initialization arguments, so the initialization is straightforward.
Step2: Data setup
Homogeneous time series
The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.
Step3: Model use
Evaluation
The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).
tm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit.
tm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as
[[k1, t01, p1, a1, i1, e1, w1],
[k2, t02, p2, a2, i2, e2, w2],
...
[kn, t0n, pn, an, in, en, wn]]
The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models.
Step4: Supersampling
The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.
Step5: Heterogeneous time series
PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.
If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve.
For example, a set of three light curves, two observed in one passband and the third in another passband
times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]
times_2 (lc = 1, pb = 0, lc) = [3, 4]
times_3 (lc = 2, pb = 1, sc) = [1, 5, 6]
Would be set up as
tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6],
lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2],
pbids = [0, 0, 1],
nsamples = [ 1, 10, 1],
exptimes = [0.1, 1.0, 0.1])
Example | Python Code:
%pylab inline
import sys
sys.path.append('..')
from pytransit import ChromosphereModel
seed(0)
times_sc = linspace(0.85, 1.15, 1000) # Short cadence time stamps
times_lc = linspace(0.85, 1.15, 100) # Long cadence time stamps
k, t0, p, a, i, e, w = 0.1, 1., 2.1, 3.2, 0.5*pi, 0.3, 0.4*pi
pvp = tile([k, t0, p, a, i, e, w], (50,1))
pvp[1:,0] += normal(0.0, 0.005, size=pvp.shape[0]-1)
pvp[1:,1] += normal(0.0, 0.02, size=pvp.shape[0]-1)
Explanation: Chromosphere model
The chromosphere model, pytransit.ChromosphereModel, implements a transit over a thin-walled sphere, as described by Schlawin et al. (ApJL 722, 2010). The model is parallelised using numba, and the number of threads can be set using the NUMBA_NUM_THREADS environment variable. An OpenCL version for GPU computation is implemented by pytransit.ChromosphereModelCL.
End of explanation
tm = ChromosphereModel()
Explanation: Model initialization
The chromosphere model doesn't take any special initialization arguments, so the initialization is straightforward.
End of explanation
tm.set_data(times_sc)
Explanation: Data setup
Homogeneous time series
The model needs to be set up by calling set_data() before it can be used. At its simplest, set_data takes the mid-exposure times of the time series to be modelled.
End of explanation
def plot_transits(tm, fmt='k'):
fig, axs = subplots(1, 3, figsize = (13,3), constrained_layout=True, sharey=True)
flux = tm.evaluate_ps(k, t0, p, a, i, e, w)
axs[0].plot(tm.time, flux, fmt)
axs[0].set_title('Individual parameters')
flux = tm.evaluate_pv(pvp[0])
axs[1].plot(tm.time, flux, fmt)
axs[1].set_title('Parameter vector')
flux = tm.evaluate_pv(pvp)
axs[2].plot(tm.time, flux.T, 'k', alpha=0.2);
axs[2].set_title('Parameter vector array')
setp(axs[0], ylabel='Normalised flux')
setp(axs, xlabel='Time [days]', xlim=tm.time[[0,-1]])
tm.set_data(times_sc)
plot_transits(tm)
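# Quick sanity check (added for illustration): with the 50-row parameter array pvp defined
# earlier and 1000 mid-exposure times, evaluate_pv returns one light curve per parameter
# vector, i.e. an array of shape (50, 1000) here.
flux_pvp = tm.evaluate_pv(pvp)
print(flux_pvp.shape)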
Explanation: Model use
Evaluation
The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector).
tm.evaluate_ps(k, t0, p, a, i, e=0, w=0) evaluates the model for a set of scalar parameters, where k is the radius ratio, t0 the zero epoch, p the orbital period, a the semi-major axis divided by the stellar radius, i the inclination in radians, e the eccentricity, and w the argument of periastron. Eccentricity and argument of periastron are optional, and omitting them defaults to a circular orbit.
tm.evaluate_pv(pv) evaluates the model for a 1D parameter vector, or 2D array of parameter vectors. In the first case, the parameter vector should be array-like with elements [k, t0, p, a, i, e, w]. In the second case, the parameter vectors should be stored in a 2d ndarray with shape (npv, 7) as
[[k1, t01, p1, a1, i1, e1, w1],
[k2, t02, p2, a2, i2, e2, w2],
...
[kn, t0n, pn, an, in, en, wn]]
The reason for the different options is that the model implementations may have optimisations that make the model evaluation for a set of parameter vectors much faster than if computing them separately. This is especially the case for the OpenCL models.
End of explanation
tm.set_data(times_lc, nsamples=10, exptimes=0.01)
plot_transits(tm)
Explanation: Supersampling
The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.
End of explanation
times_1 = linspace(0.85, 1.0, 500)
times_2 = linspace(1.0, 1.15, 10)
times = concatenate([times_1, times_2])
lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])
nsamples = [1, 10]
exptimes = [0, 0.0167]
tm.set_data(times, lcids, nsamples=nsamples, exptimes=exptimes)
plot_transits(tm, 'k.-')
Explanation: Heterogeneous time series
PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands.
If a time series contains several light curves, it also needs the light curve indices for each exposure. These are given through lcids argument, which should be an array of integers. If the time series contains light curves observed in different passbands, the passband indices need to be given through pbids argument as an integer array, one per light curve. Supersampling can also be defined on per-light curve basis by giving the nsamplesand exptimes as arrays with one value per light curve.
For example, a set of three light curves, two observed in one passband and the third in another passband
times_1 (lc = 0, pb = 0, sc) = [1, 2, 3, 4]
times_2 (lc = 1, pb = 0, lc) = [3, 4]
times_3 (lc = 2, pb = 1, sc) = [1, 5, 6]
Would be set up as
tm.set_data(time = [1, 2, 3, 4, 3, 4, 1, 5, 6],
lcids = [0, 0, 0, 0, 1, 1, 2, 2, 2],
pbids = [0, 0, 1],
nsamples = [ 1, 10, 1],
exptimes = [0.1, 1.0, 0.1])
Example: two light curves with different cadences
End of explanation |
10,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 6 - Learning Best Practices for Model Evaluation and Hyperparameter Tuning
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
Step1: <br>
<br>
Sections
Streamlining workflows with pipelines
Loading the Breast Cancer Wisconsin dataset
Combining transformers and estimators in a pipeline
Using k-fold cross validation to assess model performances
Debugging algorithms with learning curves
Diagnosing bias and variance problems with learning curves
Addressing over- and underfitting with validation curves
Fine-tuning machine learning models via grid search
Tuning hyperparameters via grid search
Algorithm selection with nested cross-validation
Looking at different performance evaluation metrics
Reading a confusion matrix
Optimizing the precision and recall of a classification model
Plotting a receiver operating characteristic
Scoring metrics for multiclass classification
<br>
<br>
[back to top]
Streamlining workflows with pipelines
[back to top]
Loading the Breast Cancer Wisconsin dataset
Step2: <br>
<br>
Combining transformers and estimators in a pipeline
[back to top]
Step3: <br>
<br>
Using k-fold cross validation to assess model performances
[back to top]
Step4: <br>
<br>
Debugging algorithms with learning curves
[back to top]
<br>
<br>
Diagnosing bias and variance problems with learning curves
[back to top]
Step5: <br>
<br>
Addressing over- and underfitting with validation curves
[back to top]
Step6: <br>
<br>
Fine-tuning machine learning models via grid search
[back to top]
<br>
<br>
Tuning hyperparameters via grid search
[back to top]
Step7: <br>
<br>
Algorithm selection with nested cross-validation
[back to top]
Step8: <br>
<br>
Looking at different performance evaluation metrics
[back to top]
<br>
<br>
Reading a confusion matrix
[back to top]
Step9: <br>
<br>
Optimizing the precision and recall of a classification model
[back to top]
Step10: <br>
<br>
Plotting a receiver operating characteristic
[back to top]
Step11: <br>
<br>
Scoring metrics for multiclass classification
[back to top] | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 6 - Learning Best Practices for Model Evaluation and Hyperparameter Tuning
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data', header=None)
df.head()
df.shape
from sklearn.preprocessing import LabelEncoder
X = df.loc[:, 2:].values
y = df.loc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)
le.transform(['M', 'B'])
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.20, random_state=1)
Explanation: <br>
<br>
Sections
Streamlining workflows with pipelines
Loading the Breast Cancer Wisconsin dataset
Combining transformers and estimators in a pipeline
Using k-fold cross validation to assess model performances
Debugging algorithms with learning curves
Diagnosing bias and variance problems with learning curves
Addressing over- and underfitting with validation curves
Fine-tuning machine learning models via grid search
Tuning hyperparameters via grid search
Algorithm selection with nested cross-validation
Looking at different performance evaluation metrics
Reading a confusion matrix
Optimizing the precision and recall of a classification model
Plotting a receiver operating characteristic
Scoring metrics for multiclass classification
<br>
<br>
[back to top]
Streamlining workflows with pipelines
[back to top]
Loading the Breast Cancer Wisconsin dataset
End of explanation
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(random_state=1))])
pipe_lr.fit(X_train, y_train)
print('Test Accuracy: %.3f' % pipe_lr.score(X_test, y_test))
y_pred = pipe_lr.predict(X_test)
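# Side note (illustrative, not from the book text): the fitted steps inside the pipeline are
# accessible through named_steps, e.g. the variance captured by the two principal
# components used ahead of the classifier.
print(pipe_lr.named_steps['pca'].explained_variance_ratio_)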
Explanation: <br>
<br>
Combining transformers and estimators in a pipeline
[back to top]
End of explanation
import numpy as np
from sklearn.cross_validation import StratifiedKFold
kfold = StratifiedKFold(y=y_train,
n_folds=10,
random_state=1)
scores = []
for k, (train, test) in enumerate(kfold):
pipe_lr.fit(X_train[train], y_train[train])
score = pipe_lr.score(X_train[test], y_train[test])
scores.append(score)
print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1, np.bincount(y_train[train]), score))
print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator=pipe_lr,
X=X_train,
y=y_train,
cv=10,
n_jobs=1)
print('CV accuracy scores: %s' % scores)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
Explanation: <br>
<br>
Using k-fold cross validation to assess model performances
[back to top]
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.learning_curve import learning_curve
pipe_lr = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(penalty='l2', random_state=0))])
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipe_lr,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/learning_curve.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Debugging algorithms with learning curves
[back to top]
<br>
<br>
Diagnosing bias and variance problems with learning curves
[back to top]
End of explanation
from sklearn.learning_curve import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
estimator=pipe_lr,
X=X_train,
y=y_train,
param_name='clf__C',
param_range=param_range,
cv=10)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='blue', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='blue')
plt.plot(param_range, test_mean,
color='green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/validation_curve.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Addressing over- and underfitting with validation curves
[back to top]
End of explanation
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,
'clf__kernel': ['linear']},
{'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel': ['rbf']}]
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
clf = gs.best_estimator_
clf.fit(X_train, y_train)
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: <br>
<br>
Fine-tuning machine learning models via grid search
[back to top]
<br>
<br>
Tuning hyperparameters via grid search
[back to top]
End of explanation
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=5)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
from sklearn.tree import DecisionTreeClassifier
gs = GridSearchCV(estimator=DecisionTreeClassifier(random_state=0),
param_grid=[{'max_depth': [1, 2, 3, 4, 5, 6, 7, None]}],
scoring='accuracy',
cv=5)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
Explanation: <br>
<br>
Algorithm selection with nested cross-validation
[back to top]
End of explanation
from sklearn.metrics import confusion_matrix
pipe_svc.fit(X_train, y_train)
y_pred = pipe_svc.predict(X_test)
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
fig, ax = plt.subplots(figsize=(2.5, 2.5))
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3)
for i in range(confmat.shape[0]):
for j in range(confmat.shape[1]):
ax.text(x=j, y=i, s=confmat[i, j], va='center', ha='center')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.tight_layout()
# plt.savefig('./figures/confusion_matrix.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Looking at different performance evaluation metrics
[back to top]
<br>
<br>
Reading a confusion matrix
[back to top]
End of explanation
from sklearn.metrics import precision_score, recall_score, f1_score
print('Precision: %.3f' % precision_score(y_true=y_test, y_pred=y_pred))
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))
from sklearn.metrics import make_scorer, f1_score
scorer = make_scorer(f1_score, pos_label=0)
c_gamma_range = [0.01, 0.1, 1.0, 10.0]
param_grid = [{'clf__C': c_gamma_range,
'clf__kernel': ['linear']},
{'clf__C': c_gamma_range,
'clf__gamma': c_gamma_range,
'clf__kernel': ['rbf'],}]
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring=scorer,
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
Explanation: <br>
<br>
Optimizing the precision and recall of a classification model
[back to top]
End of explanation
from sklearn.metrics import roc_curve, auc
from scipy import interp
X_train2 = X_train[:, [4, 14]]
cv = StratifiedKFold(y_train, n_folds=3, random_state=1)
fig = plt.figure(figsize=(7, 5))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train2[train],
y_train[train]).predict_proba(X_train2[test])
fpr, tpr, thresholds = roc_curve(y_train[test],
probas[:, 1],
pos_label=1)
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr,
tpr,
lw=1,
label='ROC fold %d (area = %0.2f)'
% (i+1, roc_auc))
plt.plot([0, 1],
[0, 1],
linestyle='--',
color=(0.6, 0.6, 0.6),
label='random guessing')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 0, 1],
[0, 1, 1],
lw=2,
linestyle=':',
color='black',
label='perfect performance')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc="lower right")
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
pipe_svc = pipe_svc.fit(X_train2, y_train)
y_pred2 = pipe_svc.predict(X_test[:, [4, 14]])
from sklearn.metrics import roc_auc_score, accuracy_score
print('ROC AUC: %.3f' % roc_auc_score(y_true=y_test, y_score=y_pred2))
print('Accuracy: %.3f' % accuracy_score(y_true=y_test, y_pred=y_pred2))
Explanation: <br>
<br>
Plotting a receiver operating characteristic
[back to top]
End of explanation
pre_scorer = make_scorer(score_func=precision_score,
pos_label=1,
greater_is_better=True,
average='micro')
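# Example usage (sketch): a scorer built with make_scorer can be passed to GridSearchCV in
# place of a metric-name string. gs_micro is a name introduced only for this illustration.
gs_micro = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring=pre_scorer,
cv=10)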
Explanation: <br>
<br>
Scoring metrics for multiclass classification
[back to top]
End of explanation |
10,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
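# The stored mean/std let us map scaled values back to real units later on.
# Small illustrative check (not part of the original project code):
mean, std = scaled_features['cnt']
print((data['cnt'][:3]*std + mean).values) # first few ride counts back on their original scale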
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer (identity activation f(x) = x for the regression output)
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error # the output activation is the identity f(x) = x, so its derivative is 1
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
# (the two updates below are applied once per batch, after the loop over records)
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer (identity activation for the regression output)
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
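# Illustrative check (not in the original project): when comparing the hyperparameter
# choices discussed below, the minimum validation loss is a handy single number to track.
print("Best validation loss with {} hidden nodes: {:.3f}".format(hidden_nodes, min(losses['validation'])))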
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
10,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use topic model by gensim
This document will show you how to use topic model by gensim.
The data for this tutorial is from the Recruit HotPepper Beauty API, so you need an API key for it.
If you get api key, then execute below scripts.
scripts/download_data.py
scripts/make_corpus.py
python download_data.py your_api_key
It is for downloading the JSON data from the API (it extracts hair salon data near Tokyo).
python make_corpus path_to_downloaded_json_file
It is for making the corpus from json data. You can set some options to restrict the words in corpus. Please see the help of this script.
After executing above scripts, you will have corpus and dictionary in your data folder.
Then, execute this notebook.
Preparation
Step1: Load Corpus
Step2: Make Topic Model
Step3: Evaluate/Visualize Topic Model
Check the distance between each topic
If we succeed in categorizing the documents well, then the topics should be far apart from each other.
Step4: Check the topics in documents
If we succeed in categorizing the documents well, each document should have one dominant topic.
Step5: Visualize words in topics
To help decide on a name for each topic, show the words in the topics.
# enable showing matplotlib image inline
%matplotlib inline
# autoreload module
%load_ext autoreload
%autoreload 2
PROJECT_ROOT = "/"
def load_local_package():
import os
import sys
root = os.path.join(os.getcwd(), "./")
sys.path.append(root) # load project root
return root
PROJECT_ROOT = load_local_package()
Explanation: How to use topic model by gensim
This document will show you how to use topic model by gensim.
The data for this tutorial is from the Recruit HotPepper Beauty API, so you need an API key for it.
If you get api key, then execute below scripts.
scripts/download_data.py
scripts/make_corpus.py
python download_data.py your_api_key
It is for downloading the JSON data from the API (it extracts hair salon data near Tokyo).
python make_corpus path_to_downloaded_json_file
It is for making the corpus from json data. You can set some options to restrict the words in corpus. Please see the help of this script.
After executing above scripts, you will have corpus and dictionary in your data folder.
Then, execute this notebook.
Preparation
End of explanation
prefix = "salons"
def load_corpus(p):
import os
import json
from gensim import corpora
s_path = os.path.join(PROJECT_ROOT, "./data/{0}.json".format(p))
d_path = os.path.join(PROJECT_ROOT, "./data/{0}_dict.dict".format(p))
c_path = os.path.join(PROJECT_ROOT, "./data/{0}_corpus.mm".format(p))
s = []
with open(s_path, "r", encoding="utf-8") as f:
s = json.load(f)
d = corpora.Dictionary.load(d_path)
c = corpora.MmCorpus(c_path)
return s, d, c
salons, dictionary, corpus = load_corpus(prefix)
print(dictionary)
print(corpus)
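# Optional peek (illustrative): token2id maps each token to its integer id, which helps
# sanity-check that the corpus and dictionary were built as expected.
someTokens = sorted(dictionary.token2id.items(), key=lambda kv: kv[1])[:10]
print(someTokens)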
Explanation: Load Corpus
End of explanation
from gensim import models
topic_range = range(2, 5)
test_rate = 0.2
def split_corpus(c, rate_or_size):
import math
size = 0
if isinstance(rate_or_size, float):
size = math.floor(len(c) * rate_or_size)
else:
size = rate_or_size
# simple split, not take sample randomly
left = c[:-size]
right = c[-size:]
return left, right
def calc_perplexity(m, c):
import numpy as np
return np.exp(-m.log_perplexity(c))
def search_model(c, rate_or_size):
most = [1.0e6, None]
training, test = split_corpus(c, rate_or_size)
print("dataset: training/test = {0}/{1}".format(len(training), len(test)))
for t in topic_range:
m = models.LdaModel(corpus=training, id2word=dictionary, num_topics=t, iterations=250, passes=5)
p1 = calc_perplexity(m, training)
p2 = calc_perplexity(m, test)
print("{0}: perplexity is {1}/{2}".format(t, p1, p2))
if p2 < most[0]:
most[0] = p2
most[1] = m
return most[0], most[1]
perplexity, model = search_model(corpus, test_rate)
print("Best model: topics={0}, perplexity={1}".format(model.num_topics, perplexity))
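# Perplexity is one lens; topic coherence is another common check. A minimal sketch using
# gensim's CoherenceModel (u_mass works directly on the corpus; availability depends on the
# installed gensim version):
from gensim.models import CoherenceModel
coherence = CoherenceModel(model=model, corpus=corpus, dictionary=dictionary, coherence="u_mass").get_coherence()
print("u_mass coherence of the selected model: {0}".format(coherence))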
Explanation: Make Topic Model
End of explanation
def calc_topic_distances(m, topic):
import numpy as np
def kldiv(p, q):
distance = np.sum(p * np.log(p / q))
return distance
# get probability of each words
# https://github.com/piskvorky/gensim/blob/develop/gensim/models/ldamodel.py#L733
t = m.state.get_lambda()
for i, p in enumerate(t):
t[i] = t[i] / t[i].sum()
base = t[topic]
distances = [(i_p[0], kldiv(base, i_p[1])) for i_p in enumerate(t) if i_p[0] != topic]
return distances
def plot_distance_matrix(m):
import numpy as np
import matplotlib.pylab as plt
# make distance matrix
mt = []
for i in range(m.num_topics):
d = calc_topic_distances(m, i)
d.insert(i, (i, 0)) # distance between same topic
d = [_d[1] for _d in d]
mt.append(d)
mt = np.array(mt)
# plot matrix
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_aspect("equal")
plt.imshow(mt, interpolation="nearest", cmap=plt.cm.ocean)
plt.yticks(range(mt.shape[0]))
plt.xticks(range(mt.shape[1]))
plt.colorbar()
plt.show()
plot_distance_matrix(model)
Explanation: Evaluate/Visualize Topic Model
Check the distance between each topic
If we succeed in categorizing the documents well, then the topics should be far apart from each other.
End of explanation
def show_document_topics(c, m, sample_size=200, width=1):
import random
import numpy as np
import matplotlib.pylab as plt
# make document/topics matrix
d_topics = []
t_documents = {}
samples = random.sample(range(len(c)), sample_size)
for s in samples:
ts = m.__getitem__(corpus[s], -1)
d_topics.append([v[1] for v in ts])
max_topic = max(ts, key=lambda x: x[1])
if max_topic[0] not in t_documents:
t_documents[max_topic[0]] = []
t_documents[max_topic[0]] += [(s, max_topic[1])]
d_topics = np.array(d_topics)
for t in t_documents:
t_documents[t] = sorted(t_documents[t], key=lambda x: x[1], reverse=True)
# draw cumulative bar chart
fig = plt.figure(figsize=(20, 3))
N, K = d_topics.shape
indices = np.arange(N)
height = np.zeros(N)
bar = []
for k in range(K):
color = plt.cm.coolwarm(k / K, 1)
p = plt.bar(indices, d_topics[:, k], width, bottom=None if k == 0 else height, color=color)
height += d_topics[:, k]
bar.append(p)
plt.ylim((0, 1))
plt.xlim((0, d_topics.shape[0]))
topic_labels = ['Topic #{}'.format(k) for k in range(K)]
plt.legend([b[0] for b in bar], topic_labels)
plt.show(bar)
return d_topics, t_documents
document_topics, topic_documents = show_document_topics(corpus, model)
num_show_ranks = 5
for t in topic_documents:
print("Topic #{0} salons".format(t) + " " + "*" * 100)
for i, v in topic_documents[t][:num_show_ranks]:
print("{0}({1}):{2}".format(salons[i]["name"], v, salons[i]["urls"]["pc"]))
Explanation: Check the topics in documents
If we succeed in categorizing the documents well, each document should have one dominant topic.
End of explanation
def visualize_topic(m, word_count=10, fontsize_base=10):
import matplotlib.pylab as plt
from matplotlib.font_manager import FontProperties
font = lambda s: FontProperties(fname=r'C:\Windows\Fonts\meiryo.ttc', size=s)
# get words in topic
topic_words = []
for t in range(m.num_topics):
words = m.show_topic(t, topn=word_count)
topic_words.append(words)
# plot words
fig = plt.figure(figsize=(8, 5))
for i, ws in enumerate(topic_words):
sub = fig.add_subplot(1, m.num_topics, i + 1)
plt.ylim(0, word_count + 0.5)
plt.xticks([])
plt.yticks([])
plt.title("Topic #{}".format(i))
for j, (share, word) in enumerate(ws):
size = fontsize_base + (fontsize_base * share * 2)
w = "%s(%1.3f)" % (word, share)
plt.text(0.1, word_count-j-0.5, w, ha="left", fontproperties=font(size))
plt.tight_layout()
plt.show()
visualize_topic(model)
Explanation: Visualize words in topics
To help decide on a name for each topic, show the words in the topics.
End of explanation |
10,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives
Step6: The pipeline uses a mix of custom and pre-build components.
Pre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution
Step7: The custom components execute in a container image defined in base_image/Dockerfile.
Step8: The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
Step9: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
Step10: HINT
Step11: Build the trainer image
Step12: Note
Step13: Build the base image for custom components
Step14: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler.
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
Step15: Use the CLI compiler to compile the pipeline
Exercise
Compile the covertype_training_pipeline.py with the dsl-compile command line
Step16: The result is the covertype_training_pipeline.yaml file.
Step17: Deploy the pipeline package
Exercise
Upload the pipeline to the Kubeflow cluster using the kfp command line
Step18: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
Step19: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .
Step20: Exercise
Run the pipeline using the kfp command line. Here are some of the variable
you will have to use to pass to the pipeline | Python Code:
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
Explanation: Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives:
1. Learn how to use Kubeflow Pipeline (KFP) pre-built components (BigQuery, AI Platform training and predictions)
1. Learn how to use KFP lightweight python components
1. Learn how to build a KFP with these components
1. Learn how to compile, upload, and run a KFP with the command line
In this lab, you will build, deploy, and run a KFP pipeline that orchestrates BigQuery and AI Platform services to train, tune, and deploy a scikit-learn model.
Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the covertype_training_pipeline.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
End of explanation
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
    """Prepares the data sampling query."""
    sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
    MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
    """
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = # TO DO: Complete the command
bigquery_query_op = # TO DO: Use the pre-build bigquery/query component
mlengine_train_op = # TO DO: Use the pre-build ml_engine/train
mlengine_deploy_op = # TO DO: Use the pre-build ml_engine/deploy component
retrieve_best_run_op = # TO DO: Package the retrieve_best_run function into a lightweight component
evaluate_model_op = # TO DO: Package the evaluate_model function into a lightweight component
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
    """Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = # TODO - use the bigquery_query_op
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = # TO DO: Use the bigquery_query_op
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = # TO DO: Use the mlengine_train_op
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = # TO DO: Use the mlengine_train_op
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
Explanation: The pipeline uses a mix of custom and pre-built components.
Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution:
BigQuery query component
AI Platform Training component
AI Platform Deploy component
Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's Lightweight Python Components mechanism. The code for the components is in the helper_components.py file:
Retrieve Best Run. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job.
Evaluate Model. This component evaluates a sklearn trained model using a provided metric and a testing dataset.
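As a general illustration of the Lightweight Python Components mechanism (a sketch only, not the lab solution; the helper function and base image below are illustrative placeholders), wrapping a Python function into a component op looks roughly like this:
# Sketch: turning an arbitrary Python helper into a KFP component op.
# `add_one` and the base image are placeholders, not part of the lab code.
from kfp.components import func_to_container_op
def add_one(x: int) -> int:
    return x + 1
add_one_op = func_to_container_op(add_one, base_image='python:3.7')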
Exercise
Complete the TO DOs in the pipeline file below.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!cat base_image/Dockerfile
Explanation: The custom components execute in a container image defined in base_image/Dockerfile.
End of explanation
!cat trainer_image/Dockerfile
Explanation: The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
End of explanation
!gsutil ls
Explanation: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
End of explanation
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
Explanation: HINT:
For ENDPOINT, use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
For ARTIFACT_STORE_URI, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'
End of explanation
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
Explanation: Build the trainer image
End of explanation
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
Explanation: Note: Please ignore any incompatibility ERROR that may appear for the package visions, as it will not affect the lab's functionality.
End of explanation
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
Explanation: Build the base image for custom components
End of explanation
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
Explanation: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler:
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account, change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
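For orientation only, the SDK-API alternative mentioned above could look roughly like the sketch below; it assumes the pipeline module is importable from the notebook, that the environment variables above are already set, and the output file name is just an illustrative choice.
# Hedged sketch: compiling the pipeline with the KFP SDK API instead of dsl-compile.
import kfp
from pipeline.covertype_training_pipeline import covertype_train  # assumption: module is importable
kfp.compiler.Compiler().compile(covertype_train, 'covertype_training_pipeline.yaml')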
End of explanation
# TO DO: Your code goes here
Explanation: Use the CLI compiler to compile the pipeline
Exercise
Compile the covertype_training_pipeline.py with the dsl-compile command line:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!head covertype_training_pipeline.yaml
Explanation: The result is the covertype_training_pipeline.yaml file.
End of explanation
PIPELINE_NAME='covertype_continuous_training'
# TO DO: Your code goes here
Explanation: Deploy the pipeline package
Exercise
Upload the pipeline to the Kubeflow cluster using the kfp command line:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!kfp --endpoint $ENDPOINT pipeline list
Explanation: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
End of explanation
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
Explanation: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID.
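As a hedged sketch of the SDK alternative to the kfp CLI (the lab itself expects the CLI), a run could be submitted with kfp.Client using the constants defined in the cell above:
# Illustrative sketch only: submitting a run through the KFP SDK client.
import kfp
client = kfp.Client(host=ENDPOINT)
experiment = client.create_experiment(EXPERIMENT_NAME)
run = client.run_pipeline(
    experiment.id,
    job_name=RUN_ID,
    pipeline_id=PIPELINE_ID,
    params={
        'project_id': PROJECT_ID,
        'region': REGION,
        'source_table_name': SOURCE_TABLE,
        'gcs_root': GCS_STAGING_PATH,
        'dataset_id': DATASET_ID,
        'evaluation_metric_name': EVALUATION_METRIC,
        'evaluation_metric_threshold': EVALUATION_METRIC_THRESHOLD,
        'model_id': MODEL_ID,
        'version_id': VERSION_ID,
        'replace_existing_version': REPLACE_EXISTING_VERSION,
    })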
End of explanation
# TO DO: Your code goes here
Explanation: Exercise
Run the pipeline using the kfp command line. Here are some of the variables
you will have to pass to the pipeline:
EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command
RUN_ID is the name of the run. You can use an arbitrary name
PIPELINE_ID is the id of your pipeline. Use the value retrieved by the kfp pipeline list command
GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the staging folder in your artifact store.
REGION is a compute region for AI Platform Training and Prediction.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation |
10,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Got Scotch?
In this notebook, we're going to create a dashboard that recommends scotches based on their taste profiles.
Step1: Load Data <span style="float
Step2: We now define a get_similar( ) function to return the data of the top n similar scotches to a given scotch.
Step3: We also need a function on_pick_scotch that will display a table of the top 5 similar scotches that Radar View watches, based on a given selected Scotch. | Python Code:
%matplotlib widget
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import os
import ipywidgets as widgets
from traitlets import Unicode, List, Instance, link, HasTraits
from IPython.display import display, clear_output, HTML, Javascript
display(widgets.Button())
Explanation: Got Scotch?
In this notebook, we're going to create a dashboard that recommends scotches based on their taste profiles.
End of explanation
features = [[2, 2, 2, 0, 0, 2, 1, 2, 2, 2, 2, 2],
[3, 3, 1, 0, 0, 4, 3, 2, 2, 3, 3, 2],
[1, 3, 2, 0, 0, 2, 0, 0, 2, 2, 3, 1],
[4, 1, 4, 4, 0, 0, 2, 0, 1, 2, 1, 0],
[2, 2, 2, 0, 0, 1, 1, 1, 2, 3, 1, 3],
[2, 3, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
[0, 2, 0, 0, 0, 1, 1, 0, 2, 2, 3, 1],
[2, 3, 1, 0, 0, 2, 1, 2, 2, 2, 2, 2],
[2, 2, 1, 0, 0, 1, 0, 0, 2, 2, 2, 1],
[2, 3, 2, 1, 0, 0, 2, 0, 2, 1, 2, 3],
[4, 3, 2, 0, 0, 2, 1, 3, 3, 0, 1, 2],
[3, 2, 1, 0, 0, 3, 2, 1, 0, 2, 2, 2],
[4, 2, 2, 0, 0, 2, 2, 0, 2, 2, 2, 2],
[2, 2, 1, 0, 0, 2, 2, 0, 0, 2, 3, 1],
[3, 2, 2, 0, 0, 3, 1, 1, 2, 3, 2, 2],
[2, 2, 2, 0, 0, 2, 2, 1, 2, 2, 2, 2],
[1, 2, 1, 0, 0, 0, 1, 1, 0, 2, 2, 1],
[2, 2, 2, 0, 0, 1, 2, 2, 2, 2, 2, 2],
[2, 2, 3, 1, 0, 2, 2, 1, 1, 1, 1, 3],
[1, 1, 2, 2, 0, 2, 2, 1, 2, 2, 2, 3],
[1, 2, 1, 1, 0, 1, 1, 1, 1, 2, 2, 1],
[3, 1, 4, 2, 1, 0, 2, 0, 2, 1, 1, 0],
[1, 3, 1, 0, 0, 1, 1, 0, 2, 2, 2, 1],
[3, 2, 3, 3, 1, 0, 2, 0, 1, 1, 2, 0],
[2, 2, 2, 0, 1, 2, 2, 1, 2, 2, 1, 2],
[2, 3, 2, 1, 0, 0, 1, 0, 2, 2, 2, 1],
[4, 2, 2, 0, 0, 1, 2, 2, 2, 2, 2, 2],
[3, 2, 2, 1, 0, 1, 2, 2, 1, 2, 3, 2],
[2, 2, 2, 0, 0, 2, 1, 0, 1, 2, 2, 1],
[2, 2, 1, 0, 0, 2, 1, 1, 1, 3, 2, 2],
[2, 3, 1, 1, 0, 0, 0, 0, 1, 2, 2, 1],
[2, 3, 1, 0, 0, 2, 1, 1, 4, 2, 2, 2],
[2, 3, 1, 1, 1, 1, 1, 2, 0, 2, 0, 3],
[2, 3, 1, 0, 0, 2, 1, 1, 1, 1, 2, 1],
[2, 1, 3, 0, 0, 0, 3, 1, 0, 2, 2, 3],
[1, 2, 0, 0, 0, 1, 0, 1, 2, 1, 2, 1],
[2, 3, 1, 0, 0, 1, 2, 1, 2, 1, 2, 2],
[1, 2, 1, 0, 0, 1, 2, 1, 2, 2, 2, 1],
[3, 2, 1, 0, 0, 1, 2, 1, 1, 2, 2, 2],
[2, 2, 2, 2, 0, 1, 0, 1, 2, 2, 1, 3],
[1, 3, 1, 0, 0, 0, 1, 1, 1, 2, 0, 1],
[1, 3, 1, 0, 0, 1, 1, 0, 1, 2, 2, 1],
[4, 2, 2, 0, 0, 2, 1, 4, 2, 2, 2, 2],
[3, 2, 1, 0, 0, 2, 1, 2, 1, 2, 3, 2],
[2, 4, 1, 0, 0, 1, 2, 3, 2, 3, 2, 2],
[1, 3, 1, 0, 0, 0, 0, 0, 0, 2, 2, 1],
[1, 2, 0, 0, 0, 1, 1, 1, 2, 2, 3, 1],
[1, 2, 1, 0, 0, 1, 2, 0, 0, 2, 2, 1],
[2, 3, 1, 0, 0, 2, 2, 2, 1, 2, 2, 2],
[1, 2, 1, 0, 0, 1, 2, 0, 1, 2, 2, 1],
[2, 2, 1, 1, 0, 1, 2, 0, 2, 1, 2, 1],
[2, 3, 1, 0, 0, 1, 1, 2, 1, 2, 2, 2],
[2, 3, 1, 0, 0, 2, 2, 2, 2, 2, 1, 2],
[2, 2, 3, 1, 0, 2, 1, 1, 1, 2, 1, 3],
[1, 3, 1, 1, 0, 2, 2, 0, 1, 2, 1, 1],
[2, 1, 2, 2, 0, 1, 1, 0, 2, 1, 1, 3],
[2, 3, 1, 0, 0, 2, 2, 1, 2, 1, 2, 2],
[4, 1, 4, 4, 1, 0, 1, 2, 1, 1, 1, 0],
[4, 2, 4, 4, 1, 0, 0, 1, 1, 1, 0, 0],
[2, 3, 1, 0, 0, 1, 1, 2, 0, 1, 3, 1],
[1, 1, 1, 1, 0, 1, 1, 0, 1, 2, 1, 1],
[3, 2, 1, 0, 0, 1, 1, 1, 3, 3, 2, 2],
[4, 3, 1, 0, 0, 2, 1, 4, 2, 2, 3, 2],
[2, 1, 1, 0, 0, 1, 1, 1, 2, 1, 2, 1],
[2, 4, 1, 0, 0, 1, 0, 0, 2, 1, 1, 1],
[3, 2, 2, 0, 0, 2, 3, 3, 2, 1, 2, 2],
[2, 2, 2, 2, 0, 0, 2, 0, 2, 2, 2, 3],
[1, 2, 2, 0, 1, 2, 2, 1, 2, 3, 1, 3],
[2, 1, 2, 2, 1, 0, 1, 1, 2, 2, 2, 3],
[2, 3, 2, 1, 1, 1, 2, 1, 0, 2, 3, 1],
[3, 2, 2, 0, 0, 2, 2, 2, 2, 2, 3, 2],
[2, 2, 1, 1, 0, 2, 1, 1, 2, 2, 2, 2],
[2, 4, 1, 0, 0, 2, 1, 0, 0, 2, 1, 1],
[2, 2, 1, 0, 0, 1, 0, 1, 2, 2, 2, 1],
[2, 2, 2, 2, 0, 2, 2, 1, 2, 1, 0, 3],
[2, 2, 1, 0, 0, 2, 2, 2, 3, 3, 3, 2],
[2, 3, 1, 0, 0, 0, 2, 0, 2, 1, 3, 1],
[4, 2, 3, 3, 0, 1, 3, 0, 1, 2, 2, 0],
[1, 2, 1, 0, 0, 2, 0, 1, 1, 2, 2, 1],
[1, 3, 2, 0, 0, 0, 2, 0, 2, 1, 2, 1],
[2, 2, 2, 1, 0, 0, 2, 0, 0, 0, 2, 3],
[1, 1, 1, 0, 0, 1, 0, 0, 1, 2, 2, 1],
[2, 3, 2, 0, 0, 2, 2, 1, 1, 2, 0, 3],
[0, 3, 1, 0, 0, 2, 2, 1, 1, 2, 1, 1],
[2, 2, 1, 0, 0, 1, 0, 1, 2, 1, 0, 3],
[2, 3, 0, 0, 1, 0, 2, 1, 1, 2, 2, 1]]
feature_names = ['Body', 'Sweetness', 'Smoky',
'Medicinal', 'Tobacco', 'Honey',
'Spicy', 'Winey', 'Nutty',
'Malty', 'Fruity', 'cluster']
brand_names = ['Aberfeldy',
'Aberlour',
'AnCnoc',
'Ardbeg',
'Ardmore',
'ArranIsleOf',
'Auchentoshan',
'Auchroisk',
'Aultmore',
'Balblair',
'Balmenach',
'Belvenie',
'BenNevis',
'Benriach',
'Benrinnes',
'Benromach',
'Bladnoch',
'BlairAthol',
'Bowmore',
'Bruichladdich',
'Bunnahabhain',
'Caol Ila',
'Cardhu',
'Clynelish',
'Craigallechie',
'Craigganmore',
'Dailuaine',
'Dalmore',
'Dalwhinnie',
'Deanston',
'Dufftown',
'Edradour',
'GlenDeveronMacduff',
'GlenElgin',
'GlenGarioch',
'GlenGrant',
'GlenKeith',
'GlenMoray',
'GlenOrd',
'GlenScotia',
'GlenSpey',
'Glenallachie',
'Glendronach',
'Glendullan',
'Glenfarclas',
'Glenfiddich',
'Glengoyne',
'Glenkinchie',
'Glenlivet',
'Glenlossie',
'Glenmorangie',
'Glenrothes',
'Glenturret',
'Highland Park',
'Inchgower',
'Isle of Jura',
'Knochando',
'Lagavulin',
'Laphroig',
'Linkwood',
'Loch Lomond',
'Longmorn',
'Macallan',
'Mannochmore',
'Miltonduff',
'Mortlach',
'Oban',
'OldFettercairn',
'OldPulteney',
'RoyalBrackla',
'RoyalLochnagar',
'Scapa',
'Speyburn',
'Speyside',
'Springbank',
'Strathisla',
'Strathmill',
'Talisker',
'Tamdhu',
'Tamnavulin',
'Teaninich',
'Tobermory',
'Tomatin',
'Tomintoul',
'Tormore',
'Tullibardine']
features_df = pd.DataFrame(features, columns=feature_names, index=brand_names)
features_df = features_df.drop('cluster', axis=1)
norm = (features_df ** 2).sum(axis=1).apply('sqrt')
normed_df = features_df.divide(norm, axis=0)
sim_df = normed_df.dot(normed_df.T)
def radar(df, ax=None):
# calculate evenly-spaced axis angles
num_vars = len(df.columns)
theta = 2*np.pi * np.linspace(0, 1-1./num_vars, num_vars)
# rotate theta such that the first axis is at the top
theta += np.pi/2
if not ax:
fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(1,1,1, projection='polar')
else:
ax.clear()
for d, color in zip(df.itertuples(), sns.color_palette()):
ax.plot(theta, d[1:], color=color, alpha=0.7)
ax.fill(theta, d[1:], facecolor=color, alpha=0.5)
ax.set_xticklabels(df.columns)
legend = ax.legend(df.index, loc=(0.9, .95))
return ax
class RadarWidget(HasTraits):
factors_keys = List(['Aberfeldy'])
def __init__(self, df, **kwargs):
self.df = df
super(RadarWidget, self).__init__(**kwargs)
self.ax = None
self.factors_keys_changed()
def factors_keys_changed(self):
new_value = self.factors_keys
if self.ax:
self.ax.clear()
self.ax = radar(self.df.loc[new_value], self.ax)
Explanation: Load Data <span style="float: right; font-size: 0.5em"><a href="#Got-Scotch?">Top</a></span>
End of explanation
def get_similar(name, n, top=True):
a = sim_df[name].sort_values(ascending=False)
a.name = 'Similarity'
df = pd.DataFrame(a) #.join(features_df).iloc[start:end]
return df.head(n) if top else df.tail(n)
Explanation: We now define a get_similar( ) function to return the data of the top n similar scotches to a given scotch.
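As a quick sanity check of this helper (a sketch; it assumes the cosine-similarity frame sim_df built above), one could print the top matches for a single brand:
# Example usage of get_similar: top 3 matches for one brand (the brand itself comes first).
print(get_similar('Aberfeldy', 3))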
End of explanation
def on_pick_scotch(Scotch):
name = Scotch
# Get top 6 similar whiskeys, and remove this one
top_df = get_similar(name, 6).iloc[1:]
# Get bottom 5 similar whiskeys
df = top_df
# Make table index a set of links that the radar widget will watch
df.index = ['''<a class="scotch" href="#" data-factors_keys='["{}","{}"]'>{}</a>'''.format(name, i, i) for i in df.index]
tmpl = f'''<p>If you like {name} you might want to try these five brands. Click one to see how its taste profile compares.</p>'''
prompt_w.value = tmpl
table.value = df.to_html(escape=False)
radar_w.factors_keys = [name]
plot = radar_w.factors_keys_changed()
prompt_w = widgets.HTML(value='Aberfeldy')
display(prompt_w)
table = widgets.HTML(
value="Hello <b>World</b>"
)
display(table)
radar_w = RadarWidget(df=features_df)
picker_w = widgets.interact(on_pick_scotch, Scotch=list(sim_df.index))
radar_w.factors_keys
Explanation: We also need a function on_pick_scotch that will display a table of the top 5 similar scotches that Radar View watches, based on a given selected Scotch.
End of explanation |
10,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Analysis
One important area of application for information theory is time series analysis. Here, we will demonstrate how to compute the modes of information flow --- intrinsic, shared, and synergistic --- between the two dimensions of the tinkerbell attractor.
Step1: Here we define a few constants for this notebook
Step2: Generating the Time Series
We write a generator for our two time series
Step3: And then we generate the time series
Step4: And we plot the attractor because it's pretty
Step5: Discretizing the Time Series
Step6: Constructing a Distribution from the Time Series
Step7: Finally, we assign helpful variable names to the indicies of the distribution
Step8: Measuring the Modes of Information Flow | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import dit
from dit.inference import binned, dist_from_timeseries
from dit.multivariate import total_correlation as I, intrinsic_total_correlation as IMI
dit.ditParams['repr.print'] = True
Explanation: Time Series Analysis
One important area of application for information theory is time series analysis. Here, we will demonstrate how to compute the modes of information flow --- intrinsic, shared, and synergistic --- between the two dimensions of the tinkerbell attractor.
End of explanation
TRANSIENTS = 1000
ITERATIONS = 1000000
BINS = 3
HISTORY_LENGTH = 2
Explanation: Here we define a few constants for this notebook:
End of explanation
def tinkerbell(x=None, y=None, a=0.9, b=-0.6013, c=2.0, d=0.5):
if x is None:
x = np.random.random() - 1
if y is None:
y = np.random.random() - 1
while True:
x, y = x**2 - y**2 + a*x + b*y, 2*x*y + c*x + d*y
yield x, y
Explanation: Generating the Time Series
We write a generator for our two time series:
End of explanation
tb = tinkerbell()
# throw away transients
[next(tb) for _ in range(TRANSIENTS)]
time_series = np.asarray([next(tb) for _ in range(ITERATIONS)])
Explanation: And then we generate the time series:
End of explanation
plt.figure(figsize=(8, 6))
plt.scatter(time_series[:,0], time_series[:,1], alpha=0.1, s=0.01)
Explanation: And we plot the attractor because it's pretty:
End of explanation
binary_time_series = binned(time_series, bins=BINS)
print(binary_time_series[:10])
Explanation: Discretizing the Time Series
End of explanation
time_series_distribution = dist_from_timeseries(binary_time_series, history_length=HISTORY_LENGTH)
time_series_distribution
Explanation: Constructing a Distribution from the Time Series
End of explanation
x_past = [0]
y_past = [1]
x_pres = [2]
y_pres = [3]
Explanation: Finally, we assign helpful variable names to the indices of the distribution:
End of explanation
intrinsic_x_to_y = IMI(time_series_distribution, [x_past, y_pres], y_past)
time_delayed_mutual_information_x_to_y = I(time_series_distribution, [x_past, y_pres])
transfer_entropy_x_to_y = I(time_series_distribution, [x_past, y_pres], y_past)
shared_x_to_y = time_delayed_mutual_information_x_to_y - intrinsic_x_to_y
synergistic_x_to_y = transfer_entropy_x_to_y - intrinsic_x_to_y
print(f"Flows from x to y:\n\tIntrinsic: {intrinsic_x_to_y}\n\tShared: {shared_x_to_y}\n\tSynergistic: {synergistic_x_to_y}")
intrinsic_y_to_x = IMI(time_series_distribution, [y_past, x_pres], x_past)
time_delayed_mutual_information_y_to_x = I(time_series_distribution, [y_past, x_pres])
transfer_entropy_y_to_x = I(time_series_distribution, [y_past, x_pres], x_past)
shared_y_to_x = time_delayed_mutual_information_y_to_x - intrinsic_y_to_x
synergistic_y_to_x = transfer_entropy_y_to_x - intrinsic_y_to_x
print(f"Flows from y to x:\n\tIntrinsic: {intrinsic_y_to_x}\n\tShared: {shared_y_to_x}\n\tSynergistic: {synergistic_y_to_x}")
Explanation: Measuring the Modes of Information Flow
End of explanation |
10,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute induced power in the source space with dSPM
Returns STC files ie source estimates of induced power
for different bands in the source space. The inverse method
is linear based on dSPM inverse operator.
Step1: Set parameters
Step2: plot mean power | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_band_induced_power
print(__doc__)
Explanation: Compute induced power in the source space with dSPM
Returns STC files ie source estimates of induced power
for different bands in the source space. The inverse method
is linear based on dSPM inverse operator.
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax, event_id = -0.2, 0.5, 1
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
events = events[:10] # take 10 events to keep the computation time low
# Use linear detrend to reduce any edge artifacts
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True, detrend=1)
# Compute a source estimate per frequency band
bands = dict(alpha=[9, 11], beta=[18, 22])
stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2,
use_fft=False, n_jobs=None)
for b, stc in stcs.items():
stc.save('induced_power_%s' % b, overwrite=True)
Explanation: Set parameters
End of explanation
plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha')
plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta')
plt.xlabel('Time (ms)')
plt.ylabel('Power')
plt.legend()
plt.title('Mean source induced power')
plt.show()
Explanation: plot mean power
End of explanation |
10,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Climate Data
Step1: Above
Step2: One way to interface with the GDP is with the interactive web interface, shown below. In this interface, you can upload a shapefile or draw on the screen to define a polygon region, then you specify the statistics and datasets you want to use via dropdown menus.
Step3: Here we use the python interface to the GDP, called PyGDP, which allows for scripting. You can get the code and documentation at https
Step4: Now just to show that we can access more than climate model time series, let's extract precipitation data from a dry winter (1936-1937) and a normal winter (2009-2010) for Texas County and look at the spatial patterns.
We'll use the netCDF4-Python library, which allows us to open OPeNDAP datasets just as if they were local NetCDF files. | Python Code:
from IPython.core.display import Image
Image('http://www-tc.pbs.org/kenburns/dustbowl/media/photos/s2571-lg.jpg')
Explanation: Exploring Climate Data: Past and Future
Roland Viger, Rich Signell, USGS
First presented at the 2012 Unidata Workshop: Navigating Earth System Science Data, 9-13 July.
What if you were watching Ken Burns's "The Dust Bowl", saw the striking image below, and wondered: "How much precipitation there really was back in the dustbowl years?" How easy is it to access and manipulate climate data in a scientific analysis? Here we'll show some powerful tools that make it easy.
End of explanation
import numpy as np
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import urllib
import os
from IPython.core.display import HTML
import time
import datetime
import pandas as pd
%matplotlib inline
import pyGDP
import numpy as np
import matplotlib.dates as mdates
import owslib
owslib.__version__
pyGDP.__version__
Explanation: Above:Dust storm hits Hooker, OK, June 4, 1937.
To find out how much rainfall was there during the dust bowl years, we can use the USGS/CIDA GeoDataPortal (GDP) which can compute statistics of a gridded field within specified shapes, such as county outlines. Hooker is in Texas County, Oklahoma, so here we use the GDP to compute a historical time series of mean precipitation in Texas County using the PRISM dataset. We then compare to climate forecast projections to see if similar droughts are predicted to occur in the future, and what the impact of different climate scenarios might be.
End of explanation
HTML('<iframe src=http://screencast.com/t/K7KTcaFrSUc width=800 height=600></iframe>')
Explanation: One way to interface with the GDP is with the interactive web interface, shown below. In this interface, you can upload a shapefile or draw on the screen to define a polygon region, then you specify the statistics and datasets you want to use via dropdown menus.
End of explanation
# Create a pyGDP object
myGDP = pyGDP.pyGDPwebProcessing()
# Let's see what shapefiles are already available on the GDP server
# this changes with time, since uploaded shapefiles are kept for a few days
shapefiles = myGDP.getShapefiles()
print 'Available Shapefiles:'
for s in shapefiles:
print s
# Is our shapefile there already?
# If not, upload it.
OKshapeFile = 'upload:OKCNTYD'
if not OKshapeFile in shapefiles:
shpfile = myGDP.uploadShapeFile('OKCNTYD.zip')
# Let's check the attributes of the shapefile
attributes = myGDP.getAttributes(OKshapeFile)
print "Shapefile attributes:"
for a in attributes:
print a
# In this particular example, we are interested in attribute = 'DESCRIP',
# which provides the County names for Oklahoma
user_attribute = 'DESCRIP'
values = myGDP.getValues(OKshapeFile, user_attribute)
print "Shapefile attribute values:"
for v in values:
print v
# we want Texas County, Oklahoma, which is where Hooker is located
user_value = 'Texas'
# Let's see what gridded datasets are available for the GDP to operate on
dataSets = myGDP.getDataSetURI()
print "Available gridded datasets:"
for d in dataSets:
print d[0]
dataSets[0][0]
df = pd.DataFrame(dataSets[1:],columns=['title','abstract','urls'])
df.head()
print df['title']
df.ix[20].urls
# If you choose a DAP URL, use the "dods:" prefix, even
# if the list above has a "http:" prefix.
# For example: dods://cida.usgs.gov/qa/thredds/dodsC/prism
# Let's see what data variables are in our dataset
dataSetURI = 'dods://cida.usgs.gov/thredds/dodsC/prism'
dataTypes = myGDP.getDataType(dataSetURI)
print "Available variables:"
for d in dataTypes:
print d
# Let's see what the available time range is for our data variable
variable = 'ppt' # precip
timeRange = myGDP.getTimeRange(dataSetURI, variable)
for t in timeRange:
print t
timeBegin = '1900-01-01T00:00:00Z'
timeEnd = '2012-08-01T00:00:00Z'
# Once we have our shapefile, attribute, value, dataset, datatype, and timerange as inputs, we can go ahead
# and submit our request.
name1='gdp_texas_county_prism.csv'
if not os.path.exists(name1):
url_csv = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataSetURI, variable,
timeBegin, timeEnd, user_attribute, user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(url_csv,name1)
# load historical PRISM precip
df1=pd.read_csv(name1,skiprows=3,parse_dates=True,index_col=0,
names=['date','observed precip'])
df1.plot(figsize=(12,2),
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
df1 = pd.stats.moments.rolling_mean(df1,36,center=True)
df1.plot(figsize=(12,2),
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
HTML('<iframe src=http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html width=900 height=350></iframe>')
#hayhoe_URI ='dods://cida-eros-thredds1.er.usgs.gov:8082/thredds/dodsC/dcp/conus_grid.w_meta.ncml'
dataset ='dods://cida.usgs.gov/thredds/dodsC/maurer/maurer_brekke_w_meta.ncml'
variable = 'sresa2_gfdl-cm2-1_1_Prcp'
timeRange = myGDP.getTimeRange(dataset, variable)
timeRange
# retrieve the GFDL model A2 more "Business-as-Usual" scenario:
time0=time.time();
name2='sresa2_gfdl-cm2-1_1_Prcp.csv'
if not os.path.exists(name2):
variable = 'sresa2_gfdl-cm2-1_1_Prcp'
result2 = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataset, variable,
timeRange[0],timeRange[1],user_attribute,user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(result2,name2)
print('elapsed time=%d s' % (time.time()-time0))
# now retrieve the GFDL model B1 "Eco-Friendly" scenario:
time0=time.time();
name3='sresb1_gfdl-cm2-1_1_Prcp.csv'
if not os.path.exists(name3):
variable = 'sresb1_gfdl-cm2-1_1_Prcp'
result3 = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataset, variable,
timeRange[0],timeRange[1],user_attribute,user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(result3,name3)
print('elapsed time=%d s' % (time.time()-time0))
# Load the GDP result for: "Business-as-Usual" scenario:
# load historical PRISM precip
df2=pd.read_csv(name2,skiprows=3,parse_dates=True,index_col=0,
names=['date','GFDL A2'])
# Load the GDP result for: "Eco-Friendly" scenario:
df3=pd.read_csv(name3,skiprows=3,parse_dates=True,index_col=0,
names=['date','GFDL B1'])
# convert mm/day to mm/month (approximate):
ts_rng = pd.date_range(start='1/1/1900',end='1/1/2100',freq='30D')
ts = pd.DataFrame(index=ts_rng)
df2['GFDL B1'] = df3['GFDL B1']*30.
df2['GFDL A2'] = df2['GFDL A2']*30.
df2 = pd.stats.moments.rolling_mean(df2,36,center=True)
df2 = pd.concat([df2,ts],axis=1).interpolate(limit=1)
df2['OBS'] = pd.concat([df1,ts],axis=1).interpolate(limit=1)['observed precip']
# interpolate
ax=df2.plot(figsize=(12,2),legend=False,
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
ax.legend(loc='upper right');
Explanation: Here we use the python interface to the GDP, called PyGDP, which allows for scripting. You can get the code and documentation at https://github.com/USGS-CIDA/pyGDP.
End of explanation
import netCDF4
url='http://cida.usgs.gov/thredds/dodsC/prism'
box = [-102,36.5,-100.95,37] # Bounding box for Texas County, Oklahoma
box = [-104,36.,-100,39.0] # Bounding box for larger dust bowl region
# define a mean precipitation function, here hard-wired for the PRISM data
def mean_precip(nc,bbox=None,start=None,stop=None):
lon=nc.variables['lon'][:]
lat=nc.variables['lat'][:]
tindex0=netCDF4.date2index(start,nc.variables['time'],select='nearest')
tindex1=netCDF4.date2index(stop,nc.variables['time'],select='nearest')
bi=(lon>=box[0])&(lon<=box[2])
bj=(lat>=box[1])&(lat<=box[3])
p=nc.variables['ppt'][tindex0:tindex1,bj,bi]
latmin=np.min(lat[bj])
p=np.mean(p,axis=0)
lon=lon[bi]
lat=lat[bj]
return p,lon,lat
nc = netCDF4.Dataset(url)
p,lon,lat = mean_precip(nc,bbox=box,start=datetime.datetime(1936,11,1,0,0),
stop=datetime.datetime(1937,4,1,0,0))
p2,lon,lat = mean_precip(nc,bbox=box,start=datetime.datetime(1940,11,1,0,0),
stop=datetime.datetime(1941,4,1,0,0))
latmin = np.min(lat)
import cartopy.crs as ccrs
import cartopy.feature as cfeature
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
fig = plt.figure(figsize=(12,5))
ax = fig.add_axes([0.1, 0.15, 0.3, 0.8],projection=ccrs.PlateCarree())
pc = ax.pcolormesh(lon, lat, p, cmap=plt.cm.jet_r,vmin=0,vmax=40)
plt.title('Precip in Dust Bowl Region: Winter 1936-1937')
ax.add_feature(states_provinces,edgecolor='gray')
ax.text(-101,36.86,'Hooker')
ax.plot(-101,36.86,'o')
cb = plt.colorbar(pc, orientation='horizontal')
cb.set_label('Precip (mm/month)')
ax2 = fig.add_axes([0.6, 0.15, 0.3, 0.8],projection=ccrs.PlateCarree())
pc2 = ax2.pcolormesh(lon, lat, p2, cmap=plt.cm.jet_r,vmin=0,vmax=40)
plt.title('Precip in Dust Bowl Region: Winter 1940-1941')
ax2.add_feature(states_provinces,edgecolor='gray')
ax2.text(-101,36.86,'Hooker')
ax2.plot(-101,36.86,'o')
cb2 = plt.colorbar(pc2, orientation='horizontal')
cb2.set_label('Precip (mm/month)')
plt.show()
Explanation: Now just to show that we can access more than climate model time series, let's extract precipitation data from a dry winter (1936-1937) and a more normal winter (1940-1941) for Texas County and look at the spatial patterns.
We'll use the netCDF4-Python library, which allows us to open OPeNDAP datasets just as if they were local NetCDF files.
End of explanation |
10,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datasets to download
Here we list a few datasets that might be interesting to explore with vaex.
New York taxi dataset
The very well known dataset containing trip infromation from the iconic Yellow Taxi company in NYC.
The raw data is curated by the Taxi & Limousine Commission (TLC).
See for instance Analyzing 1.1 Billion NYC Taxi and Uber Trips, with a Vengeance for some ideas.
Year
Step1: Gaia - European Space Agency
Gaia is an ambitious mission to chart a three-dimensional map of our Galaxy, the Milky Way, in the process revealing the composition, formation and evolution of the Galaxy.
See the Gaia Science Homepage for details, and you may want to try the Gaia Archive for ADQL (SQL like) queries.
Step2: U.S. Airline Dataset
This dataset contains information on flights within the United States between 1988 and 2018.
The original data can be downloaded from United States Department of Transportation.
Year 1988-2018 - 180 million rows - 17GB
One can also stream it from S3
Step3: Sloan Digital Sky Survey (SDSS)
The data is public and can be queried from the SDSS archive.
The original query at SDSS archive was (although split in small parts)
Step4: Helmi & de Zeeuw 2000
Result of an N-body simulation of the accretion of 33 satellite galaxies into a Milky Way dark matter halo.
* 3 million rows - 252MB | Python Code:
import vaex
import warnings; warnings.filterwarnings("ignore")
df = vaex.open('/data/yellow_taxi_2009_2015_f32.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
long_min = -74.05
long_max = -73.75
lat_min = 40.58
lat_max = 40.90
df.plot(df.pickup_longitude, df.pickup_latitude, f="log1p", limits=[[-74.05, -73.75], [40.58, 40.90]], show=True);
Explanation: Datasets to download
Here we list a few datasets that might be interesting to explore with vaex.
New York taxi dataset
The very well known dataset containing trip infromation from the iconic Yellow Taxi company in NYC.
The raw data is curated by the Taxi & Limousine Commission (TLC).
See for instance Analyzing 1.1 Billion NYC Taxi and Uber Trips, with a Vengeance for some ideas.
Year: 2015 - 146 million rows - 12GB
Year 2009-2015 - 1 billion rows - 107GB
One can also stream the data directly from S3. Only the data that is necessary will be streamed, and it will be cached locally:
import vaex
df = vaex.open('s3://vaex/taxi/yellow_taxi_2015_f32s.hdf5?anon=true')
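As a small illustration of the kind of out-of-core aggregation vaex supports on this frame (a sketch; the column name is the one used above):
# Sketch: binned count of pickups along longitude, computed without loading the data into memory.
counts = df.count(binby=df.pickup_longitude, limits=[-74.05, -73.75], shape=64)
print(counts)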
End of explanation
df = vaex.open('/data/gaia-dr2-sort-by-source_id.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.plot("ra", "dec", f="log", limits=[[360, 0], [-90, 90]], show=True);
Explanation: Gaia - European Space Agency
Gaia is an ambitious mission to chart a three-dimensional map of our Galaxy, the Milky Way, in the process revealing the composition, formation and evolution of the Galaxy.
See the Gaia Science Homepage for details, and you may want to try the Gaia Archive for ADQL (SQL like) queries.
End of explanation
df = vaex.open('/data/airline/us_airline_data_1988_2018.hd5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.head(5)
Explanation: U.S. Airline Dataset
This dataset contains information on flights within the United States between 1988 and 2018.
The original data can be downloaded from United States Department of Transportation.
Year 1988-2018 - 180 million rows - 17GB
One can also stream it from S3:
import vaex
df = vaex.open('s3://vaex/airline/us_airline_data_1988_2018.hdf5?anon=true')
End of explanation
df = vaex.open('/data/sdss/sdss-clean-stars-dered.hdf5')
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.healpix_plot(df.healpix9, show=True, f="log1p", healpix_max_level=9, healpix_level=9,
healpix_input='galactic', healpix_output='galactic', rotation=(0,45)
)
Explanation: Sloan Digital Sky Survey (SDSS)
The data is public and can be queried from the SDSS archive.
The original query at SDSS archive was (although split in small parts):
SELECT ra, dec, g, r from PhotoObjAll WHERE type = 6 and clean = 1 and r>=10.0 and r<23.5;
End of explanation
df = vaex.datasets.helmi_de_zeeuw.fetch() # this will download it on the fly
print(f'number of rows: {df.shape[0]:,}')
print(f'number of columns: {df.shape[1]}')
df.plot([["x", "y"], ["Lz", "E"]], f="log", figsize=(12,5), show=True, limits='99.99%');
Explanation: Helmi & de Zeeuw 2000
Result of an N-body simulation of the accretion of 33 satellite galaxies into a Milky Way dark matter halo.
* 3 million rows - 252MB
End of explanation |
10,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Übung 7
Step1: Aufbau der Basisdaten
Laden der Matlab Daten
Step2: Darstellung in einer Adjazenzmatrix
Zur Erinnerung, innerhalb einer Adjazenzmatrix wird für jede Kante zwischen zwei Knoten eine 1 eingetragen, ansonsten eine 0.
In der folgenden Grafik sind einige große Rechtecke zu sehen. D.h. dass es in der Matrix einige Bereiche gibt in denen die Knoten stark untereinander verbunden sind. In den jeweiligen Ecken befinden sich wenige Knoten, die eine "Verbindung" zum nächsten Bereich besitzen. Das bedeutet, dass es in dem zu Grunde liegenden Graph einige wenige Knoten gibt, die als Verbindungsstück zu einem anderen Bereich fungieren. Tendenziell wird viel Datenverkehr über diese Knoten laufen, wenn Knoten aus unterschiedlichen Bereichen miteinander kommunizieren wollen. Diese Knoten sind ein Nadelöhr im gesamten Graph. Wenn Sie wegfallen entstehen einzelne Inseln die untereinander nicht kommunizieren können.
Step3: Darstellung als Graph
Im folgenden Graph sind die einzelnen Inseln ganz gut zu erkennen.
Step4: nx.dfs_successors
In den Aufgaben wird als Wurzel immer die 42 verwendet. Über die Funktion nx.dfs_successors wird aus dem ursprünglichen ungereichteten nx.MultiGraph ein gerichteter nx.DiGraph erzeugt. Die der Wurzel folgenden Knoten usw. enthalten deren Nachfolger nach depth first search-Mechanik. Die Datenstruktur ist ein dict mit den Ids der Knoten und einer Liste der Nachfolger.
Step5: nx.dfs_tree
Im Gegensatz zu nx.dfs_successors liefert diese Methode kein dict sondern den eigentlichen Graphen. Wichtige Methoden zu Analyse sind an dieser Stelle
Step6: Zähler der Nachbarn
Die folgende Methode dient zum Zählen der Nachbarn, wenn der Graph rekursiv durchgegangen wird.
Step7: Limit der Rekursionstiefe
Zum aktuellen Zeitpunkt ist der Standardwert der Rekursionstiefe auf 1000 begrenzt. In der folgenden Beispielen wird eine höhere Tiefe erforderlich.
Step10: Aufgabe 7.1 Crawlen unstrukturierter Netze
7.1.1 Vergleich der mittleren Knotengrade
Implementierung der Algorithmen
An dieser Stelle sind zunächst Tiefen- und Breitensuche implementiert und erklärt. Der reine Vergleich folgt später im Dokument.
Step11: Man kann an den einzeln ermittelten Knotengraden sehen, dass selbst bei einer hohen Anzahl an Knoten (>= 800) ein ungenaues Vergleichsergebnis zum tatsächlichen mittleren Knotengrad ergibt. Eine Annäherung ist trotzdem zu erkennen.
Neben der Ermittlung der Knotengrade kann man sehen, dass ein Crawling mittels BFS schneller ist als die DFS Umsetzung. Bei einer sehr hohen Anzahl an durchsuchten Knoten empfiehlt sich die iterative Implementierung, da ansonsten wieder Heap-Probleme bei einer Rekursion auftreten.
Der tatsächliche mittlere Knotengrad berechnet sich auch der Summe aller Nachbarn aller Knoten des ungerichteten Graphs $g$ durch die Anzahl der Knoten.
$\dfrac{\sum_{i=0}^{n} neighbors(g[i])}{n}$
Step13: Plot der (C)CDF
Step15: Man kann im obigen Beispiel sehen, dass es nur wenige Knoten gibt, die mehr als ca. 250 Nachbarn besitzen. Daher ist die Grenze von 100% schnell erreicht und verfälscht den Bereich 0 < x < 250. Dieser soll näher betrachtet werden.
Step16: Hier kann man sehen, dass noch rund 80% aller Knoten weniger als 70 Nachbarn besitzen. Das plotten in einem doppelt logarithmischen Maßstab macht an dieser Stelle keinen Sinn.
Step18: Der folgende Code ist definitiv verbesserungswürdig. Es sollen die einzelnen Anzahlen an Nachbarn, über Breiten- und Tiefensuche mit einer maximalen Anzahl von 800 und 1600 und des gesamte Graphs ermittelt werden. Danach erfolgt eine Darstellung der CDFs nebeneinander.
Vergleich der Knotengrade der einzelnen Algorithmen
Step20: 7.1.2 Random Walk
Hier wird der Graph nicht mehr anhand seiner Struktur durchlaufen, sondern es werden zufällig Knoten ausgewählt und deren Nachbarn untersucht.
Step21: 7.2 Angriffe auf unstrukturierte Overlays
Step22: 7.3 Graphen-Metriken
Step23: 7.3.1 Globaler Cluster Koeffizient
Siehe NetworkX API Doc
Wenn man den Clustering Koeffizienten eines einzelnen Knotens per Hand ausrechnen möchte benötigt man die Anzahl an Nachbarn des Knoten (Knotengrad) und die Anzahl der Verbindungen unter diesen Nachbarn $N_v$.
$CC(v)$
Step24: Beispiel für den Knoten 3.
4 Nachbarn
Step25: 7.3.2 Betweenness Centrality
Siehe NetworkX API Doc
Zur Berechnung der Betweenness Centrality eines einzelnen Knotens $v$, müssen zunächst alle kürzesten Pfade gefunden werden, die durch diesen Knoten $v$ verlaufen.
Man bildet die Paare der kürzesten Pfade.
Für jedes Paar $(s, t)$ werden die Anzahlen der kürzesten Pfade $\sigma_{(s, t)}$ zueinander gezählt.
Davon werden die Anzahl der kürzesten Pfade, die durch den gewählten Knoten verlaufen $\sigma_{(s, t|v)}$ geteilt durch die Anzahl der kürzestens Pfade zwischen den Knoten = $betweenness_i$.
Den Ablauf 2. und 3. nimmt man für jedes in 1. ermittelte Paar vor. Die Summe aus den einzelnen Ergebnissen aus 3. bildet die Betweenness Centrality des Knotens $betweenness = \sum_{i = 0}^{n} betweenness_i$, wobei $n$ die Anzahl der Paare ist.
Siehe auch
Step26: Im folgenden werden alle kürzesten Pfade gesammelt.
Step28: Beispiel für Knoten 7
Kürzeste Pfade durch den Knoten
Step30: Ermittlung der einzelnen Summanden zur Betweenness centrality
Step31: Beispiel für Knoten 8
Kürzeste Pfade durch den Knoten
Step32: Beispiel für Knoten 6
Kürzeste Pfade durch den Knoten | Python Code:
import matplotlib
import matplotlib.pylab as pl
import matplotlib.pyplot as plt
import numpy as np
import scipy.io
import networkx as nx
def print_divider(separator='-', length=80):
print(''.join([separator for _ in range(length)]))
print()
def print_heading(msg='', separator='-'):
print(' ')
print(msg)
print_divider(separator=separator, length=len(msg))
plt.style.use('ggplot')
matplotlib.rcParams['figure.figsize'] = [12., 8.]
Explanation: Exercise 7
End of explanation
# https://docs.scipy.org/doc/scipy/reference/tutorial/io.html
mat = scipy.io.loadmat('matrix.mat')
print('loadmat:\n%s' % str(mat))
# 'A': <4039x4039 sparse matrix of type '<class 'numpy.float64'>'
# with 176468 stored elements in Compressed Sparse Column format>
sparse_matrix = mat['A']
print('\nShape:\n%s' % str(sparse_matrix.shape))
print('\nType:\n%s' % type(sparse_matrix))
print('\nSparse Matrix:\n%s' % sparse_matrix)
print('\nCoordinate Matrix:\n%s' % sparse_matrix.tocoo())
Explanation: Building the base data
Loading the Matlab data
End of explanation
# http://stackoverflow.com/questions/18651869/scipy-equivalent-for-matlab-spy
pl.spy(sparse_matrix, precision=0.01, markersize=.1)
pl.show()
# https://jakevdp.github.io/blog/2012/10/14/scipy-sparse-graph-module-word-ladders/
# from scipy.sparse import csgraph
# bft = csgraph.breadth_first_tree(sparse_matrix, 0, directed=False)
# print(bft)
graph = nx.from_scipy_sparse_matrix(sparse_matrix, create_using=nx.MultiGraph())
Explanation: Representation as an adjacency matrix
As a reminder, an adjacency matrix contains a 1 for every edge between two nodes and a 0 otherwise.
The following figure shows several large rectangles, i.e. the matrix contains regions in which the nodes are densely connected with each other. In the corners of these regions there are only a few nodes that provide a "connection" to the next region. This means that the underlying graph has a small number of nodes that act as links between regions. When nodes from different regions want to communicate with each other, most of the traffic will tend to run through these nodes. They are a bottleneck of the whole graph: if they disappear, isolated islands emerge that can no longer communicate with each other.
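To back up this observation programmatically, one could, as a sketch that is not part of the original exercise, list the articulation points, i.e. the nodes whose removal disconnects the graph:
# Sketch: articulation points correspond to the bottleneck nodes described above.
# nx.articulation_points expects a plain undirected Graph, so the MultiGraph is converted first.
simple_graph = nx.Graph(graph)
cut_vertices = list(nx.articulation_points(simple_graph))
print('number of articulation points: %d' % len(cut_vertices))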
End of explanation
nx.draw(graph, node_shape='o', node_size=12, node_color='#000000', edge_color='#aaaaaa')
plt.show()
Explanation: Visualization as a graph
In the following graph the individual islands are quite easy to spot.
End of explanation
root_id = 42
print_heading('Wurzel %d' % root_id, separator='=')
print('Nachbarn: %s\n' % nx.neighbors(graph, root_id))
print_heading('``nx.dfs_successors`` der Wurzel')
node = nx.dfs_successors(graph, root_id)
for _id in sorted(node.keys())[:10]:
print(' id: %d' % _id)
print('successors: %s' % node[_id])
Explanation: nx.dfs_successors
Node 42 is always used as the root in the exercises. Using the function nx.dfs_successors, the original undirected nx.MultiGraph is turned into a directed structure. The nodes following the root, and so on, contain their successors in depth-first-search order. The data structure is a dict mapping node ids to a list of successors.
End of explanation
dfs_tree = nx.dfs_tree(graph, 42)
print_heading('Knoten des Baums der Tiefensuche')
print(dfs_tree.nodes()[:10])
root_id = 42
dfs_tree = nx.dfs_tree(graph, root_id)
edges = dfs_tree.edges()[:10]
print_heading('Kanten des Baums der Tiefensuche')
for edge in edges:
print(edge)
Explanation: nx.dfs_tree
In contrast to nx.dfs_successors, this method does not return a dict but the actual graph. Important methods for the analysis at this point are:
G.nodes()
G.edges()
nodes returns all nodes of the graph and edges returns all edges. Of course it can happen that a node has no further connection to other nodes, since this is a directed graph. If the neighbors are to be analyzed (nx.neighbors(graph, node)), this analysis therefore has to be performed on the originally loaded graph.
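A small sketch of this difference (assuming the NetworkX behaviour used elsewhere in this notebook, where nx.neighbors returns a list):
# Successors in the directed DFS tree vs. neighbors in the original undirected graph.
print(len(list(dfs_tree.successors(42))), len(nx.neighbors(graph, 42)))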
End of explanation
neighbor_count = 0
def neighbor_counter(node, debug=False):
global neighbor_count
if debug:
print('node {:5d}, neighbors: {:5d}'.format(node, len(nx.neighbors(graph, node))))
neighbor_count += len(nx.neighbors(graph, node))
Explanation: Neighbor counter
The following function is used to count the neighbors while the graph is traversed recursively.
End of explanation
import sys
sys.setrecursionlimit(2000)
Explanation: Recursion depth limit
By default the recursion depth is limited to 1000. The following examples require a larger depth.
End of explanation
from datetime import datetime
root_id = 42
max_visits = (100, 200, 400, 800, 1600)
def timed(func):
start = datetime.now()
def wrapper(*args, **kwargs):
func(*args, **kwargs)
end = datetime.now()
print(' Dauer: %s' % str(end - start), end='\n\n')
return wrapper
def crawl_dfs(edges: list, node: int, max_visits: int, S: set, callback):
    """Crawl a graph along its edges using depth-first search (recursive variant).
    :param edges: list of 2-tuples containing all edges of the graph
    :param node: id of the starting node
    :param max_visits: maximum number of nodes to visit
    :param S: set of nodes that have already been visited
    :param callback: function with a single parameter that receives the visited node
    """
S.add(node)
callback(node)
neighbors = [e[1] for e in edges if e[0] == node]
for neighbor in neighbors:
if neighbor not in S and len(S) < max_visits:
crawl_dfs(edges, neighbor, max_visits, S, callback)
print_heading('Crawl des Graphs über Tiefensuche')
g = nx.dfs_tree(graph, root_id)
edges = g.edges()
@timed
def timed_crawl_dfs(max_visits):
global neighbor_count
neighbor_count = 0
crawl_dfs(edges, root_id, max_visits=count, S=set(), callback=neighbor_counter)
print('Knotengrad {:.4f} bei {:5d} besuchten Knoten'.format((neighbor_count / count), count))
for count in max_visits:
timed_crawl_dfs(count)
def crawl_dfs_iter(edges, root, max_visits=10):
    """Generator that crawls a graph along its edges using depth-first search
    (iterative variant), yielding the id of each visited node.
    :param edges: list of 2-tuples containing all edges of the graph
    :param root: id of the starting node
    :param max_visits: maximum number of nodes to visit
    """
visited = 0
S = set()
Q = []
Q.append(root)
while Q and visited < max_visits:
node = Q.pop()
yield node
visited += 1
if node not in S and visited < max_visits:
S.add(node)
neighbors = [e[1] for e in edges if e[0] == node]
            # reverse the neighbors so that pop() later returns
            # them in the correct order
neighbors.reverse()
for neighbor in neighbors:
Q.append(neighbor)
print_heading('Crawl des Graphs über iterative Tiefensuche')
g = nx.dfs_tree(graph, root_id)
edges = g.edges()
@timed
def timed_crawl_dfs_iter(max_visits):
neighbor_count = 0
for node in crawl_dfs_iter(edges, root_id, max_visits=count):
# print('node {:5d}, neighbors: {:5d}'.format(node, len(nx.neighbors(graph, node))))
neighbor_count += len(nx.neighbors(graph, node))
msg = 'Knotengrad {:.4f} bei {:5d} besuchten Knoten'
print(msg.format((neighbor_count / count), count))
for count in max_visits:
timed_crawl_dfs_iter(count)
def crawl_bfs(edges, Q=[], S=set(), max_visits=10, callback=lambda x: x):
next_q = []
while len(S) < max_visits:
for node in Q:
if node not in S and len(S) < max_visits:
S.add(node)
callback(node)
neighbors = [n[1] for n in edges if n[0] == node and n[1] not in S]
next_q += neighbors
crawl_bfs(edges, next_q, S, max_visits, callback)
print_heading('Crawl des Graphs über Breitensuche')
g = nx.bfs_tree(graph, root_id)
edges = g.edges()
@timed
def timed_crawl_bfs(max_visits):
global neighbor_count
neighbor_count = 0
crawl_bfs(edges, [root_id], max_visits=count, S=set(), callback=neighbor_counter)
print('Knotengrad {:10,.4f} bei {:5d} besuchten Knoten'.format((neighbor_count / count), count))
for count in max_visits:
timed_crawl_bfs(count)
def crawl_bfs_iter(edges, root, max_visits=10):
visited = 0
# Verwendung von Q als Queue (FIFO)
Q = []
S = set()
Q.append(root)
S.add(root)
while Q and visited < max_visits:
node = Q.pop(0)
yield node
visited += 1
neighbors = [e[1] for e in edges if e[0] == node]
for neighbor in neighbors:
if neighbor not in S:
S.add(neighbor)
Q.append(neighbor)
print_heading('Crawl des Graphs über iterative Breitensuche')
g = nx.bfs_tree(graph, root_id)
edges = g.edges()
@timed
def timed_crawl_bfs_iter(max_visits):
neighbor_count = 0
for node in crawl_bfs_iter(edges, root_id, max_visits=count):
# print('node {:5d}, neighbors: {:5d}'.format(node, len(nx.neighbors(graph, node))))
neighbor_count += len(nx.neighbors(graph, node))
print('Knotengrad {:10,.4f} bei {:5d} besuchten Knoten'.format((neighbor_count / count), count))
for count in max_visits:
timed_crawl_bfs_iter(count)
Explanation: Exercise 7.1 Crawling unstructured networks
7.1.1 Comparison of the mean node degrees
Implementation of the algorithms
At this point depth-first and breadth-first search are implemented and explained; the actual comparison follows later in the document.
End of explanation
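As a quick sanity check, the two iterative crawlers defined above can be run on a hypothetical mini graph (node ids and the printed orderings are only illustrative):
mini = nx.Graph([(0, 1), (0, 2), (1, 3), (2, 4)])
print(list(crawl_bfs_iter(nx.bfs_tree(mini, 0).edges(), 0, max_visits=5)))  # e.g. [0, 1, 2, 3, 4]
print(list(crawl_dfs_iter(nx.dfs_tree(mini, 0).edges(), 0, max_visits=5)))  # e.g. [0, 1, 3, 2, 4]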
sum([len(nx.neighbors(graph, n)) for n in graph.nodes()]) / len(graph.nodes())
Explanation: The individually estimated node degrees show that even a fairly large number of visited nodes (>= 800) still yields an imprecise estimate of the true mean node degree, although a convergence towards it is visible.
Besides the degree estimates, one can see that crawling via BFS is faster than the DFS implementation. For a very large number of visited nodes the iterative implementations are preferable, since the recursive variants otherwise run into recursion-depth problems again.
The true mean node degree is the sum of the neighbour counts of all nodes of the undirected graph $g$ divided by the number of nodes:
$\dfrac{\sum_{i=0}^{n} neighbors(g[i])}{n}$
End of explanation
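A small cross-check of the same quantity, assuming graph is undirected as the analysis above treats it (mean degree $= 2|E| / |V|$):
# for an undirected graph the mean degree also equals 2 * |E| / |V|
print(2.0 * len(graph.edges()) / len(graph.nodes()))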
def ecdf(values):
"""Returns a list of 2-tuples with the x and y values of the
points of an empirical distribution function.
Example:
Given the values (2, 3, 3, 5, 8, 9, 9, 10),
the function returns
[(2, 0.12), (3, 0.38), (5, 0.5), (8, 0.62), (9, 0.88), (10, 1.0)]
12% of all values are <= 2
38% of all values are <= 3
50% of all values are <= 5
...
"""
# 1. sort the values
values = sorted(values)
# 2. reduce the values to the unique ones
unique_values = sorted(list(set(values)))
# 3. for each unique value, determine how many values are <= x
cumsum_values = []
for u in unique_values:
cumsum_values.append((u, len([1 for _ in values if _ <= u])))
# 4. determine the share of values that are <= x
y = np.round([c / len(values) for t, c in cumsum_values], decimals=2)
return list(zip(unique_values, y))
def plot_ecdf(points, complementary=False, plotter=plt,
point_color='#e53935', line_color='#1e88e5',
title=None, xlabel=None, label=''):
x = np.array([p[0] for p in points])
y = np.array([p[1] for p in points])
# the complement of a cdf is CCDF(x) = 1 - CDF(x)
if complementary:
y = 1 - y # numpy array
# plot with the given point colour, otherwise let pyplot pick the colour
if point_color:
plotter.plot(x, y, color=point_color, linestyle=' ', marker='.', label=label)
else:
plotter.plot(x, y, linestyle=' ', marker='.', label=label)
# plotter is either pyplot itself or an axes object
if plotter == plt:
plotter.title(title)
plotter.xlabel(xlabel)
else:
plotter.set_title(title)
plotter.set_xlabel(xlabel)
# add some margin around the plot
x_min, x_max, y_min, y_max = plotter.axis()
plotter.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
# plot a single horizontal segment from x to x + 1
if line_color:
for i in range(len(x)):
x_0 = x[i]
x_1 = x[i + 1] if i < len(x) - 1 else x[i] + 1
plotter.plot([x_0, x_1], [y[i], y[i]], color=line_color, linestyle='-', label=label)
# number of neighbours of every node of the undirected graph
neighbor_count = [len(nx.neighbors(graph, n)) for n in graph.nodes()]
f, axes = plt.subplots(ncols=2, nrows=1, figsize=(12, 3))
ax1, ax2 = axes.ravel()
plot_ecdf(ecdf(neighbor_count),
plotter=ax1,
title='CDF über die Anzahl Nachbarn',
xlabel='Anzahl Nachbarn')
plot_ecdf(ecdf(neighbor_count),
plotter=ax2,
title='CCDF über die Anzahl Nachbarn',
xlabel='Anzahl Nachbarn',
complementary=True)
plt.show()
Explanation: Plot of the (C)CDF
End of explanation
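A quick check of the ecdf helper against the example from its docstring:
print(ecdf([2, 3, 3, 5, 8, 9, 9, 10]))
# expected (up to float formatting): [(2, 0.12), (3, 0.38), (5, 0.5), (8, 0.62), (9, 0.88), (10, 1.0)]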
print_heading('Höchste Anzahlen an Nachbarn')
print(sorted(neighbor_count)[-10:], end='\n\n')
border = 250
print_heading('Begrenzung zwischen %d' % border)
print('Knoten mit weniger als %d Nachbarn: %d' % (border, len([n for n in neighbor_count if n < border])))
print('Knoten mit mehr als %d Nachbarn: %d' % (border, len([n for n in neighbor_count if n >= border])))
# neighbour counts of every node of the undirected graph,
# limited to nodes with fewer than ``border`` neighbours
clipped_neighbor_count = [n for n in neighbor_count if n < border]
f, axes = plt.subplots(ncols=2, nrows=1, figsize=(12, 3))
ax1, ax2 = axes.ravel()
def plot_log_cdf(values, ax, title, xlabel, complementary=False):
"""Plots the given values as a CDF on a logarithmic x-axis."""
points = ecdf(values)
x_min, x_max, y_min, y_max = ax.axis()
ax.axis((x_min - 10, x_max + 10, y_min - .02, y_max + .02))
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_xscale('log')
ax.set_xlim(1e0, 1e3)
plot_ecdf(points, plotter=ax, line_color=None, complementary=complementary, title=title, xlabel=xlabel)
# CDF der Nachbarn
plot_log_cdf(clipped_neighbor_count,
ax1,
'CDF über die Anzahl Nachbarn',
'Anzahl Nachbarn')
# CCDF der Nachbarn
plot_log_cdf(clipped_neighbor_count,
ax2,
'CCDF über die Anzahl Nachbarn',
'Anzahl Nachbarn',
complementary=True)
plt.show()
Explanation: The example above shows that only a few nodes have more than roughly 250 neighbours. The 100% mark is therefore reached quickly, which distorts the range 0 < x < 250. We will look at that range more closely.
End of explanation
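A small sketch backing the claim above, counting how many nodes actually exceed the chosen border:
over = len([n for n in neighbor_count if n >= border])
print('{:d} of {:d} nodes have {:d} or more neighbours'.format(over, len(neighbor_count), border))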
def plot_log_log_cdf(values, ax, title, xlabel, complementary=False):
plot_log_cdf(values, ax, title, xlabel, complementary=complementary)
ax.set_yscale('log')
ax.set_ylim(0, 1e0)
f, axes = plt.subplots(ncols=2, nrows=1, figsize=(12, 3))
ax1, ax2 = axes.ravel()
# CDF der Nachbarn
plot_log_log_cdf(clipped_neighbor_count,
ax1,
'CDF über die Anzahl Nachbarn',
'Anzahl Nachbarn')
# CCDF der Nachbarn
plot_log_log_cdf(clipped_neighbor_count,
ax2,
'CCDF über die Anzahl Nachbarn',
'Anzahl Nachbarn',
complementary=True)
plt.show()
Explanation: Here one can see that about 80% of all nodes still have fewer than 70 neighbours. Plotting on a double-logarithmic scale does not add anything at this point.
End of explanation
def plot_multi_ecdf(values, legend, title='', xlabel=''):
"""Plots all arrays given in ``values`` into a single CDF diagram.
A legend must be supplied containing one label per array.
"""
for i in range(len(values)):
arr = values[i]
points = ecdf(arr)
plot_ecdf(points, label=legend[i], point_color=None, line_color=None)
plt.title(title)
plt.xlabel(xlabel)
# all data needed to compute
# the individual CDFs
data = {
'root_id': 42,
'counts': (800, 1600),
'crawler': {
'BFS': {
'func': crawl_bfs_iter
},
'DFS': {
'func': crawl_dfs_iter
}
}
}
# build both spanning trees
dfs_tree = nx.dfs_tree(graph, data['root_id'])
bfs_tree = nx.bfs_tree(graph, data['root_id'])
# die baeume werden ueber ihre kanten geprueft
data['crawler']['BFS']['edges'] = bfs_tree.edges()
data['crawler']['DFS']['edges'] = dfs_tree.edges()
# holds all neighbour counts together with a label that is
# shown in the legend, e.g. 'BFS 800'
neighbors = []
# fuer jede anzahl
for count in data['counts']:
# und jeden algorithmus
for _id in data['crawler'].keys():
crawler = data['crawler'][_id]
crawler[str(count)] = []
# markiere den beginn des crawlings
print('crawle {:s} mit {:5d} besuchen'.format(_id, count), end=' ')
# equivalent to e.g.
# for node in crawl_bfs_iter(bfs_tree.edges(), 42, max_visits=800):
# crawler['800'].append(len(nx.neighbors(graph, node)))
# len(nx.neighbors(graph, node)) = number of neighbours of a node
# here the neighbours of a node are counted on the undirected graph!
for node in crawler['func'](crawler['edges'], data['root_id'], max_visits=count):
crawler[str(count)].append(len(nx.neighbors(graph, node)))
print(' ..fertig')
# hinzufuegen der anzahlen an nachbarn mit entsprechendem label
# hier wird die grenze wieder verwendet
neighbors.append({
'values': [el for el in crawler[str(count)] if el < border],
'label': '%s %d' % (_id, count)})
print('crawle den gesamten graph', end=' ')
# berechnung der anzahl an nachbarn fuer den gesamten ungerichteten graph
snapshot = [len(nx.neighbors(graph, node))
for node in crawl_bfs_iter(bfs_tree.edges(),
data['root_id'],
max_visits=len(graph.nodes()))]
neighbors.append({
'values': [el for el in snapshot if el < border],
'label': 'BFS unbegrenzt'
})
print(' ..fertig')
plot_multi_ecdf([n['values'] for n in neighbors],
title='CDF der Nachbarverteilung',
xlabel='Anzahl Nachbarn',
legend=[n['label'] for n in neighbors])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: The following code could definitely be improved. It determines the individual neighbour counts via breadth-first and depth-first search with a maximum of 800 and 1600 visited nodes, as well as for the whole graph, and then plots the resulting CDFs side by side.
Comparison of the node degrees of the individual algorithms
End of explanation
import random
def random_walk(graph, root, max_visits=10):
"""Generator that crawls a graph using an iterative random walk,
yielding the id of each visited node.
:param graph: graph on which the random walk is performed
:param root: id of the start node
:param max_visits: maximum number of nodes to visit
"""
visited = 0
S = set()
# unlike the other implementations, the queue holds
# the path the random walk has taken so far
Q = []
Q.append(root)
while Q and visited < max_visits:
node = Q[-1]
# print('{:>10s}: {:d}'.format('yield', node))
yield node
visited += 1
if node not in S and visited < max_visits:
S.add(node)
# neighbours that have not been visited yet
neighbors = [n for n in nx.neighbors(graph, node) if n not in S]
# if there are no unvisited neighbours, the previously
# visited nodes have to be checked (Q[-1] is the current node)
up = -2
# as long as every neighbour of the chosen node has already been
# visited, step back up the walked path and look for
# new neighbours there
while not neighbors:
node = Q[up]
neighbors = [n for n in nx.neighbors(graph, node) if n not in S]
up -= 1
# otherwise pick a random neighbour
# and push it onto the queue
neighbor = random.choice(neighbors)
Q.append(neighbor)
node_count = (100, 200, 400, 800, 1600)
data = []
for c in node_count:
# ermittlung der anzahl an nachbarn fuer jeden dieser knoten
count = [len(nx.neighbors(graph, node)) for node in random_walk(graph, 42, c)]
data.append({
'label': c,
'neighbor_count': [i for i in count if i < border]
})
plot_multi_ecdf([n['neighbor_count'] for n in data],
title='CDF der Nachbarverteilung mit Random Walk',
xlabel='Anzahl Nachbarn',
legend=[n['label'] for n in data])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: 7.1.2 Random Walk
Here the graph is no longer traversed along its structure; instead, nodes are chosen at random and their neighbours are examined.
End of explanation
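Before running it on the full dataset, the random_walk generator defined above can be tried on a hypothetical toy graph (the exact path is random):
toy = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
print(list(random_walk(toy, 0, max_visits=4)))  # e.g. [0, 1, 3, 2]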
numbers = [1, 10, 50, 100, 200, 500]
for i in range(5):
print_heading('Test #%d' % (i + 1))
for number in numbers:
g = nx.MultiGraph()
g.add_nodes_from(graph.nodes())
g.add_edges_from(graph.edges())
for _ in range(number + 1):
node = random.choice(g.nodes())
g.remove_node(node)
print('{:3d} entfernt => {:2d} Komponenten'.format(number, nx.number_connected_components(g)))
import random
numbers = [1, 10, 50, 100, 200]
component_counts = {}
for n in numbers:
component_counts[n] = []
print('arbeite', end='')
runs, steps, step = 50, 5, 0
for i in range(runs):
for number in numbers:
H = graph.copy()
for _ in range(number + 1):
node = random.choice(H.nodes())
H.remove_node(node)
component_counts[number].append(nx.number_connected_components(H))
step += 1
if step % steps == 0:
print('..%d%%' % (step / runs * 100), end='')
print_heading('\nDurchschnittliche Anzahl an Komponenten')
for number in sorted(component_counts.keys()):
print('# {:<3d} entfernt => {:8,.2f}'.format(number, np.average(component_counts[number])))
Explanation: 7.2 Attacks on unstructured overlays
End of explanation
G = nx.Graph()
G.add_nodes_from([i for i in range(1, 13)])
G.add_edges_from([
(1, 2),
(1, 3),
(3, 4),
(3, 8),
(3, 11),
(4, 5),
(4, 6),
(4, 7),
(4, 8),
(6, 11),
(7, 8),
(7, 11),
(7, 10),
(8, 9),
(9, 10),
(9, 12),
])
# positions for all nodes
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=500, alpha=0.8)
nx.draw_networkx_edges(G, pos, width=1.0, alpha=0.5)
nx.draw_networkx_labels(G, pos)
plt.show()
Explanation: 7.3 Graph metrics
End of explanation
# clustering coefficient of individual nodes
nx.clustering(G)
Explanation: 7.3.1 Global clustering coefficient
See the NetworkX API Doc
To compute the clustering coefficient of a single node by hand, one needs the number of neighbours of that node (its degree) and the number of links among those neighbours, $N_v$.
$CC(v)$:
$v$: node
$K_v$: node degree
$N_v$: number of links among the neighbours of $v$
$CC(v) = \dfrac{2 N_v}{K_v(K_v - 1)}$
End of explanation
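The same quantity can be computed by hand for a single node and compared against NetworkX (a small sketch using the graph G defined above; node 3 is the example worked through below):
v = 3
nbrs = list(nx.neighbors(G, v))
k_v = len(nbrs)
n_v = sum(1 for a in nbrs for b in nbrs if a < b and G.has_edge(a, b))  # links among the neighbours
print(2.0 * n_v / (k_v * (k_v - 1)))  # 0.1666...
print(nx.clustering(G, v))            # should agree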
# clustering coefficient of the whole graph
nx.average_clustering(G)
Explanation: Example for node 3.
4 neighbours: [1, 4, 8, 11]
1 link among them: [(4, 8)]
=> $\dfrac{2 * 1}{4 * 3} = \dfrac{2}{12} = \dfrac{1}{6} = 0.166$
The global clustering coefficient is simply the average of all local clustering coefficients.
End of explanation
nx.betweenness_centrality(G, normalized=False)
Explanation: 7.3.2 Betweenness Centrality
See the NetworkX API Doc
To compute the betweenness centrality of a single node $v$, all shortest paths that pass through $v$ first have to be found.
Form the pairs of endpoints of shortest paths.
For each pair $(s, t)$, count the number of shortest paths $\sigma_{(s, t)}$ between them.
Divide the number of those shortest paths that pass through the chosen node, $\sigma_{(s, t|v)}$, by the number of shortest paths between the two nodes; this gives $betweenness_i$.
Steps 2 and 3 are repeated for every pair found in step 1. The sum of the individual results from step 3 is the betweenness centrality of the node, $betweenness = \sum_{i = 0}^{n} betweenness_i$, where $n$ is the number of pairs.
See also:
8-7 Betweenness centrality Part I
8-8 Betweenness centrality Part II
End of explanation
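A small sketch of a single summand $\sigma_{(s, t|v)} / \sigma_{(s, t)}$ for the pair (3, 10) and node 7, using NetworkX directly on the graph G above:
sp = list(nx.all_shortest_paths(G, 3, 10))
through_7 = sum(1 for p in sp if 7 in p)
print('{}/{}'.format(through_7, len(sp)))  # 3/4 of the shortest paths between 3 and 10 pass through 7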
from collections import OrderedDict
def print_shortest_paths(paths):
for start, end in paths:
print('%d -- %d:' % (start, end))
for path in paths[(start, end)]:
print(' %s' % path)
paths = OrderedDict()
for start in G.nodes():
for end in G.nodes():
# the graph is undirected, so each pair only needs to be
# looked at in one direction (1 -> 4 is enough, 4 -> 1 is redundant)
if start < end:
# Bilden eines Knotenpaares
pair = (start, end)
if pair not in paths:
paths[pair] = []
# hinzufuegen aller kuerzesten pfade zwischen start und end
for path in nx.all_shortest_paths(G, start, end):
paths[pair].append(path)
# ausgabe aller kuerzesten pfade im graph
print_shortest_paths(paths)
Explanation: In the following, all shortest paths in the graph are collected.
End of explanation
def betweenness_relevants(paths, target):
"""Returns the pairs (including their paths) that are relevant
for computing the betweenness centrality.
:param paths: dict of 2-tuples (start, end) mapping to the list of shortest paths
:param target: node whose betweenness centrality is to be computed
:return: filtered set of paths
"""
target_paths = paths.copy()
for pair in paths:
is_in = False
targets = [p[1:len(p) - 1] for p in paths[pair]]
for t in targets:
if target in t:
is_in = True
if not is_in:
del target_paths[pair]
return target_paths
target = 7
targets = betweenness_relevants(paths, target)
print_shortest_paths(targets)
Explanation: Example for node 7
Shortest paths through the node:
End of explanation
def calc_betweenness_i(paths, target):
"""Computes the individual summands used to determine
the betweenness centrality of a node.
:param paths: filtered set of paths relevant for the betweenness centrality
"""
betweenness_i = []
for pair in paths:
denominator = len(paths[pair])
nominator = sum([1 for p in paths[pair] if target in p])
betweenness_i.append(nominator / denominator)
return betweenness_i
betweenness_i = calc_betweenness_i(targets, target)
print(' + '.join(['{:.2f}'.format(i) for i in betweenness_i]))
print_heading('Betweenness centrality = {:.2f}'.format(sum(betweenness_i)))
Explanation: Determining the individual summands of the betweenness centrality
End of explanation
target = 8
targets = betweenness_relevants(paths, target)
print_shortest_paths(targets)
betweenness_i = calc_betweenness_i(targets, target)
print(' + '.join(['{:.2f}'.format(i) for i in betweenness_i]))
print_heading('Betweenness centrality = {:.2f}'.format(sum(betweenness_i)))
Explanation: Example for node 8
Shortest paths through the node:
End of explanation
target = 6
targets = betweenness_relevants(paths, target)
print_shortest_paths(targets)
betweenness_i = calc_betweenness_i(targets, target)
print(' + '.join(['{:.2f}'.format(i) for i in betweenness_i]))
print_heading('Betweenness centrality = {:.2f}'.format(sum(betweenness_i)))
Explanation: Example for node 6
Shortest paths through the node:
End of explanation |
10,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: So we have the same set of files in both versions
Step3: Let's make sure the structure hasn't changed
Step4: All files have the same columns as before
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
Step6: Alright, so the only change seems to be 15 new jobs added. Let's take a look (only showing interesting fields)
Step7: They mostly seem related to the digital industry, e.g. we finally have a job for John, our UX Designer. But there are also few others.
OK, let's check at the changes in items
Step8: As anticipated it is a very minor change (hard to see it visually)
Step9: Those entries look legitimate.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
Step10: So in addition to the added and remove items, there are 48 fixes. Let's have a look | Python Code:
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '332'
NEW_VERSION = '333'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Pascal, [email protected]
Date: 2017-11-08
ROME update from v332 to v333
In November 2017 a new version of the ROME was realeased. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v333. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
rome_data = [VersionedDataset(
basename=path.basename(f),
old=pd.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=pd.read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: So we have the same set of files in both versions: good start.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(same_row_count_files, len(rome_data)))
Explanation: All files have the same columns as before: still good.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 15 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: They mostly seem related to the digital industry, e.g. we finally have a job for John, our UX Designer. But there are also a few others.
OK, let's check the changes in items:
End of explanation
items.old[items.old.code_ogr.isin(obsolete_items)].tail()
items.new[items.new.code_ogr.isin(new_items)].head()
Explanation: As anticipated it is a very minor change (hard to see it visually): some items are now obsolete and new ones have been created. Let's have a look.
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: Those entries look legitimate.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
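The filtering above relies on pandas' merge indicator; a minimal illustration with hypothetical frames:
a = pd.DataFrame({'code': [1, 2]})
b = pd.DataFrame({'code': [2, 3]})
print(a.merge(b, how='outer', indicator=True))
# roughly:  code      _merge
#              1   left_only
#              2        both
#              3  right_only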
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').old.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
links_merged.dropna().head(10)
Explanation: So in addition to the added and removed items, there are 48 fixes. Let's have a look:
End of explanation |
10,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
determine_region
A notebook to determine what region (e.g. neighborhood, ward, census district) the issue is referring to
Step1: Read in neighborhood shapefiles
Step3: Now plot the shapefiles
Step4: Read in issues and determine the region
In this section I will read determine the region each issue occurs in. First to read in the issues
Step5: Remove issues that do not have correct coordinates | Python Code:
import fiona
from shapely.geometry import shape
import nhrc2
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from collections import defaultdict
import numpy as np
from matplotlib.patches import Polygon
from shapely.geometry import Point
%matplotlib inline
#the project root directory:
nhrc2dir = ('/').join(nhrc2.__file__.split('/')[:-1])+'/'
Explanation: determine_region
A notebook to determine what region (e.g. neighborhood, ward, census district) the issue is referring to
End of explanation
c = fiona.open(nhrc2dir+'data/nh_neighborhoods/nh_neighborhoods.shp')
pol = c.next()
geom = shape(pol['geometry'])
c.crs
for i in c.items():
print(i[1])
len(c)
for i in c:
pol = i
geom = shape(pol['geometry'])
geom
geom
Explanation: Read in neighborhood shapefiles:
End of explanation
#Based on code from Kelly Jordhal:
#http://nbviewer.ipython.org/github/mqlaql/geospatial-data/blob/master/Geospatial-Data-with-Python.ipynb
def plot_polygon(ax, poly):
a = np.asarray(poly.exterior)
ax.add_patch(Polygon(a, facecolor='#46959E', alpha=0.3))
ax.plot(a[:, 0], a[:, 1], color='black')
def plot_multipolygon(ax, geom):
"""Can safely call with either Polygon or Multipolygon geometry."""
if geom.type == 'Polygon':
plot_polygon(ax, geom)
elif geom.type == 'MultiPolygon':
for poly in geom.geoms:
plot_polygon(ax, poly)
nhv_geom = defaultdict()
#colors = ['red', 'green', 'orange', 'brown', 'purple']
fig, ax = plt.subplots(figsize=(12,12))
for rec in c:
#print(rec['geometry']['type'])
hood = rec['properties']['name']
nhv_geom[hood] = shape(rec['geometry'])
plot_multipolygon(ax, nhv_geom[hood])
labels = ax.get_xticklabels()
for label in labels:
label.set_rotation(90)
ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
ax.plot(scf_df.loc[0, 'lng'], scf_df.loc[0, 'lat'], 'o')
Explanation: Now plot the shapefiles
End of explanation
import nhrc2.backend.read_seeclickfix_api_to_csv as rscf
scf_cats = rscf.read_categories(readfile=True)
scf_df = rscf.read_issues(scf_cats, readfile=True)
len(scf_cats)
scf_df.head(3)
len(scf_df)
Explanation: Read in issues and determine the region
In this section I will determine the region each issue occurs in. First, to read in the issues:
End of explanation
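The region assignment below boils down to a point-in-polygon test with shapely; a minimal sketch with a hypothetical unit square:
import shapely.geometry as sgeom
square = sgeom.Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print(sgeom.Point(0.5, 0.5).within(square))  # True
print(sgeom.Point(2.0, 2.0).within(square))  # False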
scf_df = scf_df[((scf_df['lat'] < 41.36) & (scf_df['lat'] > 41.24) & (scf_df['lng']>=-73.00) & (scf_df['lng'] <= -72.86))]
print(len(scf_df))
scf_df.loc[0, 'lat']
grid_point = Point(scf_df.loc[0, 'lng'], scf_df.loc[0, 'lat'])
for idx in range(5):
grid_point = Point(scf_df.loc[idx, 'lng'], scf_df.loc[idx, 'lat'])
print('Point {} at {}'.format(idx, scf_df.loc[idx, 'address']))
print('Downtown: {}'.format(grid_point.within(nhv_geom['Downtown'])))
print('East Rock: {}'.format(grid_point.within(nhv_geom['East Rock'])))
print('Fair Haven Heights: {}'.format(grid_point.within(nhv_geom['Fair Haven Heights'])))
print('Number of neighborhoods: {}'.format(len(nhv_geom.keys())))
for hood in nhv_geom.keys():
print(hood)
def get_neighborhoods(scf_df, neighborhoods):
hoods = []
for idx in scf_df.index:
grid_point = Point(scf_df.loc[idx, 'lng'], scf_df.loc[idx, 'lat'])
for hoodnum, hood in enumerate(nhv_geom.keys()):
if grid_point.within(nhv_geom[hood]):
hoods.append(hood)
break
if hoodnum == 19:
#There are 20 neighborhoods. If you are the 20th (element 19 in
#zero-based indexing) and have not continued out of the iteration
#set the neighborhood name to "Other":
hoods.append('Other')
return hoods
%time nbrhoods = get_neighborhoods(scf_df, nhv_geom)
print(len(scf_df))
print(len(nbrhoods))
Explanation: Remove issues that do not have correct coordinates:
End of explanation |
10,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AF6UY ditDahReader Library Usage
The AF6UY ditDahReader python3 library is a morse code (CW) library with its final goal of teach the author (AF6UY) morse code by playing IRC streams in morse code. Along the way it will have other CW learning tools as well.
This file shows various use cases of ditDahReader.
Step1: Basic Tone generation
Plot the first few samples and last few samples of the tone, showing the raised cosine attack of the sine wave. The raised cosine is to help keep clicking from happening.
Step2: Raised cosine
A plot of the raised cosine looks like this.
Step3: Check that the raised cosine is the correct number of milleseconds
Step4: Play the tone. Should be at the default 600 Hz, easy to hear for us old guys and should have no clicking sound.
Step5: Morse class
Let's do some decoding of morse. We use the standard '.' for dit and '-' for dah. No human should actually read these characters. I was considering using some off the wall characters so no one tried, but thought it would be a lot harder to debug. The point is, we needed a way to understand more code with some easy characters, so why not use something readable. We use ' | Python Code:
import ditDahReader as dd
import numpy as np
import matplotlib.pyplot as plt
Explanation: AF6UY ditDahReader Library Usage
The AF6UY ditDahReader python3 library is a morse code (CW) library whose final goal is to teach the author (AF6UY) morse code by playing IRC streams in morse code. Along the way it will have other CW learning tools as well.
This file shows various use cases of ditDahReader.
End of explanation
t = dd.Tone()
t.createTone(500) # ms
plt.plot(t.tone[:500])
plt.show()
plt.plot(t.tone[-500:])
plt.show()
Explanation: Basic Tone generation
Plot the first few samples and last few samples of the tone, showing the raised cosine attack of the sine wave. The raised cosine is to help keep clicking from happening.
End of explanation
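For reference, a raised-cosine attack ramp can be sketched with numpy as follows (an assumed formulation; the exact shape and parameters used inside ditDahReader may differ):
fs = 44100                    # sample rate in Hz (assumption)
attack_ms = 5                 # ramp length in milliseconds (assumption)
n = int(fs * attack_ms / 1000.0)
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n) / float(n)))  # smooth rise from 0 to ~1
plt.plot(ramp)
plt.show()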
plt.plot(t.rc)
plt.show()
Explanation: Raised cosine
A plot of the raised cosine looks like this.
End of explanation
np.isclose(len(t.rc) / t.fs, t.attack * dd.ms)
Explanation: Check that the raised cosine is the correct number of milleseconds
End of explanation
t.play()
from scipy.fftpack import fft, fftshift
N = len(t.tone)
T = 1.0 / t.fs
yf = fftshift(fft(t.tone))
xf = np.linspace(0.0, 1.0/(2.0*T), int(N/2))
plt.plot(xf, 20*np.log(2.0/N*np.abs(yf[int(N/2):])))
plt.show()
Explanation: Play the tone. Should be at the default 600 Hz, easy to hear for us old guys and should have no clicking sound.
End of explanation
m = dd.Morse()
m.translate("W1AW de AF6UY") == ".--:.----:.-:.-- -..:. .-:..-.:-....:..-:-.--"
m.play("AF6UY")
w = m.buildPlayList("AF6UY")
plt.plot(w)
plt.show()
N = len(w)
T = 1.0 / t.fs
yf = fftshift(fft(w))
xf = np.linspace(0.0, 1.0/(2.0*T), int(N/2))
plt.plot(xf, 20*np.log(2.0/N*np.abs(yf[int(N/2):])))
plt.show()
Explanation: Morse class
Let's do some decoding of morse. We use the standard '.' for dit and '-' for dah. No human should actually read these characters. I was considering using some off the wall characters so no one tried, but thought it would be a lot harder to debug. The point is, we needed a way to represent morse code with some easy characters, so why not use something readable. We use ':' as the character break as you'll see in the example below.
End of explanation |
10,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model
Step2: Standardized and relative regression coefficients (betas)
The relative coefficients are intended to show relative contribution of different feature and their primary purpose is to indentify whether one of the features has an unproportionate effect over the final score. They are computed as standardized/(sum of absolute values of standardized coefficients).
Negative standardized coefficients are <span class="highlight_color">highlighted</span>.
Note
Step3: Here are the same values, shown graphically. | Python Code:
Markdown('Model used: **{}**'.format(model_name))
Markdown('Number of features in model: **{}**'.format(len(features_used)))
builtin_ols_models = ['LinearRegression',
'EqualWeightsLR',
'RebalancedLR',
'NNLR',
'LassoFixedLambdaThenNNLR',
'LassoFixedLambdaThenLR',
'PositiveLassoCVThenLR',
'WeightedLeastSquares']
builtin_lasso_models = ['LassoFixedLambda',
'PositiveLassoCV']
# we first just show a summary of the OLS model and the main model parameters
if model_name in builtin_ols_models:
display(Markdown('### Model summary'))
summary_file = join(output_dir, '{}_ols_summary.txt'.format(experiment_id))
with open(summary_file, 'r') as summf:
model_summary = summf.read()
print(model_summary)
display(Markdown('### Model fit'))
df_fit = DataReader.read_from_file(join(output_dir, '{}_model_fit.{}'.format(experiment_id,
file_format)))
display(HTML(df_fit.to_html(index=False,
float_format=float_format_func)))
Explanation: Model
End of explanation
markdown_str = """
**Note**: The coefficients were estimated using LASSO regression. Unlike OLS (standard) linear regression, lasso estimation is based on an optimization routine and therefore the exact estimates may differ across different systems.
"""
if model_name in builtin_lasso_models:
display(Markdown(markdown_str))
df_betas.sort_values(by='feature', inplace=True)
display(HTML(df_betas.to_html(classes=['sortable'],
index=False,
escape=False,
float_format=float_format_func,
formatters={'standardized': color_highlighter})))
Explanation: Standardized and relative regression coefficients (betas)
The relative coefficients are intended to show the relative contribution of the different features, and their primary purpose is to identify whether one of the features has a disproportionate effect on the final score. They are computed as standardized/(sum of absolute values of standardized coefficients).
Negative standardized coefficients are <span class="highlight_color">highlighted</span>.
Note: if the model contains negative coefficients, relative values will not sum up to one and their interpretation is generally questionable.
End of explanation
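A tiny sketch of the relative-coefficient definition stated above, with hypothetical values:
import pandas as pd
demo = pd.DataFrame({'feature': ['a', 'b', 'c'], 'standardized': [0.5, -0.2, 0.3]})
demo['relative'] = demo['standardized'] / demo['standardized'].abs().sum()
print(demo)  # note the relative values only sum to 1 when all coefficients are non-negative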
df_betas_sorted = df_betas.sort_values(by='standardized', ascending=False)
df_betas_sorted.reset_index(drop=True, inplace=True)
fig = plt.figure()
fig.set_size_inches(8, 3)
fig.subplots_adjust(bottom=0.5)
grey_colors = sns.color_palette('Greys', len(features_used))[::-1]
with sns.axes_style('whitegrid'):
ax1=fig.add_subplot(121)
sns.barplot(x="feature", y="standardized", data=df_betas_sorted,
order=df_betas_sorted['feature'].values,
palette=sns.color_palette("Greys", 1), ax=ax1)
ax1.set_xticklabels(df_betas_sorted['feature'].values, rotation=90)
ax1.set_title('Values of standardized coefficients')
ax1.set_xlabel('')
ax1.set_ylabel('')
# no pie chart if we have more than 15 features,
# if the feature names are long (pie chart looks ugly)
# or if there are any negative coefficients.
if len(features_used) <= 15 and longest_feature_name <= 10 and (df_betas_sorted['relative']>=0).all():
ax2=fig.add_subplot(133, aspect=True)
ax2.pie(abs(df_betas_sorted['relative'].values), colors=grey_colors,
labels=df_betas_sorted['feature'].values, normalize=True)
ax2.set_title('Proportional contribution of each feature')
else:
fig.set_size_inches(len(features_used), 3)
betas_file = join(figure_dir, '{}_betas.svg'.format(experiment_id))
plt.savefig(betas_file)
if use_thumbnails:
show_thumbnail(betas_file, next(id_generator))
else:
plt.show()
if model_name in builtin_ols_models:
display(Markdown('### Model diagnostics'))
display(Markdown("These are standard plots for model diagnostics for the main model. All information is computed based on the training set."))
# read in the OLS model file and create the diagnostics plots
if model_name in builtin_ols_models:
ols_file = join(output_dir, '{}.ols'.format(experiment_id))
model = pickle.load(open(ols_file, 'rb'))
model_predictions = model.predict()
with sns.axes_style('white'):
f, (ax1, ax2) = plt.subplots(1, 2)
f.set_size_inches((10, 4))
###
# for now, we do not show the influence plot since it can be slow to generate
###
# sm.graphics.influence_plot(model.sm_ols, criterion="cooks", size=10, ax=ax1)
# ax1.set_title('Residuals vs. Leverage', fontsize=16)
# ax1.set_xlabel('Leverage', fontsize=16)
# ax1.set_ylabel('Standardized Residuals', fontsize=16)
sm.qqplot(model.resid, stats.norm, fit=True, line='q', ax=ax1)
ax1.set_title('Normal Q-Q Plot', fontsize=16)
ax1.set_xlabel('Theoretical Quantiles', fontsize=16)
ax1.set_ylabel('Sample Quantiles', fontsize=16)
ax2.scatter(model_predictions, model.resid)
ax2.set_xlabel('Fitted values', fontsize=16)
ax2.set_ylabel('Residuals', fontsize=16)
ax2.set_title('Residuals vs. Fitted', fontsize=16)
imgfile = join(figure_dir, '{}_ols_diagnostic_plots.png'.format(experiment_id))
plt.savefig(imgfile)
if use_thumbnails:
show_thumbnail(imgfile, next(id_generator))
else:
display(Image(imgfile))
plt.close()
Explanation: Here are the same values, shown graphically.
End of explanation |
10,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an image retrieval system with deep features
Fire up GraphLab Create
Step1: Load the CIFAR-10 dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set. In this simple retrieval example, there is no notion of "testing", so we will only use the training data.
Step2: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
Step3: Train a nearest-neighbors model for retrieving images using deep features
We will now build a simple image retrieval system that finds the nearest neighbors for any image.
Step4: Use image retrieval model with deep features to find similar images
Let's find similar images to this cat picture.
Step5: We are going to create a simple function to view the nearest neighbors to save typing
Step6: Very cool results showing similar cats.
Finding similar images to a car
Step7: Just for fun, let's create a lambda to find and show nearest neighbor images | Python Code:
import graphlab
Explanation: Building an image retrieval system with deep features
Fire up GraphLab Create
End of explanation
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
Explanation: Load the CIFAR-10 dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set. In this simple retrieval example, there is no notion of "testing", so we will only use the training data.
End of explanation
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
image_train.head()
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
knn_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='cat'],features=['deep_features'],
label='id')
Explanation: Train a nearest-neighbors model for retrieving images using deep features
We will now build a simple image retrieval system that finds the nearest neighbors for any image.
End of explanation
graphlab.canvas.set_target('ipynb')
cat = image_train[18:19]
cat['image'].show()
knn_model.query(image_test[0:1])
Explanation: Use image retrieval model with deep features to find similar images
Let's find similar images to this cat picture.
End of explanation
def get_images_from_ids(query_result):
return image_train.filter_by(query_result['reference_label'],'id')
cat_neighbors = get_images_from_ids(knn_model.query(cat))
cat_neighbors['image'].show()
Explanation: We are going to create a simple function to view the nearest neighbors to save typing:
End of explanation
car = image_train[8:9]
car['image'].show()
get_images_from_ids(knn_model.query(car))['image'].show()
Explanation: Very cool results showing similar cats.
Finding similar images to a car
End of explanation
show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()
show_neighbors(8)
show_neighbors(26)
Explanation: Just for fun, let's create a lambda to find and show nearest neighbor images
End of explanation |
10,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
<img src='images/CNN_all_layers.png' height=50% width=50% />
Import the image
Step1: Define and visualize the filters
Step2: Define convolutional and pooling layers
You've seen how to define a convolutional layer, next is a
Step3: Visualize the output of each filter
First, we'll define a helper function, viz_layer that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
Step4: Let's look at the output of a convolutional layer after a ReLu activation function is applied.
ReLu activation
A ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, x.
<img src='images/relu_ex.png' height=50% width=50% />
Step5: Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size. | Python Code:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
Explanation: Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
<img src='images/CNN_all_layers.png' height=50% width=50% />
Import the image
End of explanation
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
Explanation: Define and visualize the filters
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
Explanation: Define convolutional and pooling layers
You've seen how to define a convolutional layer, next is a:
* Pooling layer
In the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, documented here, with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!
A maxpooling layer reduces the x-y size of an input and only keeps the most active pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
<img src='images/maxpooling_ex.png' height=50% width=50% />
End of explanation
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
Explanation: Visualize the output of each filter
First, we'll define a helper function, viz_layer that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
End of explanation
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
Explanation: Let's look at the output of a convolutional layer after a ReLu activation function is applied.
ReLu activation
A ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, x.
<img src='images/relu_ex.png' height=50% width=50% />
End of explanation
# visualize the output of the pooling layer
viz_layer(pooled_layer)
Explanation: Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
End of explanation |
10,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
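# A purely illustrative example of the expected call format (placeholder name and
# email, not real author details):
# DOC.set_author("Jane Doe", "jane.doe@example.org")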
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
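# Purely illustrative (not the documented model configuration): any of the valid
# choices listed above could be recorded, e.g.
# DOC.set_value("Y")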
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Get the Data
Read in the College_Data file using read_csv. Figure out how to set the first column as the index.
Step2: Check the head of the data
Step3: Check the info() and describe() methods on the data.
Step4: EDA
It's time to create some data visualizations!
Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column.
Step5: Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.
Step6: Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using sns.FacetGrid. If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist').
Step7: Create a similar histogram for the Grad.Rate column.
Step8: Notice how there seems to be a private school with a graduation rate of higher than 100%. What is the name of that school?
Step9: Set that school's graduation rate to 100 so it makes sense. You may get a warning (not an error) when doing this operation, so use dataframe operations or just re-do the histogram visualization to make sure it actually went through.
Step10: K Means Cluster Creation
Now it is time to create the Cluster labels!
Import KMeans from SciKit Learn.
Step11: Create an instance of a K Means model with 2 clusters.
Step12: Fit the model to all the data except for the Private label.
Step13: What are the cluster center vectors?
Step14: Evaluation
There is no perfect way to evaluate clustering if you don't have the labels. However, since this is just an exercise, we do have the labels, so we take advantage of this to evaluate our clusters. Keep in mind, you usually won't have this luxury in the real world.
Create a new column for df called 'Cluster', which is a 1 for a Private school, and a 0 for a public school.
Step15: Create a confusion matrix and classification report to see how well the Kmeans clustering worked without being given any labels. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
K Means Clustering Project - Solutions
For this project we will attempt to use KMeans Clustering to cluster Universities into two groups, Private and Public.
It is very important to note, we actually have the labels for this data set, but we will NOT use them for the KMeans clustering algorithm, since that is an unsupervised learning algorithm.
When using the Kmeans algorithm under normal circumstances, it is because you don't have labels. In this case we will use the labels to try to get an idea of how well the algorithm performed, but you won't usually do this for Kmeans, so the classification report and confusion matrix at the end of this project don't truly make sense in a real-world setting!
The Data
We will use a data frame with 777 observations on the following 18 variables.
* Private A factor with levels No and Yes indicating private or public university
* Apps Number of applications received
* Accept Number of applications accepted
* Enroll Number of new students enrolled
* Top10perc Pct. new students from top 10% of H.S. class
* Top25perc Pct. new students from top 25% of H.S. class
* F.Undergrad Number of fulltime undergraduates
* P.Undergrad Number of parttime undergraduates
* Outstate Out-of-state tuition
* Room.Board Room and board costs
* Books Estimated book costs
* Personal Estimated personal spending
* PhD Pct. of faculty with Ph.D.’s
* Terminal Pct. of faculty with terminal degree
* S.F.Ratio Student/faculty ratio
* perc.alumni Pct. alumni who donate
* Expend Instructional expenditure per student
* Grad.Rate Graduation rate
Import Libraries
Import the libraries you usually use for data analysis.
End of explanation
df = pd.read_csv('College_Data',index_col=0)
Explanation: Get the Data
Read in the College_Data file using read_csv. Figure out how to set the first column as the index.
End of explanation
df.head()
Explanation: Check the head of the data
End of explanation
df.info()
df.describe()
Explanation: Check the info() and describe() methods on the data.
End of explanation
sns.set_style('whitegrid')
sns.lmplot('Room.Board','Grad.Rate',data=df, hue='Private',
palette='coolwarm',size=6,aspect=1,fit_reg=False)
Explanation: EDA
It's time to create some data visualizations!
Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column.
End of explanation
sns.set_style('whitegrid')
sns.lmplot('Outstate','F.Undergrad',data=df, hue='Private',
palette='coolwarm',size=6,aspect=1,fit_reg=False)
Explanation: Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.
End of explanation
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',size=6,aspect=2)
g = g.map(plt.hist,'Outstate',bins=20,alpha=0.7)
Explanation: Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using sns.FacetGrid. If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist').
End of explanation
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',size=6,aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
Explanation: Create a similar histogram for the Grad.Rate column.
End of explanation
df[df['Grad.Rate'] > 100]
Explanation: Notice how there seems to be a private school with a graduation rate of higher than 100%. What is the name of that school?
End of explanation
df['Grad.Rate']['Cazenovia College'] = 100
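# Added note: chained indexing like the line above may raise a SettingWithCopyWarning;
# an equivalent, warning-free form would be: df.loc['Cazenovia College', 'Grad.Rate'] = 100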
df[df['Grad.Rate'] > 100]
sns.set_style('darkgrid')
g = sns.FacetGrid(df,hue="Private",palette='coolwarm',size=6,aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
Explanation: Set that school's graduation rate to 100 so it makes sense. You may get a warning (not an error) when doing this operation, so use dataframe operations or just re-do the histogram visualization to make sure it actually went through.
End of explanation
from sklearn.cluster import KMeans
Explanation: K Means Cluster Creation
Now it is time to create the Cluster labels!
Import KMeans from SciKit Learn.
End of explanation
kmeans = KMeans(n_clusters=2)
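# (Optionally pass random_state=... to KMeans for reproducible cluster assignments.)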
Explanation: Create an instance of a K Means model with 2 clusters.
End of explanation
kmeans.fit(df.drop('Private',axis=1))
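# Added note: the columns are on very different scales (e.g. F.Undergrad vs. S.F.Ratio),
# so in practice you would often standardize them first (e.g. with sklearn's StandardScaler);
# here we follow the original exercise and fit on the raw values.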
Explanation: Fit the model to all the data except for the Private label.
End of explanation
kmeans.cluster_centers_
Explanation: What are the cluster center vectors?
End of explanation
def converter(cluster):
if cluster=='Yes':
return 1
else:
return 0
df['Cluster'] = df['Private'].apply(converter)
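# Equivalent vectorized alternative (added note, produces the same column):
# df['Cluster'] = (df['Private'] == 'Yes').astype(int)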
df.head()
Explanation: Evaluation
There is no perfect way to evaluate clustering if you don't have the labels. However, since this is just an exercise, we do have the labels, so we take advantage of this to evaluate our clusters. Keep in mind, you usually won't have this luxury in the real world.
Create a new column for df called 'Cluster', which is a 1 for a Private school, and a 0 for a public school.
End of explanation
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(df['Cluster'],kmeans.labels_))
print(classification_report(df['Cluster'],kmeans.labels_))
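# Added note: KMeans assigns arbitrary cluster ids, so the 0/1 labels may be flipped
# relative to the 'Cluster' column; if the scores look inverted, compare against the
# flipped labels instead: print(confusion_matrix(df['Cluster'], 1 - kmeans.labels_))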
Explanation: Create a confusion matrix and classification report to see how well the Kmeans clustering worked without being given any labels.
End of explanation |
10,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Circular Regression
Step1: Directional statistics, also known as circular statistics or spherical statistics, refers to a branch of statistics dealing with data whose domain is the unit circle, as opposed to "linear" data whose support is the real line. Circular data is convenient when dealing with directions or rotations. Some examples include temporal periods like hours or days, compass directions, dihedral angles in biomolecules, etc.
The fact that a Sunday can be both the day before and the day after a Monday, or that 0 is a "better average" of 2 and 358 degrees than 180, illustrates that circular data and circular statistical methods are better equipped to deal with this kind of problem than the more familiar methods [1].
There are a few circular distributions; one of them is the VonMises distribution, which we can think of as the cousin of the Gaussian that lives in circular space. The domain of this distribution is any interval of length $2\pi$. We are going to adopt the convention that the interval goes from $-\pi$ to $\pi$, so for example 0 radians is the same as $2\pi$. The VonMises is defined using two parameters, the mean $\mu$ (the circular mean) and the concentration $\kappa$, with $\frac{1}{\kappa}$ being the analogue of the variance. Let's see a few examples of the VonMises family
Step2: When doing linear regression a commonly used link function is $2 \arctan(u)$; this ensures that values over the real line are mapped into the interval $[-\pi, \pi]$
Step3: Bambi supports circular regression with the VonMises family. To exemplify this, we are going to use a dataset from the following experiment: 31 periwinkles (a kind of sea snail) were removed from their original place and released down the shore. Our task is then to model the direction of motion as a function of the distance travelled after being released.
Step4: Just to compare results, we are going to use the VonMises family and the normal (default) family.
Step5: We can see that there is a negative relationship between distance and direction. This could be explained by periwinkles travelling towards the sea covering shorter distances than those travelling in directions away from it. From a biological perspective, this could have been due to a propensity of the periwinkles to stop moving once they are close to the sea.
We can also see that if we had inadvertently assumed a normal response, we would have obtained a fit with higher uncertainty and, more importantly, the wrong sign for the relationship.
As a last step for this example we are going to do a posterior predictive check. In the figure below we have two panels showing the same data; the only difference is that the panel on the right uses a polar projection, and its KDEs are computed taking into account the circularity of the data.
We can see that our model fails to capture the bimodality in the data (with modes around 1.6 and $\pm \pi$), and hence the predicted distribution is wider, with a mean closer to $\pm \pi$. | Python Code:
import arviz as az
import bambi as bmb
from matplotlib.lines import Line2D
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
az.style.use("arviz-white")
Explanation: Circular Regression
End of explanation
x = np.linspace(-np.pi, np.pi, 200)
mus = [0., 0., 0., -2.5]
kappas = [.001, 0.5, 3, 0.5]
for mu, kappa in zip(mus, kappas):
pdf = stats.vonmises.pdf(x, kappa, loc=mu)
plt.plot(x, pdf, label=r'$\mu$ = {}, $\kappa$ = {}'.format(mu, kappa))
plt.yticks([])
plt.legend(loc=1);
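# Added illustration (not part of the original notebook): the "circular mean" of 2 and 358
# degrees is ~0, computed from the mean sine and cosine rather than the arithmetic mean (180).
angles = np.radians([2, 358])
print(np.degrees(np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())))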
Explanation: Directional statistics, also known as circular statistics or spherical statistics, refers to a branch of statistics dealing with data whose domain is the unit circle, as opposed to "linear" data whose support is the real line. Circular data is convenient when dealing with directions or rotations. Some examples include temporal periods like hours or days, compass directions, dihedral angles in biomolecules, etc.
The fact that a Sunday can be both the day before and the day after a Monday, or that 0 is a "better average" of 2 and 358 degrees than 180, illustrates that circular data and circular statistical methods are better equipped to deal with this kind of problem than the more familiar methods [1].
There are a few circular distributions; one of them is the VonMises distribution, which we can think of as the cousin of the Gaussian that lives in circular space. The domain of this distribution is any interval of length $2\pi$. We are going to adopt the convention that the interval goes from $-\pi$ to $\pi$, so for example 0 radians is the same as $2\pi$. The VonMises is defined using two parameters, the mean $\mu$ (the circular mean) and the concentration $\kappa$, with $\frac{1}{\kappa}$ being the analogue of the variance. Let's see a few examples of the VonMises family:
End of explanation
u = np.linspace(-12, 12, 200)
plt.plot(u, 2*np.arctan(u))
plt.xlabel("Reals")
plt.ylabel("Radians");
Explanation: When doing linear regression a commonly used link function is $2 \arctan(u)$; this ensures that values over the real line are mapped into the interval $[-\pi, \pi]$
End of explanation
data = bmb.load_data("periwinkles")
data.head()
Explanation: Bambi supports circular regression with the VonMises family. To exemplify this, we are going to use a dataset from the following experiment: 31 periwinkles (a kind of sea snail) were removed from their original place and released down the shore. Our task is then to model the direction of motion as a function of the distance travelled after being released.
End of explanation
model_vm = bmb.Model("direction ~ distance", data, family="vonmises")
idata_vm = model_vm.fit(include_mean=True)
model_n = bmb.Model("direction ~ distance", data)
idata_n = model_n.fit(include_mean=True)
az.summary(idata_vm, var_names=["~direction_mean"])
_, ax = plt.subplots(1,2, figsize=(8, 4), sharey=True)
posterior_mean = bmb.families.link.tan_2(idata_vm.posterior["direction_mean"])
ax[0].plot(data.distance, posterior_mean.mean(["chain", "draw"]))
az.plot_hdi(data.distance, posterior_mean, ax=ax[0])
ax[0].plot(data.distance, data.direction, "k.")
ax[0].set_xlabel("Distance travelled (in m)")
ax[0].set_ylabel("Direction of travel (radians)")
ax[0].set_title("VonMises Family")
posterior_mean = idata_n.posterior["direction_mean"]
ax[1].plot(data.distance, posterior_mean.mean(["chain", "draw"]))
az.plot_hdi(data.distance, posterior_mean, ax=ax[1])
ax[1].plot(data.distance, data.direction, "k.")
ax[1].set_xlabel("Distance travelled (in m)")
ax[1].set_title("Normal Family");
Explanation: Just to compare results, we are going to use the VonMises family and the normal (default) family.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax0 = plt.subplot(121)
ax1 = plt.subplot(122, projection='polar')
model_vm.predict(idata_vm, kind="pps")
pp_samples = idata_vm.posterior_predictive["direction"].stack(samples=("chain", "draw")).T[::50]
colors = ["C0" , "k", "C1"]
for ax, circ in zip((ax0, ax1), (False, "radians", colors)):
for s in pp_samples:
az.plot_kde(s.values, plot_kwargs={"color":colors[0], "alpha": 0.25}, is_circular=circ, ax=ax)
az.plot_kde(idata_vm.observed_data["direction"].values,
plot_kwargs={"color":colors[1], "lw":3}, is_circular=circ, ax=ax)
az.plot_kde(idata_vm.posterior_predictive["direction"].values,
plot_kwargs={"color":colors[2], "ls":"--", "lw":3}, is_circular=circ, ax=ax)
custom_lines = [Line2D([0], [0], color=c) for c in colors]
ax0.legend(custom_lines, ["posterior_predictive", "Observed", 'mean posterior predictive'])
ax0.set_yticks([])
fig.suptitle("Directions (radians)", fontsize=18);
Explanation: We can see that there is a negative relationship between distance and direction. This could be explained as periwinkles travelling in a direction towards the sea having travelled shorter distances than those travelling in directions away from it. From a biological perspective, this could have been due to a propensity of the periwinkles to stop moving once they are close to the sea.
We can also see that if inadvertently we had assumed a normal response we would have obtained a fit with higher uncertainty and more importantly the wrong sign for the relationship.
As a last step for this example we are going to do a posterior predictive check. In the figure below we have two panels showing the same data; the only difference is that the one on the right uses a polar projection, and its KDEs are computed taking the circularity of the data into account.
We can see that our modeling is failing at capturing the bimodality in the data (with mode around 1.6 and $\pm \pi$) and hence the predicted distribution is wider and with a mean closer to $\pm \pi$.
End of explanation |
10,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
This tutorial is meant to get you started with writing your tests and tuning scripts using Kernel Tuner. We'll use a simple 2D Convolution kernel as an example kernel, but as you will find out shortly, much of the scripts that you write with Kernel Tuner can be reused for testing and tuning other kernels.
<div class="alert alert-info">
**Note
Step1: Implement a test
We will start with using Kernel Tuner's run_kernel function to call our naive 2D convolution kernel. But first we will have to create some input data, which we will do as follows
Step2: Now that we have our input and output data structures created, we can look at how to run our naive kernel on this data, by calling run_kernel. The run_kernel function has the following signature
Step3: The problem_size is what is used by Kernel Tuner to determine the grid dimensions of the kernel.
Our naive kernel needs one thread for each pixel in the output image. As defined above, our output_size is $4096 \times 4096$.
Kernel Tuner computes the grid dimensions of a kernel by dividing the problem_size in each dimension with the
grid divisors in that dimension. The grid divisors are, by default, simply the thread block dimensions. So for our naive kernel we do not need to specify any grid divisor lists at this point.
Step4: The arguments is a list of arguments that are used to run the kernel on the GPU. arguments should be specified as a list of Numpy objects (arrays and/or scalars) that correspond with the function arguments of our CUDA kernel. Our naive convolution kernel has the following signature
Step5: The final required argument is params, which is a dictionary with the user-defined parameters of the kernel. Remember that the user is you! You can specify anything here and Kernel Tuner will insert a C-preprocessor #define statement into the kernel with the value that you specify. For example, if you were to create a dictionary like so
Step6: Finally, we specify some input dimensions that are required by our kernel. As you may have noticed, the kernel uses currently undefined constants like image_height, image_width, filter_height, and filter_width. We also insert those values using the parameters feature of Kernel Tuner. Note that this is not required; we could also have specified these at runtime as arguments to the kernel.
Step7: Now we have set up everything that should allow us to call run_kernel
Step8: If you execute the above cell it will allocate GPU memory, move the contents of the arguments list to GPU memory, compile the kernel specified in kernel_source, and run the kernel named kernel_name with the thread block dimensions specified in params and the grid dimensions derived from the problem_size. After executing the kernel, run_kernel will also retrieve the results from GPU memory, and free GPU memory. The run_kernel function returns the data retrieved from the GPU in a list of Numpy arrays that we have named answer in the above example.
The answer list contains Numpy objects (arrays and/or scalars) in the same order and of the same type as the arguments list that we used to call the kernel with, but in contrast to arguments it contains the data that was stored in GPU memory after our naive convolution kernel had finished executing. This feature is particularly useful for implementing tests for your GPU kernels. You can perform the same operation in Python and compare the results.
Tuning 2D Convolution
In many cases there are more tunable parameters than just the thread block dimensions. We have included a highly-optimized 2D Convolution kernel that contains many parametrized code optimizations. It's a bit long to include here, so instead we just point to the file, you may need to adjust the path a little bit depending on where you've stored the Kernel Tuner's source code and where this notebook is executing.
Step9: Tuning a kernel with Kernel Tuner is done using the tune_kernel function. The interface should look familiar, because it's exactly like run_kernel
Step10: Let's just try that out and see what happens
Step11: As you can see, Kernel Tuner takes the Cartesian product of all lists in tune_params and benchmarks a kernel for each possible combination of values for all the tunable parameters. For such a small set of combinations benchmarking all of them is not really a problem. However, if there are a lot of tunable parameters with many different options this can get problematic. Therefore, Kernel Tuner supports many different optimization strategies; how to use these is explained in the API documentation of tune_kernel.
Some combinations of values are illegal and will be skipped automatically. For example, thread block dimensions of $128 \times 16 = 2048$ threads exceed the limit of 1024 threads per block on current CUDA devices. Configurations that fail for other, to-be-expected, reasons like using too much shared memory, or requiring more registers than available on the device, will also be skipped silently by Kernel Tuner, unless you specify "verbose=True" as an optional argument to tune_kernel. Note that other errors, like an out-of-bounds memory access, will not be ignored.
The tune_kernel function returns two things. The first is the results, which is a list of records that show the execution time of each benchmarked kernel and the parameters used to compile and run that specific kernel configuration. Secondly, tune_kernel returns a dictionary that describes the environment in which the tuning experiment took place. That means all the inputs to tune_kernel are recorded, but also the software versions of your CUDA installation, OS and so on, along with GPU device information. This second dictionary can be stored along with the results so that you can always find out under what circumstances those results were obtained.
More tunable parameters
I promised that we would use more tunable parameters than just thread block dimensions. Our 2D Convolution kernel also supports tiling factors in the x and y dimensions. Tiling factors indicate that the amount of work performed by each thread block in a particular dimension is increased by a certain factor.
Step12: It's important to understand that if we increase the amount of work that is performed by every thread block, we also need fewer thread blocks, because the total amount of work stays the same. Remember that the Kernel Tuner computes the grid dimensions (the number of thread blocks the kernel is executed with) from the problem_size and the thread block dimensions.
So now we need to tell Kernel Tuner that we have a tunable parameter that influences the way that the grid dimensions are computed, for this we have the grid divisor lists. You may have noticed that we already have a tunable parameter that influences the grid dimensions, namely the thread block dimensions that we call "block_size_x" and "block_size_y". We did not yet need to specify any grid divisor lists because Kernel Tuner is dividing the problem size by the thread block dimensions by default. However, if we are going to use grid divisor lists we need to specify all tunable parameters that divide the problem size in a certain dimension to obtain the grid size in that dimension.
So to mimic the default behavior that we have been assuming so far we would need to specify
Step13: Now we should add the tiling factors to the grid divisor lists because, as the tiling factor is increased, the number of thread blocks in that dimension should be decreased correspondingly.
Step14: Before we continue with calling tune_kernel we'll show how to make Kernel Tuner display the performance of our kernel using the commonly used performance metric GFLOP/s (giga floating-point operations per second). We can specify how Kernel Tuner should compute user-defined metrics by using the metrics option. Metrics should be specified using an ordered dictionary, because metrics are composable. We can define metrics as lambda functions that take one argument, a dictionary with the tunable parameters and benchmark results of the kernel configuration.
Step15: Now we are ready to call tune_kernel again with our expanded search space. Note that this may take a bit longer since we have just increased our parameter space with a factor of 9. | Python Code:
%%writefile convolution_naive.cu
__global__ void convolution_kernel(float *output, float *input, float *filter) {
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
int i, j;
float sum = 0.0;
if (y < image_height && x < image_width) {
for (j = 0; j < filter_height; j++) {
for (i = 0; i < filter_width; i++) {
sum += input[(y + j) * input_width + (x + i)] * filter[j * filter_width + i];
}
}
output[y * image_width + x] = sum;
}
}
Explanation: Getting Started
This tutorial is meant to get you started with writing your tests and tuning scripts using Kernel Tuner. We'll use a simple 2D Convolution kernel as an example kernel, but as you will find out shortly, much of the scripts that you write with Kernel Tuner can be reused for testing and tuning other kernels.
<div class="alert alert-info">
**Note:** If you are reading this tutorial on the Kernel Tuner's documentation pages, note that you can actually run this tutorial as a Jupyter Notebook. Just clone the Kernel Tuner's [GitHub repository](http://github.com/benvanwerkhoven/kernel_tuner). Install using *pip install .[tutorial,cuda]* and you're ready to go! You can start the tutorial by typing "jupyter notebook" in the "kernel_tuner/tutorial" directory.
</div>
2D Convolution example
Convolution operations are essential to signal and image processing
applications and are the main operation in convolutional neural networks used for
deep learning.
A convolution operation computes the linear combination of the
weights in a convolution filter and a range of pixels from the input
image for each output pixel. A 2D convolution of an input image $I$ of size
$(w\times h)$ and a convolution filter $F$ of size $(F_w\times F_h)$ computes
an output image $O$ of size $((w-F_w)\times (h-F_h))$:
\begin{equation}\nonumber
O(x,y) = \sum\limits_{j=0}^{F_h} \sum\limits_{i=0}^{F_w} I(x+i,y+j)\times F(i,j)
\end{equation}
A naive CUDA kernel for 2D Convolution parallelizes the operation by creating one thread for each output pixel. Note that to avoid confusion around the term kernel, we refer to the convolution filter as a
filter.
The kernel code is shown in the following code block, make sure you execute all code blocks in this tutorial by selecting them and pressing shift+enter:
End of explanation
import numpy as np
from kernel_tuner import run_kernel
filter_size = (17, 17)
output_size = (4096, 4096)
size = np.prod(output_size)
border_size = (filter_size[0]//2*2, filter_size[1]//2*2)
input_size = ((output_size[0]+border_size[0]) * (output_size[1]+border_size[1]))
output_image = np.zeros(size).astype(np.float32)
input_image = np.random.randn(input_size).astype(np.float32)
conv_filter = np.random.randn(filter_size[0]*filter_size[1]).astype(np.float32)
Explanation: Implement a test
We will start with using Kernel Tuner's run_kernel function to call our naive 2D convolution kernel. But first we will have to create some input data, which we will do as follows:
End of explanation
kernel_name = "convolution_kernel"
kernel_source = "convolution_naive.cu"
Explanation: Now that we have our input and output data structures created, we can look at how to run our naive kernel on this data, by calling run_kernel. The run_kernel function has the following signature:
run_kernel(kernel_name, kernel_source, problem_size, arguments, params, ...)
The ellipsis here indicate that there are many more optional arguments, which we won't need right now. If you're interested, the complete API documentation of run_kernel can be found here.
The five required arguments of run_kernel are:
* kernel_name name of the kernel as a string
* kernel_source string filename, or one or more strings with code or a code generator function
* problem_size the size of the domain in up to three dimensions
* arguments a list of arguments used to call the kernel
* params a dictionary with the tunable parameters
The kernel_name is simply a string with the name of the kernel
in the code. The kernel_source can be a string containing the code, or a filename. The first cell in this notebook wrote the kernel code to a file named "convolution_naive.cu".
End of explanation
problem_size = output_size
Explanation: The problem_size is what is used by Kernel Tuner to determine the grid dimensions of the kernel.
Our naive kernel needs one thread for each pixel in the output image. As defined above, our output_size is $4096 \times 4096$.
Kernel Tuner computes the grid dimensions of a kernel by dividing the problem_size in each dimension with the
grid divisors in that dimension. The grid divisors are, by default, simply the thread block dimensions. So for our naive kernel we do not need to specify any grid divisor lists at this point.
End of explanation
arguments = [output_image, input_image, conv_filter]
Explanation: The arguments is a list of arguments that are used to run the kernel on the GPU. arguments should be specified as a list of Numpy objects (arrays and/or scalars) that correspond with the function arguments of our CUDA kernel. Our naive convolution kernel has the following signature:
__global__ void convolution_kernel(float *output, float *input, float *filter) { }
Therefore, our list of Numpy objects should contain the output image, the input image, and the convolution filter, and exactly in that order, matching the type (32-bit floating-point arrays) and dimensions that are expected by the kernel.
End of explanation
params = dict()
params["block_size_x"] = 16
params["block_size_y"] = 16
Explanation: The final required argument is params, which is a dictionary with the user-defined parameters of the kernel. Remember that the user is you! You can specify anything here and Kernel Tuner will insert a C-preprocessor #define statement into the kernel with the value that you specify. For example, if you were to create a dictionary like so:
params = {"I_like_convolutions": 42}
Kernel Tuner will insert the following line into our naive convolution kernel:
#define I_like_convolutions 42
While we do like convolutions, this definition won't have much effect on the performance of our kernel. Unless of course somewhere in our kernel we are doing something differently depending on the definition or the value of this preprocessor token.
In addition to freely defined parameters, there are a few special values. You may have noticed that we are about to call a CUDA kernel but we haven't specified any thread block dimensions yet. When using Kernel Tuner, thread block dimensions are basically just parameters to the kernel. Therefore, the parameters with the names "block_size_x", "block_size_y", and "block_size_z" will be interpreted by Kernel Tuner as the thread block dimensions in x,y, and z.
Note that these are just the defaults, if you prefer to name your thread block dimensions differently, please use the block_size_names= option.
Let's continue with the creation of our params dictionary such that we can run our naive convolution kernel. As thread block dimensions we will just select the trusty old $16 \times 16$:
End of explanation
params["image_height"] = output_size[1]
params["image_width"] = output_size[0]
params["filter_height"] = filter_size[1]
params["filter_width"] = filter_size[0]
params["input_width"] = output_size[0] + border_size[0]
Explanation: Finally, we specify some input dimensions that are required by our kernel. As you may have noticed, the kernel uses currently undefined constants like image_height, image_width, filter_height, and filter_width. We also insert those values using the parameters feature of Kernel Tuner. Note that this is not required; we could also have specified these at runtime as arguments to the kernel.
End of explanation
answer = run_kernel(kernel_name, kernel_source, problem_size, arguments, params)
print("Done")
Explanation: Now we have set up everything that should allow us to call run_kernel:
End of explanation
filename = "../examples/cuda/convolution.cu"
Explanation: If you execute the above cell it will allocate GPU memory, move the contents of the arguments list to GPU memory, compile the kernel specified in kernel_source, and run the kernel named kernel_name with the thread block dimensions specified in params and the grid dimensions derived from the problem_size. After executing the kernel, run_kernel will also retrieve the results from GPU memory, and free GPU memory. The run_kernel function returns the data retrieved from the GPU in a list of Numpy arrays that we have named answer in the above example.
The answer list contains Numpy objects (arrays and/or scalars) in the same order and of the same type as the arguments list that we used to call the kernel with, but in contrast to arguments it contains the data that was stored in GPU memory after our naive convolution kernel had finished executing. This feature is particularly useful for implementing tests for your GPU kernels. You can perform the same operation in Python and compare the results.
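As a sketch of what such a check could look like (an assumption added here, not part of the original tutorial; it recomputes the same correlation on the CPU with SciPy and compares against the GPU output):
# CPU reference check (sketch): flip the filter so fftconvolve computes the correlation the kernel performs
from scipy import signal
input_2d = input_image.reshape(output_size[0] + border_size[0], output_size[1] + border_size[1])
cpu_reference = signal.fftconvolve(input_2d, conv_filter.reshape(filter_size)[::-1, ::-1], mode='valid')
print(np.allclose(answer[0].reshape(output_size), cpu_reference, atol=1e-2))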
Tuning 2D Convolution
In many cases there are more tunable parameters than just the thread block dimensions. We have included a highly-optimized 2D Convolution kernel that contains many parametrized code optimizations. It's a bit long to include here, so instead we just point to the file, you may need to adjust the path a little bit depending on where you've stored the Kernel Tuner's source code and where this notebook is executing.
End of explanation
tune_params = dict()
tune_params["block_size_x"] = [16, 32, 64, 128]
tune_params["block_size_y"] = [8, 16]
Explanation: Tuning a kernel with Kernel Tuner is done using the tune_kernel function. The interface should look familiar, because it's exactly like run_kernel:
tune_kernel(kernel_name, kernel_string, problem_size, arguments, tune_params, ...)
The only difference is that the params dictionary is replaced by a tune_params dictionary that works similarly, but instead of a single value per parameter tune_params should contain a list of possible values for that parameter.
Again, the ellipsis indicate that there are many more optional arguments, but we won't need those right now. If you're interested, the complete API documentation of tune_kernel can be found here.
We could create a tune_params dictionary in the following way:
End of explanation
from kernel_tuner import tune_kernel
results, env = tune_kernel(kernel_name, filename, problem_size, arguments, tune_params)
Explanation: Let's just try that out and see what happens:
End of explanation
tune_params["tile_size_x"] = [1, 2, 4]
tune_params["tile_size_y"] = [1, 2, 4]
Explanation: As you can see, Kernel Tuner takes the Cartesian product of all lists in tune_params and benchmarks a kernel for each possible combination of values for all the tunable parameters. For such a small set of combinations benchmarking all of them is not really a problem. However, if there are a lot of tunable parameters with many different options this can get problematic. Therefore, Kernel Tuner supports many different optimization strategies; how to use these is explained in the API documentation of tune_kernel.
Some combinations of values are illegal and will be skipped automatically. For example, thread block dimensions of $128 \times 16 = 2048$ threads exceed the limit of 1024 threads per block on current CUDA devices. Configurations that fail for other, to-be-expected, reasons like using too much shared memory, or requiring more registers than available on the device, will also be skipped silently by Kernel Tuner, unless you specify "verbose=True" as an optional argument to tune_kernel. Note that other errors, like an out-of-bounds memory access, will not be ignored.
The tune_kernel function returns two things. The first is the results, which is a list of records that show the execution time of each benchmarked kernel and the parameters used to compile and run that specific kernel configuration. Secondly, tune_kernel returns a dictionary that describes the environment in which the tuning experiment took place. That means all the inputs to tune_kernel are recorded, but also the software versions of your CUDA installation, OS and so on, along with GPU device information. This second dictionary can be stored along with the results so that you can always find out under what circumstances those results were obtained.
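For example, pulling out the fastest configuration could look like this (a sketch, assuming each record in results is a dictionary that includes a 'time' entry in milliseconds):
# find the benchmarked configuration with the lowest execution time
best = min(results, key=lambda record: record["time"])
print("fastest configuration:", best)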
More tunable parameters
I promised that we would use more tunable parameters than just thread block dimensions. Our 2D Convolution kernel also supports tiling factors in the x and y dimensions. Tiling factors indicate that the amount of work performed by each thread block in a particular dimension is increased by a certain factor.
End of explanation
grid_div_x = ["block_size_x"]
grid_div_y = ["block_size_y"]
Explanation: It's important to understand that if we increase the amount of work that is performed by every thread block, we also need fewer thread blocks, because the total amount of work stays the same. Remember that the Kernel Tuner computes the grid dimensions (the number of thread blocks the kernel is executed with) from the problem_size and the thread block dimensions.
So now we need to tell Kernel Tuner that we have a tunable parameter that influences the way that the grid dimensions are computed, for this we have the grid divisor lists. You may have noticed that we already have a tunable parameter that influences the grid dimensions, namely the thread block dimensions that we call "block_size_x" and "block_size_y". We did not yet need to specify any grid divisor lists because Kernel Tuner is dividing the problem size by the thread block dimensions by default. However, if we are going to use grid divisor lists we need to specify all tunable parameters that divide the problem size in a certain dimension to obtain the grid size in that dimension.
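To make the arithmetic concrete, here is a small illustrative sketch (not Kernel Tuner internals) of how a grid dimension follows from a grid divisor list:
def grid_dim(problem_dim, divisor_names, config):
    # divide the problem size by the product of all divisors for this dimension
    divisor = 1
    for name in divisor_names:
        divisor *= config[name]
    return int(np.ceil(problem_dim / divisor))
print(grid_dim(4096, ["block_size_x", "tile_size_x"], {"block_size_x": 32, "tile_size_x": 2}))  # 64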
So to mimic the default behavior that we have been assuming so far we would need to specify:
End of explanation
grid_div_x = ["block_size_x", "tile_size_x"]
grid_div_y = ["block_size_y", "tile_size_y"]
Explanation: Now we should add the tiling factors to the grid divisor lists because, as the tiling factor is increased, the number of thread blocks in that dimension should be decreased correspondingly.
End of explanation
from collections import OrderedDict
metrics = OrderedDict()
metrics["GFLOP/s"] = lambda p : np.prod((2,)+output_size+filter_size)/1e9 / (p["time"]/1e3)
Explanation: Before we continue with calling tune_kernel we'll show how to make Kernel Tuner display the performance of our kernel using the commonly used performance metric GFLOP/s (giga floating-point operations per second). We can specify how Kernel Tuner should compute user-defined metrics by using the metrics option. Metrics should be specified using an ordered dictionary, because metrics are composable. We can define metrics as lambda functions that take one argument, a dictionary with the tunable parameters and benchmark results of the kernel configuration.
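The operation count behind this metric can be checked by hand (a small sketch; the factor 2 counts one multiply and one add per filter weight per output pixel):
flops_per_launch = 2 * output_size[0] * output_size[1] * filter_size[0] * filter_size[1]
print(flops_per_launch / 1e9)  # roughly 9.7 GFLOP per kernel launch for these sizes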
End of explanation
results, env = tune_kernel(kernel_name, filename, problem_size, arguments, tune_params,
grid_div_x=grid_div_x, grid_div_y=grid_div_y, metrics=metrics)
Explanation: Now we are ready to call tune_kernel again with our expanded search space. Note that this may take a bit longer since we have just increased our parameter space with a factor of 9.
End of explanation |
10,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
%run ../linked_list/linked_list.py
%load ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def is_palindrome(self):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Determine if a linked list is a palindrome.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Is a single character or number a palindrome?
No
Can we assume we already have a linked list class that can be used for this problem?
Yes
Test Cases
Empty list -> False
Single element list -> False
Two or more element list, not a palindrome -> False
General case: Palindrome with even length -> True
General case: Palindrome with odd length -> True
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_palindrome.py
from nose.tools import assert_equal
class TestPalindrome(object):
def test_palindrome(self):
print('Test: Empty list')
linked_list = MyLinkedList()
assert_equal(linked_list.is_palindrome(), False)
print('Test: Single element list')
head = Node(1)
linked_list = MyLinkedList(head)
assert_equal(linked_list.is_palindrome(), False)
print('Test: Two element list, not a palindrome')
linked_list.append(2)
assert_equal(linked_list.is_palindrome(), False)
print('Test: General case: Palindrome with even length')
head = Node('a')
linked_list = MyLinkedList(head)
linked_list.append('b')
linked_list.append('b')
linked_list.append('a')
assert_equal(linked_list.is_palindrome(), True)
print('Test: General case: Palindrome with odd length')
head = Node(1)
linked_list = MyLinkedList(head)
linked_list.append(2)
linked_list.append(3)
linked_list.append(2)
linked_list.append(1)
assert_equal(linked_list.is_palindrome(), True)
print('Success: test_palindrome')
def main():
test = TestPalindrome()
test.test_palindrome()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
10,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
using pcolormesh to plot an x-z cross section of the cloudsat radar reflectivity
This notebook shows how to read in the reflectivity, convert it to dbZe (dbZ equivalent,
which means the dbZ that the measured reflectivity would have if the cloud were made
of liquid water drops)
1. Read in the height and reflectivity fields
Step1: 2. Make a masked array of the reflectivity so that pcolormesh will plot it
note that I need to find the missing data before I divide by factor=100 to
convert from int16 to float
Step2: 3. Find the part of the orbit that corresponds to the 3 minutes containing the storm
You need to enter the start_hour and start_minute for the start time of your cyclone in the granule
Step3: 4. convert time to distance by using pyproj to get the great-circle distance between shots
Step4: 5. Make the plot assuming that height is the same for every shot
i.e. assume that height[0, | Python Code:
import h5py
import numpy as np
import datetime as dt
from datetime import timezone as tz
from matplotlib import pyplot as plt
import pyproj
from numpy import ma
from a301utils.a301_readfile import download
from a301lib.cloudsat import get_geo
z_file='2008082060027_10105_CS_2B-GEOPROF_GRANULE_P_R04_E02.h5'
download(z_file)
meters2km=1.e3
lats,lons,date_times,prof_times,dem_elevation=get_geo(z_file)
with h5py.File(z_file,'r') as zin:
zvals=zin['2B-GEOPROF']['Data Fields']['Radar_Reflectivity'][...]
factor=zin['2B-GEOPROF']['Data Fields']['Radar_Reflectivity'].attrs['factor']
missing=zin['2B-GEOPROF']['Data Fields']['Radar_Reflectivity'].attrs['missing']
height=zin['2B-GEOPROF']['Geolocation Fields']['Height'][...]
Explanation: using pcolormesh to plot an x-z cross section of the cloudsat radar reflectivity
This notebook shows how to read in the reflectivity, convert it to dbZe (dbZ equivalent,
which means the dbZ that the measured reflectivity would have if the cloud were made
of liquid water drops)
1. Read in the height and reflectivity fields
End of explanation
hit=(zvals == missing)
zvals = zvals/factor
zvals[hit]=np.nan
zvals=ma.masked_invalid(zvals)
Explanation: 2. Make a masked array of the reflectivity so that pcolormesh will plot it
note that I need to find the missing data before I divide by factor=100 to
convert from int16 to float
End of explanation
first_time=date_times[0]
print('orbit start: {}'.format(first_time))
start_hour=6
start_minute=45
storm_start=starttime=dt.datetime(first_time.year,first_time.month,first_time.day,
start_hour,start_minute,0,tzinfo=tz.utc)
#
# get 3 minutes of data from the storm_start
#
storm_stop=storm_start + dt.timedelta(minutes=3)
print('storm start: {}'.format(storm_start))
hit = np.logical_and(date_times > storm_start,date_times < storm_stop)
lats = lats[hit]
lons=lons[hit]
prof_times=prof_times[hit]
zvals=zvals[hit,:]
height=height[hit,:]
date_times=date_times[hit]
Explanation: 3. Find the part of the orbit that corresponds to the 3 minutes containing the storm
You need to enter the start_hour and start_minute for the start time of your cyclone in the granule
End of explanation
great_circle=pyproj.Geod(ellps='WGS84')
distance=[0]
start=(lons[0],lats[0])
for index in np.arange(1,len(lons)):
azi12,azi21,step= great_circle.inv(lons[index-1],lats[index-1],lons[index],lats[index])
distance.append(distance[index-1] + step)
distance=np.array(distance)/meters2km
Explanation: 4. convert time to distance by using pyproj to get the great-circle distance between shots
End of explanation
%matplotlib inline
plt.close('all')
fig,ax=plt.subplots(1,1,figsize=(40,4))
from matplotlib import cm
from matplotlib.colors import Normalize
vmin=-30
vmax=20
the_norm=Normalize(vmin=vmin,vmax=vmax,clip=False)
cmap=cm.jet
cmap.set_over('w')
cmap.set_under('b',alpha=0.2)
cmap.set_bad('0.75') #75% grey
col=ax.pcolormesh(distance,height[0,:]/meters2km,zvals.T,cmap=cmap,
norm=the_norm)
ax.set(ylim=[0,17],xlim=(0,1200))
fig.colorbar(col,extend='both')
ax.set(xlabel='distance (km)',ylabel='height (km)')
fig.savefig('cloudsat.png')
plt.show()
!open cloudsat.png
Explanation: 5. Make the plot assuming that height is the same for every shot
i.e. assume that height[0,:] = height[1,:] = ...
in reality, the bin heights depend on the details of the radar returns, so
we would need to histogram the heights into a uniform set of bins -- ignore that for this qualitative picture
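As a sketch of what that regridding could look like for a single profile (an assumption added here, not part of the original notebook):
# interpolate one reflectivity profile onto a uniform height grid (heights in meters)
# the [::-1] assumes the radar bins run from the top of the atmosphere downwards
uniform_heights = np.linspace(0.0, 17.0e3, 125)
profile = np.interp(uniform_heights, height[0, ::-1], zvals[0, ::-1].filled(np.nan))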
End of explanation |
10,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the solutions to the Rampa program
This page presents the main solutions submitted for the Rampa program.
The goal is to understand the discrepancies between them and to compare their advantages and disadvantages.
Conceptual solution using indices
Step1: More efficient solutions
Step2: Copy as a tile
In the tile-based solution, a tile is created
from the ramp row built with arange, and this row is then tiled
over "lado" rows.
Step3: Using resize
The resize solution relies on the property of numpy's resize function
that fills the image raster by repeating it up to the final size.
Step4: Using repeat
NumPy's repeat repeats each pixel n times. To use it in this
problem, the ramp row (arange) is repeated "lado" times; after
reshaping it to two dimensions, a transpose is needed,
because the repetition happens horizontally while what we want is repetition in the
vertical, since in the final image it is the rows that are repeated.
Step5: Using the modulo operation
One solution that was not found among the submissions is the one using the "modulo" operator, i.e.
the remainder of the division. In this solution, a single vector with the size of the final
image is created and the "modulo lado" operator is applied to it. Then it only needs to be reshaped to
two dimensions. The drawback of this solution is that the vector needs to be 32
bits, since the total number of pixels is usually larger than 65535, which is the maximum
that can be represented in 'uint16'.
Step6: Testing
Step7: Comparing time
The time of each function is measured by running it a thousand times and taking the 2nd percentile. | Python Code:
def rr_indices( lado):
import numpy as np
r,c = np.indices( (lado, lado), dtype='uint16' )
return c
print(rr_indices(11))
Explanation: Analysis of the solutions to the Rampa program
This page presents the main solutions submitted for the Rampa program.
The goal is to understand the discrepancies between them and to compare their advantages and disadvantages.
Conceptual solution using indices:
In this approach, the matrix of column coordinates "c" is already the desired ramp. The only caveat is that,
since for one of the test values the ramp exceeds 255, it is important that the pixel type
be an integer with at least 16 bits.
End of explanation
def rr_broadcast( lado):
import numpy as np
row = np.arange(lado, dtype='uint16')
g = np.empty ( (lado,lado), dtype='uint16')
g[:,:] = row
return g
print(rr_broadcast(11))
Explanation: More efficient solutions:
There are at least three solutions that rank as the most efficient.
All of them have the cost of writing one row and then copying this row
into every row of the output image.
Copy with broadcast
The most interesting and most efficient one is the copy by broadcast (the subject of module 3).
In this solution the final image is declared
with empty, and then a row created with arange is copied into
every row of the image using numpy's broadcasting property.
End of explanation
def rr_tile( lado):
import numpy as np
f = np.arange(lado, dtype='uint16')
return np.tile( f, (lado,1))
print(rr_tile(11))
Explanation: Copy as a tile
In the tile-based solution, a tile is created
from the ramp row built with arange, and this row is then tiled
over "lado" rows.
End of explanation
def rr_resize( lado):
import numpy as np
f = np.arange(lado, dtype='uint16')
return np.resize(f, (lado,lado))
print(rr_resize(11))
Explanation: Using resize
The resize solution relies on the property of numpy's resize function
that fills the image raster by repeating it up to the final size.
End of explanation
def rr_repeat( lado):
import numpy as np
f = np.arange(lado, dtype='uint16')
return np.repeat( f, lado).reshape(lado, lado).transpose()
print(rr_repeat(11))
Explanation: Using repeat
NumPy's repeat repeats each pixel n times. To use it in this
problem, the ramp row (arange) is repeated "lado" times; after
reshaping it to two dimensions, a transpose is needed,
because the repetition happens horizontally while what we want is repetition in the
vertical, since in the final image it is the rows that are repeated.
End of explanation
def rr_modulo( lado):
import numpy as np
f = np.arange(lado * lado, dtype='int32')
return (f % lado).reshape(lado,lado)
print(rr_modulo(11))
Explanation: Using the modulo operation
One solution that was not found among the submissions is the one using the "modulo" operator, i.e.
the remainder of the division. In this solution, a single vector with the size of the final
image is created and the "modulo lado" operator is applied to it. Then it only needs to be reshaped to
two dimensions. The drawback of this solution is that the vector needs to be 32
bits, since the total number of pixels is usually larger than 65535, which is the maximum
that can be represented in 'uint16'.
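A quick check of that limit (an illustrative snippet added here):
import numpy as np
print(np.iinfo(np.uint16).max)                   # 65535
print(20057 * 20057 > np.iinfo(np.uint16).max)   # True for the size used in the timing below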
End of explanation
p = [rr_broadcast, rr_resize, rr_repeat, rr_tile, rr_indices, rr_modulo]
lado = 101
f = rr_indices(lado)
for func in p:
if not (func(lado) == f).all():
print('func %s failed' % func.__name__)
Explanation: Testing
End of explanation
import numpy as np
p = [rr_broadcast, rr_tile, rr_resize, rr_repeat, rr_modulo, rr_indices]
lado = 20057
for func in p:
print(func.__name__)
%timeit f = func(lado)
print()
Explanation: Comparing time
The time of each function is measured by running it a thousand times and taking the 2nd percentile.
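An explicit version of that measurement could look like this (a sketch; %timeit below reports its own statistics, this just mirrors the 1000-runs / 2nd-percentile idea on a smaller size):
import timeit
import numpy as np
runs = timeit.repeat(lambda: rr_broadcast(1001), number=1, repeat=1000)
print(np.percentile(runs, 2))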
End of explanation |
10,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hello world TensorFlow-Android
This notebook focuses on the basics of creating your first Android App based on TensorFlow. I've created a small DNN to classify the IRIS dataset. I've discussed training this dataset in detail on my YouTube channel. In this notebook I discuss what we need to do from the Python end.
Step 1
Step1: Preparing data for training
Step2: Step 1
Step3: Step 2
Step4: Step 3
Step5: Cross check whether input and output nodes are present in graph def
Step6: Cross checking input and output nodes in the .pb file | Python Code:
#import desired packages
import tensorflow as tf
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import os.path
import sys
# library for freezing the graph
from tensorflow.python.tools import freeze_graph
# library for optimising inference
from tensorflow.python.tools import optimize_for_inference_lib
%matplotlib inline
Explanation: Hello world TensorFlow-Android
This notebook focuses on the basics of creating your first Android App based on TensorFlow. I've created a small DNN to classify the IRIS dataset. I've discussed training this dataset in detail on my YouTube channel. In this notebook I discuss what we need to do from the Python end.
Step 1: Train a deep network
Step 2: Save the TF graph and model parameters
Step 3: Make the model ready for inference and export them
End of explanation
#import data
data=pd.read_csv('/Users/Enkay/Documents/Viky/python/tensorflow/iris/iris.data', names=['f1','f2','f3','f4','f5'])
#map data into arrays
s=np.asarray([1,0,0])
ve=np.asarray([0,1,0])
vi=np.asarray([0,0,1])
data['f5'] = data['f5'].map({'Iris-setosa': s, 'Iris-versicolor': ve,'Iris-virginica':vi})
#shuffle the data
data=data.iloc[np.random.permutation(len(data))]
data=data.reset_index(drop=True)
#training data
x_input=data.loc[0:105,['f1','f2','f3','f4']]
y_input=data['f5'][0:106]
#test data
x_test=data.loc[106:149,['f1','f2','f3','f4']]
y_test=data['f5'][106:150]
Explanation: Preparing data for training
End of explanation
#placeholders and variables. input has 4 features and output has 3 classes
x=tf.placeholder(tf.float32,shape=[None,4] , name="Input")
y_=tf.placeholder(tf.float32,shape=[None, 3])
#weight and bias
W=tf.Variable(tf.zeros([4,3]))
b=tf.Variable(tf.zeros([3]))
# model
#softmax function for multiclass classification
y = tf.nn.softmax((tf.matmul(x, W) + b),name="output")
# Cost function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
# Optimiser
train_step = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)
#calculating accuracy of our model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
output=tf.argmax(y,axis=1)
#session parameters
sess = tf.InteractiveSession()
#initialising variables
init = tf.global_variables_initializer()
sess.run(init)
#number of interations
epoch=2000
#Training
for step in range(epoch):
_, loss=sess.run([train_step,cross_entropy], feed_dict={x: x_input, y_:[t for t in y_input.as_matrix()]})
if step%500==0 :
print (loss)
# grabbing the default graph
g = tf.get_default_graph()
# every operations in our graph
[op.name for op in g.get_operations()]
Explanation: Step 1: Train a deep network
End of explanation
saver = tf.train.Saver()
model_directory='model_files/'
if not os.path.exists(model_directory):
os.makedirs(model_directory)
#saving the graph
tf.train.write_graph(sess.graph_def, model_directory, 'savegraph.pbtxt')
# saving the values of weights and other parameters of the model
saver.save(sess, 'model_files/model.ckpt')
Explanation: Step 2: Saving the model
End of explanation
# Freeze the graph
MODEL_NAME = 'iris'
input_graph_path = 'model_files/savegraph.pbtxt'
checkpoint_path = 'model_files/model.ckpt'
input_saver_def_path = ""
input_binary = False
output_node_names = "output"
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_frozen_graph_name = 'model_files/frozen_model_'+MODEL_NAME+'.pb'
output_optimized_graph_name = 'model_files/optimized_inference_model_'+MODEL_NAME+'.pb'
clear_devices = True
freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
input_binary, checkpoint_path, output_node_names,
restore_op_name, filename_tensor_name,
output_frozen_graph_name, clear_devices, "")
output_graph_def = optimize_for_inference_lib.optimize_for_inference(
sess.graph_def,
["Input"], # an array of the input node(s)
["output"], # an array of output nodes
tf.float32.as_datatype_enum)
Explanation: Step 3: Make the model ready for inference
Freezing the model
End of explanation
output_graph_def
with tf.gfile.GFile(output_optimized_graph_name, "wb") as f:
f.write(output_graph_def.SerializeToString())
Explanation: Cross check whether input and output nodes are present in graph def
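One simple way to do that check (a small sketch added here) is to list the node names kept in the optimized graph def and confirm that "Input" and "output" are among them:
node_names = [node.name for node in output_graph_def.node]
print("Input" in node_names, "output" in node_names)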
End of explanation
g = tf.GraphDef()
##checking frozen graph
g.ParseFromString(open(output_optimized_graph_name, 'rb').read())
g
g1 = tf.GraphDef()
##checking frozen graph
g1.ParseFromString(open("model_files/frozen_model_iris.pb", 'rb').read())
g1
g1==g
Explanation: Cross checking input and output nodes in the .pb file
End of explanation |
10,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
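Illustrative only: BOOLEAN properties such as 8.4 take the Python literals listed under valid choices.
# DOC.set_value(False)   # hypothetical answer -- use your model's actual setting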
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
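Illustrative only: INTEGER properties such as 9.4 are set with a plain number; the value below is a made-up time step, not a documented one.
# DOC.set_value(3600)   # hypothetical thermodynamic time step in seconds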
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
10,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting PRFs in K2 TPFs from Campaign 9.1
In this simple tutorial we will show how to perform PRF photometry in a K2 target pixel file using PyKE and oktopus.
This notebook was created with the following versions of PyKE and oktopus
Step1: 1. Importing the necessary packages
As usual, let's start off by importing a few packages from the standard scientific Python stack
Step2: Since we will perform PRF photometry in a Target Pixel File (TPF), let's import KeplerTargetPixelFile and KeplerQualityFlags classes from PyKE
Step3: It's always wise to take a prior look at the data, therefore, let's import photutils so that we can perform aperture photometry
Step4: Additionally, we will need a model for the Pixel Response Function and for the scene. We can import those from PyKE, as well.
Step5: Finally, we will use oktopus to handle our statistical assumptions
Step6: 2. Actual PRF Photometry
Let's start by instantiating a KeplerTargetPixelFile object (you may either give a url or a local path to the file)
Step7: Note that we set quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK, which means that cadences that have specific QUALITY flags such as Attitude tweak, Safe mode, etc, will be ignored.
Let's take a look at the pixel data using the plot method from KeplerTargetPixelFile
Step8: Now, let's create circular apertures around the targets using photutils
Step9: We can also use photutils to create aperture photometry light curves from the drawn apertures
Step10: Let's visualize the light curves
Step11: Looking at the data before performing PRF photometry is important because it will give us insights on how to construct our priors on the parameters we want to estimate.
Another important part of PRF photometry is the background flux levels. We can either estimate it beforehand and subtract it from the target fluxes or we can let the background be a free parameter during the estimation process. We will choose the latter here, because we will assume that our data comes from Poisson distributions. Therefore, we want the pixel values to be positive everywhere. And, more precisely, remember that the subtraction of two Poisson random variables is not a Poisson rv. Therefore, subtracting any value would break our statistical assumption.
In any case, let's take a look at the background levels using the estimate_background_per_pixel method from KeplerTargetPixelFile class.
Step12: Ooops! Looks like there is something funny happening on that frame because the background levels are way bellow zero.
Let's plot the pixel data to see what's going on
Step13: Ok, we see that there is something unusual here. Let's just ignore this cadence for now and move on to our PRF photometry.
Let's redraw the background light curve using more meaningful bounds
Step14: Now, let's create our PRF model using the SimpleKeplerPRF class
Step15: This is a simple model based on the PRF calibration data. It depends on the channel and the dimensions of the target pixel file that we want to model. This model is parametrized by stellar positions and flux.
To combine one or more PRF models and a background model, we can use the SceneModel class
Step16: Note that this class takes a list of PRF objects as required inputs. Additionally, a parameter named bkg_model can be used to model the background variations. The default is a constant value for every frame.
Now that we have taken a look at the data and created our model, let's put our assumptions on the table by defining a prior distribution for the parameters.
Let's choose a uniform prior for the whole parameter space. We can do that using the UniformPrior class
Step17: This class takes two parameters
Step18: Now, we can feed both the SceneModel and the UniformPrior objects to the PRFPhotometry class
Step19: Finally, we use the fit method in which we have to pass the pixel data tpf.flux.
Step20: Note that our Poisson likelihood assumption is embedded in the PRFPhotometry class. That can be changed while creating PRFPhotometry through the loss_function parameter.
Now, let's retrieve the fitted parameters which are store in the opt_params attribute
Step21: Let's visualize the parameters as a function of time
Step22: Oops! That outlier is probably the funny cadence we identified before
Step23: We can retrieve the residuals using the get_residuals method
Step24: We can also get the pixels time series for every single model as shown below
Step25: Let's then visualize the single models | Python Code:
import pyke
pyke.__version__
import oktopus
oktopus.__version__
Explanation: Fitting PRFs in K2 TPFs from Campaign 9.1
In this simple tutorial we will show how to perform PRF photometry in a K2 target pixel file using PyKE and oktopus.
This notebook was created with the following versions of PyKE and oktopus:
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
rc('text', usetex=True)
font = {'family' : 'serif',
'size' : 22,
'serif' : 'New Century Schoolbook'}
rc('font', **font)
Explanation: 1. Importing the necessary packages
As usual, let's start off by importing a few packages from the standard scientific Python stack:
End of explanation
from pyke import KeplerTargetPixelFile, KeplerQualityFlags
Explanation: Since we will perform PRF photometry in a Target Pixel File (TPF), let's import KeplerTargetPixelFile and KeplerQualityFlags classes from PyKE:
End of explanation
import photutils.aperture as apr
Explanation: It's always wise to take a prior look at the data, therefore, let's import photutils so that we can perform aperture photometry:
End of explanation
from pyke.prf import SimpleKeplerPRF, SceneModel, PRFPhotometry
Explanation: Additionally, we will need a model for the Pixel Response Function and for the scene. We can import those from PyKE, as well.
End of explanation
from oktopus import UniformPrior
Explanation: Finally, we will use oktopus to handle our statistical assumptions:
End of explanation
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c91/'
'224300000/64000/ktwo224364733-c91_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
Explanation: 2. Actual PRF Photometry
Let's start by instantiating a KeplerTargetPixelFile object (you may either give a url or a local path to the file):
End of explanation
tpf.plot(frame=100)
Explanation: Note that we set quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK, which means that cadences that have specific QUALITY flags such as Attitude tweak, Safe mode, etc, will be ignored.
Let's take a look at the pixel data using the plot method from KeplerTargetPixelFile:
End of explanation
tpf.plot(frame=100)
apr.CircularAperture((941.5, 878.5), r=2).plot(color='cyan')
apr.CircularAperture((944.5, 875.5), r=2).plot(color='cyan')
Explanation: Now, let's create circular apertures around the targets using photutils:
End of explanation
lc1, lc2 = np.zeros(len(tpf.time)), np.zeros(len(tpf.time))
for i in range(len(tpf.time)):
lc1[i] = apr.CircularAperture((941.5 - tpf.column, 878.5 - tpf.row), r=2).do_photometry(tpf.flux[i])[0]
lc2[i] = apr.CircularAperture((944.5 - tpf.column, 875.5 - tpf.row), r=2).do_photometry(tpf.flux[i])[0]
Explanation: We can also use photutils to create aperture photometry light curves from the drawn apertures:
End of explanation
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, lc1, 'o', markersize=2)
plt.xlabel("Time")
plt.ylabel("Flux")
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, lc2, 'ro', markersize=2)
plt.xlabel("Time")
plt.ylabel("Flux")
Explanation: Let's visualize the light curves:
End of explanation
bkg = tpf.estimate_bkg_per_pixel(method='mode')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, bkg, 'ko', markersize=2)
plt.xlabel("Time")
plt.ylabel("Flux")
Explanation: Looking at the data before performing PRF photometry is important because it will give us insights on how to construct our priors on the parameters we want to estimate.
Another important part of PRF photometry is the background flux levels. We can either estimate it beforehand and subtract it from the target fluxes or we can let the background be a free parameter during the estimation process. We will choose the latter here, because we will assume that our data comes from Poisson distributions. Therefore, we want the pixel values to be positive everywhere. And, more precisely, remember that the subtraction of two Poisson random variables is not a Poisson rv. Therefore, subtracting any value would break our statistical assumption.
In any case, let's take a look at the background levels using the estimate_bkg_per_pixel method of the KeplerTargetPixelFile class.
End of explanation
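For reference -- a standard probability fact, not something taken from the PyKE documentation: if $X \sim \mathrm{Poisson}(\lambda_1)$ and $Y \sim \mathrm{Poisson}(\lambda_2)$ are independent, their difference $X - Y$ follows a Skellam distribution, whose support includes negative integers, so it is indeed not Poisson.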
funny_cadence = np.argwhere(bkg < 0)[0][0]
tpf.plot(frame=funny_cadence)
Explanation: Ooops! Looks like there is something funny happening on that frame because the background levels are way below zero.
Let's plot the pixel data to see what's going on:
End of explanation
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, bkg, 'ko', markersize=2)
plt.xlabel("Time")
plt.ylabel("Flux")
plt.ylim(2150, 2400)
Explanation: Ok, we see that there is something unusual here. Let's just ignore this cadence for now and move on to our PRF photometry.
Let's redraw the background light curve using more meaningful bounds:
End of explanation
sprf = SimpleKeplerPRF(tpf.channel, tpf.shape[1:], tpf.column, tpf.row)
Explanation: Now, let's create our PRF model using the SimpleKeplerPRF class:
End of explanation
scene = SceneModel(prfs=[sprf] * 2)
Explanation: This is a simple model based on the PRF calibration data. It depends on the channel and the dimensions of the target pixel file that we want to model. This model is parametrized by stellar positions and flux.
To combine one or more PRF models and a background model, we can use the SceneModel class:
End of explanation
prior = UniformPrior(lb=[10e3, 940., 877., 10e3, 943., 874., 1e3],
ub=[60e3, 944., 880., 60e3, 947., 877., 4e3])
Explanation: Note that this class takes a list of PRF objects as required inputs. Additionally, a parameter named bkg_model can be used to model the background variations. The default is a constant value for every frame.
Now that we have taken a look at the data and created our model, let's put our assumptions on the table by defining a prior distribution for the parameters.
Let's choose a uniform prior for the whole parameter space. We can do that using the UniformPrior class:
End of explanation
scene.plot(*prior.mean)
Explanation: This class takes two parameters: lb, ub. lb stands for lower bound and ub for upper bound.
The order of those values should correspond to the order of the parameters in our model. For example, an object from
SimpleKeplerPRF takes flux, center_col, and center_row. Therefore, we need to define the prior values in that same order. And since we have two targets, that results in six parameters that have to be defined. The last parameter is the background constant.
Let's visualize our model evaluated at the mean value given by our prior probability:
End of explanation
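To make that ordering explicit, here is a small reference sketch; the index assignments are inferred from how opt_params is unpacked later in this notebook, and the labels themselves are just for readability, not part of the PyKE API.
# index 0-2 : flux, center_col, center_row of the first target
# index 3-5 : flux, center_col, center_row of the second target
# index 6   : constant background level
param_names = ['flux_1', 'col_1', 'row_1', 'flux_2', 'col_2', 'row_2', 'bkg']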
phot = PRFPhotometry(scene_model=scene, prior=prior)
Explanation: Now, we can feed both the SceneModel and the UniformPrior objects to the PRFPhotometry class:
End of explanation
opt_params = phot.fit(tpf.flux)
Explanation: Finally, we use the fit method in which we have to pass the pixel data tpf.flux.
End of explanation
flux_1 = opt_params[:, 0]
xcenter_1 = opt_params[:, 1]
ycenter_1 = opt_params[:, 2]
flux_2 = opt_params[:, 3]
xcenter_2 = opt_params[:, 4]
ycenter_2 = opt_params[:, 5]
bkg_hat = opt_params[:, 6]
Explanation: Note that our Poisson likelihood assumption is embedded in the PRFPhotometry class. That can be changed while creating PRFPhotometry through the loss_function parameter.
Now, let's retrieve the fitted parameters which are stored in the opt_params attribute:
End of explanation
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, flux_1, 'o', markersize=2)
plt.ylabel("Flux")
plt.xlabel("Time")
plt.ylim()
Explanation: Let's visualize the parameters as a function of time:
End of explanation
outlier = np.argwhere(flux_1 > 25000)[0][0]
outlier == funny_cadence
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, flux_1, 'o', markersize=2)
plt.ylabel("Flux")
plt.xlabel("Time")
plt.ylim(20500, 22000)
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, xcenter_1, 'o', markersize=2)
plt.ylabel("Column position")
plt.xlabel("Time")
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, ycenter_1, 'o', markersize=2)
plt.ylabel("Row position")
plt.xlabel("Time")
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, flux_2, 'ro', markersize=2)
plt.ylim(17000, 19000)
plt.ylabel("Flux")
plt.xlabel("Time")
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, xcenter_2, 'ro', markersize=2)
plt.ylabel("Column position")
plt.xlabel("Time")
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, ycenter_2, 'ro', markersize=2)
plt.ylabel("Row position")
plt.xlabel("Time")
plt.figure(figsize=[18, 4])
plt.plot(tpf.time, bkg_hat, 'ko', markersize=2)
plt.ylabel("Background Flux")
plt.xlabel("Time")
plt.ylim(2350, 2650)
Explanation: Oops! That outlier is probably the funny cadence we identified before:
End of explanation
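A minimal sketch of how one might drop that cadence before refitting -- this is an optional workflow assumption, not a step from the original tutorial.
good = np.arange(len(tpf.time)) != funny_cadence   # boolean mask excluding the bad frame
flux_clean = tpf.flux[good]                        # pixel data without the funny cadence
# opt_params_clean = phot.fit(flux_clean)          # refit on the cleaned cube if desired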
residuals = phot.get_residuals()
plt.imshow(residuals[100], origin='lower')
plt.colorbar()
Explanation: We can retrieve the residuals using the get_residuals method:
End of explanation
prf_1 = np.array([scene.prfs[0](*phot.opt_params[i, 0:3])
for i in range(len(tpf.time))])
prf_2 = np.array([scene.prfs[1](*phot.opt_params[i, 3:6])
for i in range(len(tpf.time))])
Explanation: We can also get the pixel time series for every single model, as shown below:
End of explanation
plt.imshow(prf_1[100], origin='lower', extent=(940, 949, 872, 880))
plt.colorbar()
plt.imshow(prf_2[100], origin='lower', extent=(940, 949, 872, 880))
plt.colorbar()
Explanation: Let's then visualize the single models:
End of explanation |
10,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
자료 안내
Step1: 데이터 불러오기 및 처리
오늘 사용할 데이터는 다음과 같다.
미국 51개 주(State)별 담배(식물) 도매가격 및 판매일자
Step2: read_csv 함수의 리턴값은 DataFrame 이라는 자료형이다.
Step3: DataFrame 자료형
자세한 설명은 다음 시간에 추가될 것임. 우선은 아래 사이트를 참조할 수 있다는 정도만 언급함.
(5.2절 내용까지면 충분함)
http
Step4: 인자를 주면 원하는 만큼 보여준다.
Step5: 파일이 매우 많은 수의 데이터를 포함하고 있을 경우, 맨 뒷쪽 부분을 확인하고 싶으면
tail 메소드를 활용한다. 사용법은 head 메소드와 동일하다.
아래 명령어를 통해 Weed_Price.csv 파일에 22899개의 데이터가 저장되어 있음을 확인할 수 있다.
Step6: 결측치 존재 여부
위 결과를 보면 LowQ 목록에 NaN 이라는 기호가 포함되어 있다. NaN은 Not a Number, 즉, 숫자가 아니다라는 의미이며, 데이터가 애초부터 존재하지 않았거나 누락되었음을 의미한다.
DataFrame의 dtypes
DataFrame 자료형의 dtypes 속성을 이용하면 열별 목록에 사용된 자료형을 확인할 수 있다.
Weed_Price.csv 파일을 읽어 들인 prices_pd 변수에 저장된 DataFrame 값의 열별 목록에 사용된 자료형을 보여준다.
주의
Step7: 정렬 및 결측치 채우기
정렬하기
주별로, 날짜별로 데이터를 정렬한다.
Step8: 결측치 채우기
평균을 구하기 위해서는 결측치(누락된 데이터)가 없어야 한다.
여기서는 이전 줄의 데이터를 이용하여 채우는 방식(method='ffill')을 이용한다.
주의
Step9: 정렬된 데이터의 첫 부분은 아래와 같이 알라바마(Alabama) 주의 데이터만 날짜별로 순서대로 보인다.
Step10: 정렬된 데이터의 끝 부분은 아래와 같이 요밍(Wyoming) 주의 데이터만 날짜별로 순서대로 보인다.
이제 결측치가 더 이상 존재하지 않는다.
Step11: 데이터 분석하기
Step12: 캘리포니아 주에서 거래된 첫 5개의 데이터를 확인해보자.
Step13: HighQ 열 목록에 있는 값들의 총합을 구해보자.
주의
Step14: HighQ 열 목록에 있는 값들의 개수를 확인해보자.
주의
Step15: 이제 캘리포니아 주에서 거래된 HighQ의 담배가격의 평균값을 구할 수 있다.
Step16: 중앙값(Median)
캘리포니아 주에서 거래된 HighQ의 담배가격의 중앙값을 구하자.
중앙값 = 데이터를 크기 순으로 정렬하였을 때 가장 가운데에 위치한 수
데이터의 크기 n이 홀수일 때
Step17: 따라서 중앙값은 $\frac{\text{ca_count}-1}{2}$번째에 위치한 값이다.
주의
Step18: 인덱스 로케이션 함수인 iloc 함수를 활용한다.
주의
Step19: 최빈값(Mode)
캘리포니아 주에서 거래된 HighQ의 담배가격의 최빈값을 구하자.
최빈값 = 가장 자주 발생한 데이터
주의
Step20: 연습문제
연습
지금까지 구한 평균값, 중앙값, 최빈값을 구하는 함수가 이미 DataFrame과 Series 자료형의 메소드로 구현되어 있다.
아래 코드들을 실행하면서 각각의 코드의 의미를 확인하라.
Step21: 연습
캘리포니아 주에서 2013년, 2014년, 2015년에 거래된 HighQ의 담배(식물) 도매가격의 평균을 각각 구하라.
힌트
Step22: 견본답안2
아래와 같은 방식을 이용하여 인덱스 정보를 구하여 슬라이싱 기능을 활용할 수도 있다.
슬라이싱을 활용하여 연도별 평균을 구하는 방식은 본문 내용과 동일한 방식을 따른다.
Step23: year_starts에 담긴 숫자들의 의미는 다음과 같다.
0번줄부터 2013년도 거래가 표시된다.
5번줄부터 2014년도 거래가 표시된다.
369번줄부터 2015년도 거래가 표시된다. | Python Code:
import numpy as np
import pandas as pd
from datetime import datetime as dt
from scipy import stats
Explanation: Source note: the material covered here was created with reference to the following site.
https://github.com/rouseguy/intro2stats
Notes
The content covered today is best treated as a first introduction to the pandas module.
If you know how to analyze data in an Excel spreadsheet and compute things like averages, the material below should not be hard to follow. In other words, basic knowledge of numpy arrays and of Excel is enough to understand the essentials.
If you want a more detailed explanation, it is worth reading the page below in advance (up to section 5.2 is sufficient).
Still, we recommend first skimming the material below while comparing it with the corresponding Excel features.
http://sinpong.tistory.com/category/Python%20for%20data%20analysis
Computing averages
Today's main example
Analyze wholesale price data for cannabis traded in the United States and compute the average of the traded wholesale prices.
Mean
Median
Mode
For a more detailed discussion of averages, see the attached lecture note: GongSu21-Averages.pdf
Main modules used
The modules below are the basic tools for statistical analysis.
pandas: a module dedicated to statistical analysis
built on top of numpy and specialized for statistical analysis
provides functionality that works much like Microsoft Excel
datetime: a module that helps represent dates and times properly
scipy: a module supporting numerical computation, engineering mathematics, and more
Introducing pandas
What is pandas?
a Python module providing fast, easy data-analysis tools
built on top of numpy
pandas features
a wide range of operations, including data sorting
powerful indexing and slicing
time series support
handling of missing data
SQL-like relational (database) operations
Note: a more detailed look at the pandas module is left for next time.
For now it is enough to get a feel for how the pandas module is used.
End of explanation
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
Explanation: Loading and processing the data
The data we will use today is the following.
Wholesale cannabis prices and sale dates for each of the 51 US states: Weed_price.csv
The figure below shows part of the Weed_Price.csv file, which contains the state-by-state cannabis sales data, opened in Excel.
The actual dataset contains 22899 records; the figure shows only five of them.
* Note: line 1 holds the table's column names.
* Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png" style="width:600">
</td>
</tr>
</table>
</p>
Loading a csv file
use the read_csv function of the pandas module
the return value of read_csv is a special data type called DataFrame
think of it as a spreadsheet like the Excel view shown above
Let's load the csv file mentioned above using pandas' read_csv function.
Note: when loading Weed_Price.csv, a keyword argument named parse_dates is used.
* parse_dates keyword argument: controls how dates are parsed while reading.
* passing [-1] here means: take the date data in the source as-is, without altering it.
* as the Excel screenshot shows, the dates in the last column do not really need any conversion.
End of explanation
type(prices_pd)
Explanation: The return value of the read_csv function has the data type DataFrame.
End of explanation
prices_pd.head()
Explanation: The DataFrame type
A detailed explanation will be added next time. For now we simply point to the site below as a reference.
(up to section 5.2 is sufficient)
http://sinpong.tistory.com/category/Python%20for%20data%20analysis
Comparing the DataFrame type with an Excel spreadsheet
Checking the first five rows of the loaded Weed_Price.csv file, they match what we saw in the Excel screenshot earlier.
Only the row and column labels differ slightly.
* In Excel the column labels are A, B, C, ..., H, and the source file's column names are pushed down to row 1.
* In Excel the row labels are 1, 2, 3, ....
However, read_csv loads the file a bit differently.
* The column labels are taken directly from the source file's column names.
* The row labels are 0, 1, 2, ....
To look at the first few rows of the data, use the DataFrame head method.
Called without an argument, it shows the first 5 rows.
End of explanation
prices_pd.head(10)
Explanation: If you pass an argument, it shows as many rows as you ask for.
End of explanation
prices_pd.tail()
Explanation: When a file contains a very large number of records and you want to inspect the end of it,
use the tail method. Its usage is the same as head.
The command below confirms that Weed_Price.csv holds 22899 records.
End of explanation
prices_pd.dtypes
Explanation: Checking for missing values
The output above shows the symbol NaN in the LowQ column. NaN means Not a Number; it indicates that the value never existed or was lost.
The DataFrame dtypes attribute
Using the dtypes attribute of a DataFrame you can check the data type used in each column.
It shows the per-column data types of the DataFrame stored in the prices_pd variable, which holds the contents of Weed_Price.csv.
Notes:
* the dtype attribute of a numpy array holds a single data type.
* a column can only contain values of a single data type.
That is, each column can be regarded as a numpy array.
* the object dtype used for the State column means a pointer to where a string is stored.
* because string lengths cannot be fixed in advance, the strings are stored elsewhere and the pointer records their location,
and the stored string is looked up through that pointer when needed.
* the "dtype: object" shown on the last line can simply be read as "a data type for complex, heterogeneous data".
End of explanation
prices_pd.sort_values(['State', 'date'], inplace=True)
Explanation: Sorting and filling missing values
Sorting
Sort the data by state and by date.
End of explanation
prices_pd.fillna(method='ffill', inplace=True)
Explanation: Filling missing values
To compute an average, there must be no missing values.
Here we fill them using the previous row's value (method='ffill').
Note: the reason we sorted first is that, when a value is missing, we want to reuse a price traded in the same state (State)
at a nearby point in time.
End of explanation
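As a quick sanity check (generic pandas, not part of the original lesson), you can count the remaining missing values after the forward fill; ideally every column reports 0, and any leftovers would be leading NaNs that a forward fill cannot reach.
prices_pd.isnull().sum()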
prices_pd.head()
Explanation: The first part of the sorted data now shows only Alabama, ordered by date, as expected.
End of explanation
prices_pd.tail()
Explanation: The last part of the sorted data shows only Wyoming, ordered by date.
There are no missing values any more.
End of explanation
california_pd = prices_pd[prices_pd.State == "California"].copy(True)
Explanation: Analyzing the data: averages
We compute the average wholesale cannabis price for the state of California.
Mean
mean = the sum of all values divided by the number of values
$X$: a variable representing the values in the data
$n$: the number of values in the data
$\Sigma\, X$: the sum of all values in the data
$$\text{mean}(\mu) = \frac{\Sigma\, X}{n}$$
First we need to extract only the California rows, using a boolean mask index.
End of explanation
california_pd.head(20)
Explanation: Let's check the first few records traded in California.
End of explanation
ca_sum = california_pd['HighQ'].sum()
ca_sum
Explanation: Let's compute the total of the values in the HighQ column.
Note: remember to use the sum() method.
End of explanation
ca_count = california_pd['HighQ'].count()
ca_count
Explanation: Let's check the number of values in the HighQ column.
Note: remember to use the count() method.
End of explanation
# mean wholesale price of high-quality (HighQ) cannabis traded in California
ca_mean = ca_sum / ca_count
ca_mean
Explanation: Now we can compute the mean HighQ cannabis price traded in California.
End of explanation
ca_count
Explanation: Median
Let's find the median HighQ cannabis price traded in California.
median = the value located exactly in the middle when the data are sorted by size
when the data size n is odd: the value at position $\frac{n+1}{2}$
when the data size n is even: the average of the values at positions $\frac{n}{2}$ and $\frac{n}{2}+1$
Here the data size is 449, which is odd.
End of explanation
ca_highq_pd = california_pd.sort_values(['HighQ'])
ca_highq_pd.head()
Explanation: Therefore the median is the value at position $\frac{\text{ca_count}-1}{2}$.
Note: indexing starts at 0, so the median position is shifted forward by one.
End of explanation
# median wholesale price of high-quality (HighQ) cannabis traded in California
ca_median = ca_highq_pd.HighQ.iloc[int((ca_count-1)/ 2)]
ca_median
Explanation: We use the index-location method iloc.
Note: iloc works with positional index numbers.
The index numbers shown in the table above are the ones assigned when Weed_Price.csv was first loaded.
In ca_highq_pd they are kept only for reference; the index passed to iloc starts counting again from 0.
Therefore, using the original reference index as in the code below would not give the correct answer.
End of explanation
# most frequently traded (mode) wholesale price of HighQ cannabis in California
ca_mode = ca_highq_pd.HighQ.value_counts().index[0]
ca_mode
Explanation: Mode
Let's find the mode of the HighQ cannabis prices traded in California.
mode = the most frequently occurring value
Note: remember to use the value_counts() method.
End of explanation
california_pd.mean()
california_pd.mean().HighQ
california_pd.median()
california_pd.mode()
california_pd.mode().HighQ
california_pd.HighQ.mean()
california_pd.HighQ.median()
california_pd.HighQ.mode()
Explanation: Exercises
Exercise
The mean, median, and mode we computed above are already implemented as methods of the DataFrame and Series types.
Run the code below and make sure you understand what each line does.
End of explanation
sum = 0
count = 0
for index in np.arange(len(california_pd)):
if california_pd.iloc[index]['date'].year == 2014:
sum += california_pd.iloc[index]['HighQ']
count += 1
sum/count
Explanation: Exercise
Compute the mean HighQ wholesale cannabis price traded in California in 2013, 2014, and 2015 separately.
Hint: california_pd.iloc[0]['date'].year
Sample answer 1
The mean price for 2014 can be computed as follows.
the sum variable: holds the total of the prices traded in 2014.
the count variable: holds the number of trades in 2014.
End of explanation
years = np.arange(2013, 2016)
year_starts = [0]
for yr in years:
for index in np.arange(year_starts[-1], len(california_pd)):
if california_pd.iloc[index]['date'].year == yr:
continue
else:
year_starts.append(index)
break
year_starts
Explanation: Sample answer 2
Alternatively, you can obtain index information as below and then use slicing.
Computing the yearly means with slicing then follows the same approach as in the main text.
End of explanation
california_pd.iloc[4]
california_pd.iloc[5]
california_pd.iloc[368]
california_pd.iloc[369]
Explanation: The numbers stored in year_starts mean the following.
2013 trades start at row 0.
2014 trades start at row 5.
2015 trades start at row 369.
End of explanation |
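A sketch of the slicing approach described above, reusing the notebook's years and year_starts and assuming the last slice runs to the end of the frame.
boundaries = list(year_starts) + [len(california_pd)]
for yr, lo, hi in zip(years, boundaries[:-1], boundaries[1:]):
    print(yr, california_pd['HighQ'].iloc[lo:hi].mean())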
10,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaia DR2 variability catalogs
Part II
Step1: The catalog has many columns. What are they?
Step2: Gaia Documentation section 14.3.6 explains that some of the columns are populated with arrays! So this catalog can be thought of as a table-of-tables. The typical length of the tables are small, usually just 3-5 entries.
Step3: I think the segments further consist of lightcurves, for which merely the summary statistics are listed here, but I'm not sure.
Since all the files are still only 150 MB, we can just read in all the files and concatenate them.
Step4: This step only takes a few seconds. Let's use a progress bar to keep track.
Step5: We have 147,535 rotationally modulated variable stars. What is the typical number of segments across the entire catalog?
Step6: What are these segments? Are they the arbitrary Gaia segments, or are they something else?
Let's ask our first question
Step7: Next up
Step8: Wow, $>0.4$ magnitudes is a lot! Most have much lower amplitudes.
The problem with max activity index is that it may be sensitive to flares. Instead, let's use the $A$ and $B$ coefficients of the $\sin{}$ and $\cos{}$ functions
Step9: Gasp! The arrays are actually stored as strings! We need to first convert them to numpy arrays.
Step10: Only run this once
Step11: Let's compare the max_activity_index with the newly determined mean amplitude.
The $95^{th}$ to $5^{th}$ percentile should be almost-but-not-quite twice the amplitude
Step12: The lines track decently well. There's some scatter! Probably in part due to non-sinusoidal behavior.
Let's convert the mean magnitude amplitude to an unspotted-to-spotted flux ratio
Step13: Promising!
Let's read in the Kepler data and cross-match! This cross-match with Gaia and K2 data comes from Meg Bedell.
Step14: We only want a few of the 95 columns, so let's select a subset.
Step15: The to_pandas() method returns byte strings. Arg! We'll have to clean it. Here is a reuseable piece of code
Step16: We can merge (e.g. SQL join) these two dataframes on the source_id key.
Step17: We'll only keep columns that are in both catalogs.
Step18: Only 524 sources appear in both catalogs! Boo! Well, better than nothing!
It's actually even fewer K2 targets, since some targets are single in K2 but have two or more matches in Gaia. These could be background stars or bona-fide binaries. Let's flag them.
Step19: Let's cull the list and just use the "single" stars, which is really the sources for which Gaia did not identify more than one target within 1 arcsecond.
Step20: A mere 224 sources! Boo hoo!
Step21: The points look drawn from their parent population. | Python Code:
# %load /Users/obsidian/Desktop/defaults.py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
! du -hs ../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv
df0 = pd.read_csv('../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv/VariRotationModulation_0.csv.gz')
df0.shape
Explanation: Gaia DR2 variability catalogs
Part II: Rotation modulation
In this notebook we explore what's in the VariRotationModulation catalog from Gaia DR2. We eventually cross-match it with K2 and see what it looks like!
gully
May 2, 2018
End of explanation
df0.columns
Explanation: The catalog has many columns. What are they?
End of explanation
df0.num_segments.describe()
df0.loc[1]
Explanation: Gaia Documentation section 14.3.6 explains that some of the columns are populated with arrays! So this catalog can be thought of as a table-of-tables. The typical length of the tables is small, usually just 3-5 entries.
End of explanation
import glob
fns = glob.glob('../data/dr2/Gaia/gdr2/vari_rotation_modulation/csv/VariRotationModulation_*.csv.gz')
n_files = len(fns)
Explanation: I think the segments further consist of lightcurves, for which merely the summary statistics are listed here, but I'm not sure.
Since all the files are still only 150 MB, we can just read in all the files and concatenate them.
End of explanation
from astropy.utils.console import ProgressBar
df_rotmod = pd.DataFrame()
with ProgressBar(n_files, ipython_widget=True) as bar:
for i, fn in enumerate(fns):
df_i = pd.read_csv(fn)
df_rotmod = df_rotmod.append(df_i, ignore_index=True)
bar.update()
df_rotmod.shape
Explanation: This step only takes a few seconds. Let's use a progress bar to keep track.
End of explanation
df_rotmod.num_segments.hist(bins=11)
plt.yscale('log')
plt.xlabel('$N_{\mathrm{segments}}$')
plt.ylabel('occurence');
Explanation: We have 147,535 rotationally modulated variable stars. What is the typical number of segments across the entire catalog?
End of explanation
df_rotmod.best_rotation_period.hist(bins=30)
plt.yscale('log')
plt.xlabel('$P_{\mathrm{rot}}$ [days]')
plt.ylabel('$N$')
Explanation: What are these segments? Are they the arbitrary Gaia segments, or are they something else?
Let's ask our first question: what is the distribution of periods?
best_rotation_period : Best rotation period (double, Time[day])
this field is an estimate of the stellar rotation period and is obtained by averaging the periods obtained in the different segments
End of explanation
df_rotmod.max_activity_index.hist(bins=30)
plt.yscale('log')
plt.xlabel('$95^{th} - 5^{th}$ variability percentile[mag]')
plt.ylabel('$N$');
Explanation: Next up: what is the distribution of amplitudes?
We will use the segments_activity_index:
segments_activity_index : Activity Index in segment (double, Magnitude[mag])
this array stores the activity indexes measured in the different segments. In a given segment the amplitude of variability A is taken as an index of the magnetic activity level. The amplitude of variability is measured by means of the equation:
$$A=mag_{95}−mag_{5}$$
where $mag_{95}$ and $mag_{5}$ are the 95-th and the 5-th percentiles of the G-band magnitude values.
End of explanation
val = df_rotmod.segments_cos_term[0]
val
Explanation: Wow, $>0.4$ magnitudes is a lot! Most have much lower amplitudes.
The problem with max activity index is that it may be sensitive to flares. Instead, let's use the $A$ and $B$ coefficients of the $\sin{}$ and $\cos{}$ functions:
segments_cos_term : Coefficient of cosine term of linear fit in segment (double, Magnitude[mag])
if a significative period T0 is detected in a time-series segment, then the points of the time-series segment are fitted with the function
$$mag(t) = mag_0 + A\cos(2\pi T_0 t) + B \sin(2\pi T_0 t)$$
Let's call the total amplitude $\alpha$, then we can apply:
$\alpha = \sqrt{A^2+B^2}$
End of explanation
np.array(eval(val))
NaN = np.NaN #Needed for all the NaN values in the strings.
clean_strings = lambda str_in: np.array(eval(str_in))
Explanation: Gasp! The arrays are actually stored as strings! We need to first convert them to numpy arrays.
End of explanation
if type(df_rotmod['segments_cos_term'][0]) == str:
df_rotmod['segments_cos_term'] = df_rotmod['segments_cos_term'].apply(clean_strings)
df_rotmod['segments_sin_term'] = df_rotmod['segments_sin_term'].apply(clean_strings)
else:
print('Skipping rewrite.')
amplitude_vector = (df_rotmod.segments_sin_term**2 + df_rotmod.segments_cos_term**2)**0.5
df_rotmod['mean_amplitude'] = amplitude_vector.apply(np.nanmean)
Explanation: Only run this once:
End of explanation
amp_conv_factor = 1.97537
x_dashed = np.linspace(0,1, 10)
y_dashed = amp_conv_factor * x_dashed
plt.figure(figsize=(5,5))
plt.plot(df_rotmod.mean_amplitude, df_rotmod.max_activity_index, '.', alpha=0.05)
plt.plot(x_dashed, y_dashed, 'k--')
plt.xlim(0,0.5)
plt.ylim(0,1);
plt.xlabel(r'Mean cyclic amplitude, $\alpha$ [mag]')
plt.ylabel(r'$95^{th} - 5^{th}$ variability percentile[mag]');
Explanation: Let's compare the max_activity_index with the newly determined mean amplitude.
The $95^{th}$ to $5^{th}$ percentile should be almost-but-not-quite twice the amplitude:
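An added aside: the 1.97537 conversion factor used for the dashed line above is presumably just the 95th-minus-5th quantile spread of a pure sinusoid, whose values follow an arcsine distribution with quantiles $q(p)=\alpha\sin(\pi(p-\tfrac{1}{2}))$ — a quick check:
```python
# q(0.95) - q(0.05) = 2*sin(0.45*pi) * alpha  ~=  1.97537 * alpha
2 * np.sin(0.45 * np.pi)
```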
End of explanation
df_rotmod['amplitude_linear'] = 10**(-df_rotmod.mean_amplitude/2.5)
df_rotmod['amplitude_linear'].hist(bins=500)
plt.xlim(0.9, 1.01)
plt.figure(figsize=(5,5))
plt.plot(df_rotmod.best_rotation_period, df_rotmod.amplitude_linear, '.', alpha=0.01)
plt.xlim(0, 60)
plt.ylim(0.9, 1.04)
plt.xlabel('$P_{\mathrm{rot}}$ [days]')
plt.text(1, 0.92, ' Rapidly rotating\n spot dominated')
plt.text(36, 1.02, ' Slowly rotating\n facular dominated')
plt.ylabel('Flux decrement $(f_{\mathrm{spot, min}})$ ');
Explanation: The lines track decently well. There's some scatter! Probably in part due to non-sinusoidal behavior.
Let's convert the mean magnitude amplitude to an unspotted-to-spotted flux ratio:
End of explanation
from astropy.table import Table
k2_fun = Table.read('../../K2-metadata/metadata/k2_dr2_1arcsec.fits', format='fits')
len(k2_fun), len(k2_fun.columns)
Explanation: Promising!
Let's read in the Kepler data and cross-match! This cross-match with Gaia and K2 data comes from Meg Bedell.
End of explanation
col_subset = ['source_id', 'epic_number', 'tm_name', 'k2_campaign_str']
k2_df = k2_fun[col_subset].to_pandas()
Explanation: We only want a few of the 95 columns, so let's select a subset.
End of explanation
def clean_to_pandas(df):
'''Cleans a dataframe converted with the to_pandas method'''
for col in df.columns:
        if type(df[col][0]) == bytes:
df[col] = df[col].str.decode('utf-8')
return df
k2_df = clean_to_pandas(k2_df)
df_rotmod.columns
keep_cols = ['source_id', 'num_segments', 'best_rotation_period', 'amplitude_linear']
Explanation: The to_pandas() method returns byte strings. Arg! We'll have to clean it. Here is a reusable piece of code:
End of explanation
k2_df.head()
df_rotmod[keep_cols].head()
Explanation: We can merge (e.g. SQL join) these two dataframes on the source_id key.
End of explanation
df_comb = pd.merge(k2_df, df_rotmod[keep_cols], how='inner', on='source_id')
df_comb.head()
df_comb.shape
Explanation: We'll only keep columns that are in both catalogs.
End of explanation
multiplicity_count = df_comb.groupby('epic_number').\
source_id.count().to_frame().\
rename(columns={'source_id':'multiplicity'})
df = pd.merge(df_comb, multiplicity_count, left_on='epic_number', right_index=True)
df.head(20)
Explanation: Only 524 sources appear in both catalogs! Boo! Well, better than nothing!
It's actually even fewer K2 targets, since some targets are single in K2 but have two or more matches in Gaia. These could be background stars or bona-fide binaries. Let's flag them.
End of explanation
df_single = df[df.multiplicity == 1]
df_single.shape
Explanation: Let's cull the list and just use the "single" stars, which is really the sources for which Gaia did not identify more than one target within 1 arcsecond.
End of explanation
plt.figure(figsize=(5,5))
plt.plot(df_single.best_rotation_period, df_single.amplitude_linear, '.', alpha=0.1)
plt.xlim(0, 60)
plt.ylim(0.9, 1.04)
plt.xlabel('$P_{\mathrm{rot}}$ [days]')
plt.text(1, 0.92, ' Rapidly rotating\n spot dominated')
plt.text(36, 1.02, ' Slowly rotating\n facular dominated')
plt.ylabel('Flux decrement $(f_{\mathrm{spot, min}})$ ')
plt.title('K2 x Gaia x rotational modulation');
Explanation: A mere 224 sources! Boo hoo!
End of explanation
df_single.sort_values('amplitude_linear', ascending=True).head(25).style.format({'source_id':"{:.0f}",
'epic_number':"{:.0f}"})
df_single.to_csv('../data/analysis/k2_gaia_rotmod_single.csv', index=False)
Explanation: The points look drawn from their parent population.
End of explanation |
10,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's pull it all together to do something cool.
Let's reuse a lot of our code to make a movie of our travel around San Francisco.
We'll first select a bunch of recent scenes, activate, and download them.
After that we'll create a mosaic, a path, and trace the path through the mosaic.
We'll use the path to crop subregions, save them as images, and create a video.
First step is to trace our AOI and a path through it.
Step1: Query the API
Now we'll save the geometry for our AOI and the path.
We'll also filter and cleanup our data just like before.
Step2: Just like before we clean up our data and distill it down to just the scenes we want.
Step3: To make sure we are good we'll visually inspect the scenes in our slippy map.
Step12: This is from the previous notebook. We are just activating and downloading scenes.
Step13: Perform the actual activation ... go get coffee
Step14: Download the scenes
Step15: Now, just like before, we will mosaic those scenes.
It is easier to call out using subprocess and use the command line util.
Just iterate through the files and drop them into a single file sf_mosaic.tif
Step16: Let's take a look at what we got
Step17: Now we are going to write a quick crop function.
This function takes in a scene, a center position, and the width and height of a window.
We'll use numpy slice notation to make the crop.
Let's pick a spot and see what we get.
Step19: Now to figure out how our lat/long values map to pixels.
The next thing we need is a way to map from a lat and long in our slippy map to the pixel position in our image.
We'll use what we know about the lat/long of the corners of our image to do that.
We'll ask GDAL to tell us the extents of our scene and the geotransform.
We'll then apply the GeoTransform from GDAL to the coordinates that are the extents of our scene.
Now we have the corners of our scene in Lat/Long
Step20: Here we'll call the functions we wrote.
First we open the scene and get the width and height.
Then from the geotransform we'll reproject those points to lat and long.
Step21: Now we'll do a bit of a hack.
That bit above is precise but complex, so we are going to make everything easier to think about.
We are going to linearize our scene, which isn't perfect, but good enough for our application.
What this function does is take in a given lat,long, the size of the image, and the extents as lat,lon coordinates.
For a given lat/long we map its position within those extents linearly to x and y pixel values and return the results.
Now we can ask, for a given lat,long pair what is the corresponding pixel.
Step22: Let's check our work
First we'll create a draw point function that just puts a red dot at given pixel.
We'll get our scene, and map all of the lat/long points in our path to pixel values.
Finally we'll load our image, plot the points and show our results
Step23: Now things get interesting....
Our path is just a few waypoints, but to make a video we need just about every point between our waypoints.
To get all of the points between our waypoints we'll have to write a little interpolation script.
Interpolation is just a fancy word for nicely spaced points between our waypoints; we'll call the spacing between each point our "velocity."
If we were really slick we could define a heading vector and build a spline so the camera faces the direction of heading. Our approach is fine as the top of the frame is always North, which makes reckoning easy.
Once we have our interpolation function all we need to do is to crop our large mosaic at each point in our interpolation point list and save it in a sequential file.
Step24: Before we generate our video frames, let's check our work
We'll load our image.
Build the interpolated waypoints list.
Draw the points on the image using our draw_point method.
Plot the results
Step25: Now let's re-load the image and run the scene maker.
Step26: Finally, let's make a movie.
Our friend AVConv, which is like ffmpeg, is a handy command line util for transcoding video.
AVConv can also convert a series of images into a video and vice versa.
We'll set up our command and use subprocess to make the call. | Python Code:
# Basemap Mosaic (v1 API)
mosaicsSeries = 'global_quarterly_2017q1_mosaic'
# Planet tile server base URL (Planet Explorer Mosaics Tiles)
mosaicsTilesURL_base = 'https://tiles0.planet.com/experimental/mosaics/planet-tiles/' + mosaicsSeries + '/gmap/{z}/{x}/{y}.png'
# Planet tile server url
mosaicsTilesURL = mosaicsTilesURL_base + '?api_key=' + api_keys["PLANET_API_KEY"]
# Map Settings
# Define colors
colors = {'blue': "#009da5"}
# Define initial map center lat/long
center = [37.774929,-122.419416]
# Define initial map zoom level
zoom = 11
# Set Map Tiles URL
planetMapTiles = TileLayer(url= mosaicsTilesURL)
# Create the map
m = Map(
center=center,
zoom=zoom,
default_tiles = planetMapTiles # Uncomment to use Planet.com basemap
)
# Define the draw tool type options
polygon = {'shapeOptions': {'color': colors['blue']}}
rectangle = {'shapeOptions': {'color': colors['blue']}}
# Create the draw controls
# @see https://github.com/ellisonbg/ipyleaflet/blob/master/ipyleaflet/leaflet.py#L293
dc = DrawControl(
polygon = polygon,
rectangle = rectangle
)
# Initialize an action counter variable
actionCount = 0
AOIs = {}
# Register the draw controls handler
def handle_draw(self, action, geo_json):
# Increment the action counter
global actionCount
actionCount += 1
# Remove the `style` property from the GeoJSON
geo_json['properties'] = {}
# Convert geo_json output to a string and prettify (indent & replace ' with ")
geojsonStr = json.dumps(geo_json, indent=2).replace("'", '"')
AOIs[actionCount] = json.loads(geojsonStr)
# Attach the draw handler to the draw controls `on_draw` event
dc.on_draw(handle_draw)
m.add_control(dc)
m
Explanation: Let's pull it all together to do something cool.
Let's reuse a lot of our code to make a movie of our travel around San Francisco.
We'll first select a bunch of recent scenes, activate, and download them.
After that we'll create a mosaic, a path, and trace the path through the moasic.
We'll use the path to crop subregions, save them as images, and create a video.
First step is to trace our AOI and a path through it.
End of explanation
print AOIs
areaAOI = AOIs[1]["geometry"]
pathAOI = AOIs[2]["geometry"]
aoi_file ="san_francisco.geojson"
with open(aoi_file,"w") as f:
f.write(json.dumps(areaAOI))
# build a query using the AOI and
# a cloud_cover filter that excludes 'cloud free' scenes
old = datetime.datetime(year=2017,month=1,day=1)
new = datetime.datetime(year=2017,month=8,day=10)
query = filters.and_filter(
filters.geom_filter(areaAOI),
filters.range_filter('cloud_cover', lt=0.05),
filters.date_range('acquired', gt=old),
filters.date_range('acquired', lt=new)
)
# build a request for only PlanetScope imagery
request = filters.build_search_request(
query, item_types=['PSScene3Band']
)
# if you don't have an API key configured, this will raise an exception
result = client.quick_search(request)
scenes = []
planet_map = {}
for item in result.items_iter(limit=500):
planet_map[item['id']]=item
props = item['properties']
props["id"] = item['id']
props["geometry"] = item["geometry"]
props["thumbnail"] = item["_links"]["thumbnail"]
scenes.append(props)
scenes = pd.DataFrame(data=scenes)
display(scenes)
print len(scenes)
Explanation: Query the API
Now we'll save the geometry for our AOI and the path.
We'll also filter and cleanup our data just like before.
End of explanation
# now let's clean up the datetime stuff
# make a shapely shape from our aoi
sanfran = shape(areaAOI)
footprints = []
overlaps = []
# go through the geometry from our api call, convert to a shape and calculate overlap area.
# also save the shape for safe keeping
for footprint in scenes["geometry"].tolist():
s = shape(footprint)
footprints.append(s)
overlap = 100.0*(sanfran.intersection(s).area / sanfran.area)
overlaps.append(overlap)
# take our lists and add them back to our dataframe
scenes['overlap'] = pd.Series(overlaps, index=scenes.index)
scenes['footprint'] = pd.Series(footprints, index=scenes.index)
# now make sure pandas knows about our date/time columns.
scenes["acquired"] = pd.to_datetime(scenes["acquired"])
scenes["published"] = pd.to_datetime(scenes["published"])
scenes["updated"] = pd.to_datetime(scenes["updated"])
scenes.head()
# Now let's get it down to just good, recent, clear scenes
clear = scenes['cloud_cover']<0.1
good = scenes['quality_category']=="standard"
recent = scenes["acquired"] > datetime.date(year=2017,month=5,day=1)
partial_coverage = scenes["overlap"] > 60
good_scenes = scenes[(good&clear&recent&partial_coverage)]
print good_scenes
Explanation: Just like before we clean up our data and distill it down to just the scenes we want.
End of explanation
# first create a list of colors
colors = ["#ff0000","#00ff00","#0000ff","#ffff00","#ff00ff","#00ffff","#ff0000","#00ff00","#0000ff","#ffff00","#ff00ff","#00ffff"]
# grab our scenes from the geometry/footprint geojson
# Chane this number as needed
footprints = good_scenes[0:10]["geometry"].tolist()
# for each footprint/color combo
for footprint,color in zip(footprints,colors):
# create the leaflet object
feat = {'geometry':footprint,"properties":{
'style':{'color': color,'fillColor': color,'fillOpacity': 0.2,'weight': 1}},
'type':u"Feature"}
# convert to geojson
gjson = GeoJSON(data=feat)
# add it our map
m.add_layer(gjson)
# now we will draw our original AOI on top
feat = {'geometry':areaAOI,"properties":{
'style':{'color': "#FFFFFF",'fillColor': "#FFFFFF",'fillOpacity': 0.5,'weight': 1}},
'type':u"Feature"}
gjson = GeoJSON(data=feat)
m.add_layer(gjson)
m
Explanation: To make sure we are good we'll visually inspect the scenes in our slippy map.
End of explanation
def get_products(client, scene_id, asset_type='PSScene3Band'):
    """
    Ask the client to return the available products for a
    given scene and asset type. Returns a list of product
    strings
    """
    out = client.get_assets_by_id(asset_type,scene_id)
    temp = out.get()
    return temp.keys()
def activate_product(client, scene_id, asset_type="PSScene3Band",product="analytic"):
    """
    Activate a product given a scene, an asset type, and a product.
    On success return the return value of the API call and an activation object
    """
    temp = client.get_assets_by_id(asset_type,scene_id)
    products = temp.get()
    if( product in products.keys() ):
        return client.activate(products[product]),products[product]
    else:
        return None
def download_and_save(client,product):
    """
    Given a client and a product activation object download the asset.
    This will save the tiff file in the local directory and return its
    file name.
    """
    out = client.download(product)
    fp = out.get_body()
    fp.write()
    return fp.name
def scenes_are_active(scene_list):
    """
    Check if all of the resources in a given list of
    scene activation objects is ready for downloading.
    """
    # NOTE: returns True immediately, so the status check below is never reached.
    return True
    retVal = True
    for scene in scene_list:
        if scene["status"] != "active":
            print "{} is not ready.".format(scene)
            return False
    return True
def load_image4(filename):
    """Return a 4D (r, g, b, nir) numpy array with the data in the specified TIFF filename."""
    path = os.path.abspath(os.path.join('./', filename))
    if os.path.exists(path):
        with rasterio.open(path) as src:
            b, g, r, nir = src.read()
            return np.dstack([r, g, b, nir])
def load_image3(filename):
    """Return a 3D (r, g, b) numpy array with the data in the specified TIFF filename."""
    path = os.path.abspath(os.path.join('./', filename))
    if os.path.exists(path):
        with rasterio.open(path) as src:
            b,g,r,mask = src.read()
            return np.dstack([b, g, r])
def get_mask(filename):
    """Return a 1D mask numpy array with the data in the specified TIFF filename."""
    path = os.path.abspath(os.path.join('./', filename))
    if os.path.exists(path):
        with rasterio.open(path) as src:
            b,g,r,mask = src.read()
            return np.dstack([mask])
def rgbir_to_rgb(img_4band):
    """Convert an RGBIR image to RGB"""
    return img_4band[:,:,:3]
Explanation: This is from the previous notebook. We are just activating and downloading scenes.
End of explanation
to_get = good_scenes["id"][0:10].tolist()
to_get = sorted(to_get)
activated = []
# for each scene to get
for scene in to_get:
# get the product
product_types = get_products(client,scene)
for p in product_types:
# if there is a visual productfor p in labels:
if p == "visual": # p == "basic_analytic_dn"
print "Activating {0} for scene {1}".format(p,scene)
# activate the product
_,product = activate_product(client,scene,product=p)
activated.append(product)
Explanation: Perform the actual activation ... go get coffee
End of explanation
tiff_files = []
asset_type = "_3B_Visual"
# check if our scenes have been activated
if scenes_are_active(activated):
for to_download,name in zip(activated,to_get):
# create the product name
name = name + asset_type + ".tif"
# if the product exists locally
if( os.path.isfile(name) ):
# do nothing
print "We have scene {0} already, skipping...".format(name)
tiff_files.append(name)
elif to_download["status"] == "active":
# otherwise download the product
print "Downloading {0}....".format(name)
fname = download_and_save(client,to_download)
tiff_files.append(fname)
print "Download done."
else:
print "Could not download, still activating"
else:
print "Scenes aren't ready yet"
print tiff_files
Explanation: Download the scenes
End of explanation
subprocess.call(["rm","sf_mosaic.tif"])
commands = ["gdalwarp", # the gdalwarp command and its options
            "-t_srs","EPSG:3857",
            "-cutline",aoi_file,
            "-crop_to_cutline",
            "-tap",
            "-tr", "3", "3",
            "-overwrite"]
output_mosaic = "_mosaic.tif"
for tiff in tiff_files:
commands.append(tiff)
commands.append(output_mosaic)
print " ".join(commands)
subprocess.call(commands)
Explanation: Now, just like before, we will mosaic those scenes.
It is easier to call out using subprocess and use the command line util.
Just iterate through the files and drop them into a single file sf_mosaic.tif
End of explanation
merged = load_image3(output_mosaic)
plt.figure(0,figsize=(18,18))
plt.imshow(merged)
plt.title("merged")
Explanation: Let's take a look at what we got
End of explanation
def crop_to_area(scene,x_c,y_c,w,h):
tlx = x_c-(w/2)
tly = y_c-(h/2)
brx = x_c+(w/2)
bry = y_c+(h/2)
return scene[tly:bry,tlx:brx,:]
plt.figure(0,figsize=(3,4))
plt.imshow(crop_to_area(merged,3000,3000,640,480))
plt.title("merged")
Explanation: Now we are going to write a quick crop function.
This function takes in a scene, a center position, and the width and height of a window.
We'll use numpy slice notation to make the crop.
Let's pick a spot and see what we get.
End of explanation
# Liberally borrowed from this example
# https://gis.stackexchange.com/questions/57834/how-to-get-raster-corner-coordinates-using-python-gdal-bindings
def GetExtent(gt,cols,rows):
Get the list of corners in our output image in the format
[[x,y],[x,y],[x,y]]
ext=[]
# for the corners of the image
xarr=[0,cols]
yarr=[0,rows]
for px in xarr:
for py in yarr:
# apply the geo coordiante transform
# using the affine transform we got from GDAL
x=gt[0]+(px*gt[1])+(py*gt[2])
y=gt[3]+(px*gt[4])+(py*gt[5])
ext.append([x,y])
yarr.reverse()
return ext
def ReprojectCoords(coords,src_srs,tgt_srs):
trans_coords=[]
# create a transform object from the source and target ref system
transform = osr.CoordinateTransformation( src_srs, tgt_srs)
for x,y in coords:
# transform the points
x,y,z = transform.TransformPoint(x,y)
# add it to the list.
trans_coords.append([x,y])
return trans_coords
Explanation: Now to figure out how our lat/long values map to pixels.
The next thing we need is a way to map from a lat and long in our slippy map to the pixel position in our image.
We'll use what we know about the lat/long of the corners of our image to do that.
We'll ask GDAL to tell us the extents of our scene and the geotransform.
We'll then apply the GeoTransform from GDAL to the coordinates that are the extents of our scene.
Now we have the corners of our scene in Lat/Long
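(Added note: concretely, GDAL's affine geotransform gt maps a pixel $(px, py)$ to projected coordinates via $x = gt_0 + px\,gt_1 + py\,gt_2$ and $y = gt_3 + px\,gt_4 + py\,gt_5$, which is exactly what GetExtent applies to the four corner pixels before ReprojectCoords converts them to lat/long.)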
End of explanation
# TLDR: pixels => UTM coordiantes => Lat Long
raster=output_mosaic
# Load the GDAL File
ds=gdal.Open(raster)
# get the geotransform
gt=ds.GetGeoTransform()
# get the width and height of our image
cols = ds.RasterXSize
rows = ds.RasterYSize
# Generate the coordinates of our image in utm
ext=GetExtent(gt,cols,rows)
# get the spatial referencec object
src_srs=osr.SpatialReference()
# get the data that will allow us to move from UTM to Lat Lon.
src_srs.ImportFromWkt(ds.GetProjection())
tgt_srs = src_srs.CloneGeogCS()
extents = ReprojectCoords(ext,src_srs,tgt_srs)
print extents
Explanation: Here we'll call the functions we wrote.
First we open the scene and get the width and height.
Then from the geotransform we'll reproject those points to lat and long.
End of explanation
def poor_mans_lat_lon_2_pix(lon,lat,w,h,extents):
# split up our lat and longs
lats = [e[1] for e in extents]
lons = [e[0] for e in extents]
# calculate our scene extents max and min
lat_max = np.max(lats)
lat_min = np.min(lats)
lon_max = np.max(lons)
lon_min = np.min(lons)
# calculate the difference between our start point
# and our minimum
lat_diff = lat-lat_min
lon_diff = lon-lon_min
# create the linearization
lat_r = float(h)/(lat_max-lat_min)
lon_r = float(w)/(lon_max-lon_min)
# generate the results.
return int(lat_r*lat_diff),int(lon_r*lon_diff)
Explanation: Now we'll do a bit of a hack.
That bit above is precise but complex, so we are going to make everything easier to think about.
We are going to linearize our scene, which isn't perfect, but good enough for our application.
What this function does is take in a given lat,long, the size of the image, and the extents as lat,lon coordinates.
For a given lat/long we map its position within those extents linearly to x and y pixel values and return the results.
Now we can ask, for a given lat,long pair what is the corresponding pixel.
End of explanation
def draw_point(x,y,img,t=40):
h,w,d = img.shape
y = h-y
img[(y-t):(y+t),(x-t):(x+t),:] = [255,0,0]
h,w,c = merged.shape
waypoints = [poor_mans_lat_lon_2_pix(point[0],point[1],w,h,extents) for point in pathAOI["coordinates"]]
print waypoints
merged = load_image3(output_mosaic)
[draw_point(pt[1],pt[0],merged) for pt in waypoints]
plt.figure(0,figsize=(18,18))
plt.imshow(merged)
plt.title("merged")
Explanation: Let's check our work
First we'll create a draw point function that just puts a red dot at given pixel.
We'll get our scene, and map all of the lat/long points in our path to pixel values.
Finally we'll load our image, plot the points and show our results
End of explanation
def interpolate_waypoints(waypoints,velocity=10.0):
retVal = []
last_pt = waypoints[0]
# for each point in our waypoints except the first
for next_pt in waypoints[1:]:
# calculate distance between the points
distance = np.sqrt((last_pt[0]-next_pt[0])**2+(last_pt[1]-next_pt[1])**2)
# use our velocity to calculate the number steps.
steps = np.ceil(distance/velocity)
# linearly space points between the two points on our line
xs = np.array(np.linspace(last_pt[0],next_pt[0],steps),dtype='int64')
ys = np.array(np.linspace(last_pt[1],next_pt[1],steps),dtype='int64')
# zip the points together
retVal += zip(xs,ys)
# move to the next point
last_pt = next_pt
return retVal
def build_scenes(src,waypoints,window=[640,480],path="./movie/"):
count = 0
# Use opencv to change the color space of our image.
src = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)
# define half our sampling window.
w2 = window[0]/2
h2 = window[1]/2
# for our source image get the width and height
h,w,d = src.shape
for pt in waypoints:
# for each point crop the area out.
# the y value of our scene is upside down.
temp = crop_to_area(src,pt[1],h-pt[0],window[0],window[1])
# If we happen to hit the border of the scene, just skip
if temp.shape[0]*temp.shape[1]== 0:
# if we have an issue, just keep plugging along
continue
# Resample the image a bit, this just makes things look nice.
temp = cv2.resize(temp, (int(window[0]*0.75), int(window[1]*.75)))
# create a file name
fname = os.path.abspath(path+"img{num:06d}.png".format(num=count))
# Save it
cv2.imwrite(fname,temp)
count += 1
Explanation: Now things get interesting....
Our path is just a few waypoints, but to make a video we need just about every point between our waypoints.
To get all of the points between our waypoints we'll have to write a little interpolation script.
Interpolation is just a fancy word for nicely spaced points between our waypoints; we'll call the spacing between each point our "velocity."
If we were really slick we could define a heading vector and build a spline so the camera faces the direction of heading. Our approach is fine as the top of the frame is always North, which makes reckoning easy.
Once we have our interpolation function all we need to do is to crop our large mosaic at each point in our interpolation point list and save it in a sequential file.
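As a quick illustration (an added example, not from the original): two waypoints 100 pixels apart with the default velocity of 10 give ten roughly evenly spaced points:
```python
interpolate_waypoints([(0, 0), (0, 100)], velocity=10.0)[:3]  # [(0, 0), (0, 11), (0, 22)]
```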
End of explanation
# load the image
merged = load_image3(output_mosaic)
# interpolate the waypoints
interp = interpolate_waypoints(waypoints, velocity=5)
# draw them on our scene
[draw_point(pt[1],pt[0],merged) for pt in interp]
# display the scene
plt.figure(0,figsize=(18,18))
plt.imshow(merged)
plt.title("merged")
Explanation: Before we generate our video frames, let's check our work
We'll load our image.
Build the interpolated waypoints list.
Draw the points on the image using our draw_point method.
Plot the results
End of explanation
os.system("rm ./movie/*.png")
merged = load_image3(output_mosaic)
build_scenes(merged,interp,window=(640,480))
Explanation: Now let's re-load the image and run the scene maker.
End of explanation
# avconv -framerate 30 -f image2 -i ./movie/img%06d.png -b 65536k out.mpg;
#os.system("rm ./movie/*.png")
framerate = 30
output = "out.mpg"
command = ["avconv","-framerate", str(framerate), "-f", "image2", "-i", "./movie/img%06d.png", "-b", "65536k", output]
os.system(" ".join(command))
Explanation: Finally, let's make a movie.
Our friend AVConv, which is like ffmpeg, is a handy command line util for transcoding video.
AVConv can also convert a series of images into a video and vice versa.
We'll set up our command and use subprocess to make the call.
End of explanation |
10,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convert our txt files to csv format (internally)
You will need to create a .\Groundwater-Composition-csv folder to store results in, which is used later on.
Step1: Single File
Step2: Loading all the files
Now we can process all the files correctly. The following will look for our pre-processed files in .\Groundwater-Composition-csv. | Python Code:
inFolder = r'..\Data\Groundwater-Composition2'
#os.listdir(inFolder)
csvFolder = r'..\Data\Groundwater-Composition-csv'
for file in os.listdir(inFolder):
current_file = inFolder + '\\' + file
outFile = csvFolder + '\\' + file
with open(current_file, 'r') as in_file:
lines = in_file.read().splitlines()
stripped = [line.replace("\t",',').split(',') for line in lines]
with open(outFile, 'w') as out_file:
writer = csv.writer(out_file)
writer.writerows(stripped)
Explanation: Convert our txt files to csv format (internally)
You will need to create a .\Groundwater-Composition-csv folder to store results in, which is used later on.
End of explanation
inFile = r'..\Data\Groundwater-Composition-csv\B25A0857.txt'
header_names = "NITG-nr","Monster datum","Monster-nr","Monster apparatuur","Mengmonster","Bovenkant monster (cm tov MV)","Onderkant monster (cm tov MV)","Analyse datum","CO2 (mg/l)","CO3-- (mg/l)","Ca (mg/l)","Cl- (mg/l)","EC (uS/cm)","Fe (mg/l)","HCO3 (mg/l)","KLEUR (mgPt/l)","KMNO4V-O (mg/l)","Mg (mg/l)","Mn (mg/l)","NH4 (mg/l)","NH4-ORG (mg/l)","NO2 (mg/l)","NO3 (mg/l)","Na (mg/l)","NaHCO3 (mg/l)","SO4 (mg/l)","SiO2 (mg/l)","T-PO4 (mg/l)","TEMP-V (C)","TIJDH (mmol/l)","TOTH (mmol/l)","pH (-)"
df = pd.read_csv(inFile, skiprows=6, parse_dates=[1], sep=',', header=None, names=header_names)
df
inFile = r'..\Data\Groundwater-Composition\B25A0857.txt'
header_names = 'NITG-nr', 'X-coord', 'Y-coord', 'Coordinaat systeem', 'Kaartblad', 'Bepaling locatie', 'Maaiveldhoogte (m tov NAP)', 'Bepaling maaiveldhoogte', 'OLGA-nr', 'RIVM-nr', 'Aantal analyses', 'Meetnet', 'Indeling'
df = pd.read_csv(
inFile,
skiprows=2,
parse_dates=[1],
nrows=1,
delim_whitespace=True,
header=None,
names=header_names
)
df
Explanation: Single File
End of explanation
data_header_names = "NITG-nr","Monster datum","Monster-nr","Monster apparatuur","Mengmonster","Bovenkant monster (cm tov MV)","Onderkant monster (cm tov MV)","Analyse datum","CO2 (mg/l)","CO3-- (mg/l)","Ca (mg/l)","Cl- (mg/l)","EC (uS/cm)","Fe (mg/l)","HCO3 (mg/l)","KLEUR (mgPt/l)","KMNO4V-O (mg/l)","Mg (mg/l)","Mn (mg/l)","NH4 (mg/l)","NH4-ORG (mg/l)","NO2 (mg/l)","NO3 (mg/l)","Na (mg/l)","NaHCO3 (mg/l)","SO4 (mg/l)","SiO2 (mg/l)","T-PO4 (mg/l)","TEMP-V (C)","TIJDH (mmol/l)","TOTH (mmol/l)","pH (-)"
header_header_names = 'NITG-nr', 'X-coord', 'Y-coord', 'Coordinaat systeem', 'Kaartblad', 'Bepaling locatie', 'Maaiveldhoogte (m tov NAP)', 'Bepaling maaiveldhoogte', 'OLGA-nr', 'RIVM-nr', 'Aantal analyses', 'Meetnet', 'Indeling'
def pre_read(file):
i=0
loc=0
metarow,skiprows = 0, 0
with open(file) as f:
for line in f:
if line[:7] == 'LOCATIE':
loc=loc+1
if loc==1:
metarow = i+1
if loc==2:
skiprows = i+1
i+=1
return metarow,skiprows
df_list = []
for file in os.listdir(inFolder):
current_file = csvFolder + '\\' + file
# print(current_file)
metarow,skip = pre_read(current_file)
df = pd.read_csv(
current_file,
skiprows=skip,
parse_dates=[1],
sep=',',
header=None,
names=data_header_names
)
df_list.append(df)
# print(df_list)
all_dfs = pd.concat(df_list)
all_dfs = all_dfs[all_dfs['NITG-nr'].str.contains("NITG-nr") == False]
all_dfs = all_dfs[all_dfs['NITG-nr'].str.contains("LOCATIE") == False]
all_dfs = all_dfs[all_dfs['NITG-nr'].str.contains("KWALITEIT") == False]
all_dfs = all_dfs[all_dfs['Monster apparatuur'].str.contains("Rijksdriehoeksmeting") == False]
all_dfs
Explanation: Loading all the files
Now we can process all the files correctly. The following will look for our pre-processed files in .\Groundwater-Composition-csv.
End of explanation |
10,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convergence of Green function calculation
We check the convergence with $N_\text{kpt}$ for the calculation of the vacancy Green function for FCC and HCP structures. In particular, we will look at
Step1: Create an FCC and HCP lattice.
Step2: We will put together our vectors for consideration
Step3: We use $N_\text{max}$ parameter, which controls the automated generation of k-points to iterate through successively denser k-point meshes.
Step4: First, look at the behavior of the error with $p_\text{max}$(error) parameter. The k-point integration error scales as $N_\text{kpt}^{5/3}$, and we see the $p_\text{max}$ error is approximately $10^{-8}$.
Step5: Plot the error in the Green function for FCC (at 0, maximum R, and difference between those GF). We extract the infinite value by fitting the error to $N_{\mathrm{kpt}}^{-5/3}$, which empirically matches the numerical error.
Step6: Plot the error in Green function for HCP. | Python Code:
import sys
sys.path.extend(['../'])
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import onsager.crystal as crystal
import onsager.GFcalc as GFcalc
Explanation: Convergence of Green function calculation
We check the convergence with $N_\text{kpt}$ for the calculation of the vacancy Green function for FCC and HCP structures. In particular, we will look at:
The $\mathbf{R}=0$ value,
The largest $\mathbf{R}$ value in the calculation of a first neighbor thermodynamic interaction range,
The difference of the Green function value for (1) and (2),
with increasing k-point density.
End of explanation
a0 = 1.
FCC, HCP = crystal.Crystal.FCC(a0, "fcc"), crystal.Crystal.HCP(a0, chemistry="hcp")
print(FCC)
print(HCP)
Explanation: Create an FCC and HCP lattice.
End of explanation
FCCR = np.array([0,2.,2.])
HCPR1, HCPR2 = np.array([4.,0.,0.]), np.array([2.,0.,2*np.sqrt(8/3)])
FCCsite, FCCjn = FCC.sitelist(0), FCC.jumpnetwork(0, 0.75)
HCPsite, HCPjn = HCP.sitelist(0), HCP.jumpnetwork(0, 1.01)
Explanation: We will put together our vectors for consideration:
Maximum $\mathbf{R}$ for FCC = (400), or $\mathbf{x}=2\hat j+2\hat k$.
Maximum $\mathbf{R}$ for HCP = (440), or $\mathbf{x}=4\hat i$, and (222), or $\mathbf{x}=2\hat i + 2\sqrt{8/3}\hat k$.
and our sitelists and jumpnetworks.
End of explanation
FCCdata = {pmaxerror:[] for pmaxerror in range(-16,0)}
print('kpt\tNkpt\tG(0)\tG(R)\tG diff')
for Nmax in range(1,13):
GFFCC = GFcalc.GFCrystalcalc(FCC, 0, FCCsite, FCCjn, Nmax=Nmax)
Nreduce, Nkpt, kpt = GFFCC.Nkpt, np.prod(GFFCC.kptgrid), GFFCC.kptgrid
for pmax in sorted(FCCdata.keys(), reverse=True):
GFFCC.SetRates(np.ones(1), np.zeros(1), np.ones(1)/12, np.zeros(1), 10**(pmax))
g0,gR = GFFCC(0,0,np.zeros(3)), GFFCC(0,0,FCCR)
FCCdata[pmax].append((Nkpt, g0, gR))
Nkpt,g0,gR = FCCdata[-8][-1] # print the 10^-8 values
print("{k[0]}x{k[1]}x{k[2]}\t".format(k=kpt) +
" {:5d} ({})\t{:.12f}\t{:.12f}\t{:.12f}".format(Nkpt, Nreduce,
g0, gR,g0-gR))
HCPdata = []
print('kpt\tNkpt\tG(0)\tG(R1)\tG(R2)\tG(R1)-G(0)\tG(R2)-G0')
for Nmax in range(1,13):
GFHCP = GFcalc.GFCrystalcalc(HCP, 0, HCPsite, HCPjn, Nmax=Nmax)
GFHCP.SetRates(np.ones(1), np.zeros(1), np.ones(2)/12, np.zeros(2), 1e-8)
g0,gR1,gR2 = GFHCP(0,0,np.zeros(3)), GFHCP(0,0,HCPR1), GFHCP(0,0,HCPR2)
Nreduce, Nkpt, kpt = GFHCP.Nkpt, np.prod(GFHCP.kptgrid), GFHCP.kptgrid
HCPdata.append((Nkpt, g0, gR1, gR2))
print("{k[0]}x{k[1]}x{k[2]}\t".format(k=kpt) +
"{:5d} ({})\t{:.12f}\t{:.12f}\t{:.12f}\t{:.12f}\t{:.12f}".format(Nkpt, Nreduce,
g0, gR1, gR2,
g0-gR1, g0-gR2))
Explanation: We use $N_\text{max}$ parameter, which controls the automated generation of k-points to iterate through successively denser k-point meshes.
End of explanation
print('pmax\tGinf\talpha (Nkpt^-5/3 prefactor)')
Ginflist=[]
for pmax in sorted(FCCdata.keys(), reverse=True):
data = FCCdata[pmax]
Nk53 = np.array([N**(5/3) for (N,g0,gR) in data])
gval = np.array([g0 for (N,g0,gR) in data])
N10,N5 = np.average(Nk53*Nk53),np.average(Nk53)
g10,g5 = np.average(gval*Nk53*Nk53),np.average(gval*Nk53)
denom = N10-N5**2
Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom
Ginflist.append(Ginf)
print('{}\t{}\t{}'.format(pmax, Ginf, alpha))
Explanation: First, look at the behavior of the error with the $p_\text{max}$(error) parameter. The k-point integration error scales as $N_\text{kpt}^{-5/3}$, and we see the $p_\text{max}$ error is approximately $10^{-8}$.
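(Added note: the closed-form expressions for Ginf and alpha above are just the ordinary least-squares fit of $g\,N_\text{kpt}^{5/3}$ against $N_\text{kpt}^{5/3}$, whose slope is the extrapolated $G^\infty$ under the model $g \approx G^\infty + \alpha N_\text{kpt}^{-5/3}$. An equivalent, more compact way to get the slope:)
```python
# same extrapolation via numpy; the slope is G_inf (Nk53 and gval from the loop above)
Ginf_fit, intercept = np.polyfit(Nk53, gval * Nk53, 1)
```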
End of explanation
# plot the errors from pmax = 10^-8
data = FCCdata[-8]
Nk = np.array([N for (N,g0,gR) in data])
g0val = np.array([g0 for (N,g0,gR) in data])
gRval = np.array([gR for (N,g0,gR) in data])
gplot = []
Nk53 = np.array([N**(5/3) for (N,g0,gR) in data])
for gdata, start in zip((g0val, gRval, g0val-gRval), (0,1,2)):
N10,N5 = np.average(Nk53[start:]*Nk53[start:]),np.average(Nk53[start:])
denom = N10-N5**2
g10 = np.average(gdata[start:]*Nk53[start:]*Nk53[start:])
g5 = np.average(gdata[start:]*Nk53[start:])
Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom
gplot.append(np.abs(gdata-Ginf))
fig, ax1 = plt.subplots()
ax1.plot(Nk, gplot[0], 'k', label='$G(\mathbf{0})$ error $\sim N_{\mathrm{kpt}}^{-5/3}$')
ax1.plot(Nk, gplot[1], 'b', label='$G(\mathbf{R})$ error $\sim N_{\mathrm{kpt}}^{-5/3}$')
ax1.plot(Nk, gplot[2], 'b--', label='$G(\mathbf{0})-G(\mathbf{R})$ error')
ax1.set_xlim((1e2,2e5))
ax1.set_ylim((1e-11,1))
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_xlabel('$N_{\mathrm{kpt}}$', fontsize='x-large')
ax1.set_ylabel('integration error $G-G^\infty$', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.6,0.6,0.4,0.4), ncol=1,
shadow=True, frameon=True, fontsize='x-large')
ax2 = ax1.twiny()
ax2.set_xscale('log')
ax2.set_xlim(ax1.get_xlim())
ax2.set_xticks([n for n in Nk])
ax2.set_xticklabels(["${:.0f}^3$".format(n**(1/3)) for n in Nk])
ax2.set_xlabel('k-point grid', fontsize='x-large')
ax2.grid(False)
ax2.tick_params(axis='x', top='on', direction='in', length=6)
plt.show()
# plt.savefig('FCC-GFerror.pdf', transparent=True, format='pdf')
Explanation: Plot the error in the Green function for FCC (at 0, maximum R, and difference between those GF). We extract the infinite value by fitting the error to $N_{\mathrm{kpt}}^{-5/3}$, which empirically matches the numerical error.
End of explanation
# plot the errors from pmax = 10^-8
data = HCPdata
Nk = np.array([N for (N,g0,gR1,gR2) in data])
g0val = np.array([g0 for (N,g0,gR1,gR2) in data])
gR1val = np.array([gR1 for (N,g0,gR1,gR2) in data])
gR2val = np.array([gR2 for (N,g0,gR1,gR2) in data])
gplot = []
Nk53 = np.array([N**(5/3) for (N,g0,gR1,gR2) in data])
for gdata, start in zip((g0val, gR1val, gR2val, g0val-gR1val, g0val-gR2val), (3,3,3,3,3)):
N10,N5 = np.average(Nk53[start:]*Nk53[start:]),np.average(Nk53[start:])
denom = N10-N5**2
g10 = np.average(gdata[start:]*Nk53[start:]*Nk53[start:])
g5 = np.average(gdata[start:]*Nk53[start:])
Ginf,alpha = (g10-g5*N5)/denom, (g10*N5-g5*N10)/denom
gplot.append(np.abs(gdata-Ginf))
fig, ax1 = plt.subplots()
ax1.plot(Nk, gplot[0], 'k', label='$G(\mathbf{0})$ error $\sim N_{\mathrm{kpt}}^{-5/3}$')
ax1.plot(Nk, gplot[1], 'b', label='$G(\mathbf{R}_1)$ error $\sim N_{\mathrm{kpt}}^{-5/3}$')
ax1.plot(Nk, gplot[2], 'r', label='$G(\mathbf{R}_2)$ error $\sim N_{\mathrm{kpt}}^{-5/3}$')
ax1.plot(Nk, gplot[3], 'b--', label='$G(\mathbf{0})-G(\mathbf{R}_1)$ error')
ax1.plot(Nk, gplot[4], 'r--', label='$G(\mathbf{0})-G(\mathbf{R}_2)$ error')
ax1.set_xlim((1e2,2e5))
ax1.set_ylim((1e-11,1))
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_xlabel('$N_{\mathrm{kpt}}$', fontsize='x-large')
ax1.set_ylabel('integration error $G-G^\infty$', fontsize='x-large')
ax1.legend(bbox_to_anchor=(0.6,0.6,0.4,0.4), ncol=1,
shadow=True, frameon=True, fontsize='medium')
ax2 = ax1.twiny()
ax2.set_xscale('log')
ax2.set_xlim(ax1.get_xlim())
ax2.set_xticks([n for n in Nk])
# ax2.set_xticklabels(["${:.0f}$".format((n*1.875)**(1/3)) for n in Nk])
ax2.set_xticklabels(['6','10','16','20','26','30','36','40','46','50','56','60'])
ax2.set_xlabel('k-point divisions (basal)', fontsize='x-large')
ax2.grid(False)
ax2.tick_params(axis='x', top='on', direction='in', length=6)
plt.show()
# plt.savefig('HCP-GFerror.pdf', transparent=True, format='pdf')
Explanation: Plot the error in Green function for HCP.
End of explanation |
10,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pythonic APIs
Step1: Sizing with len()
Step2: Arithmetic
Step3: A simple but full-featured Pythonic class
String formatting mini-language | Python Code:
s = 'Fluent'
L = [10, 20, 30, 40, 50]
print(list(s)) # list constructor iterates over its argument
a, b, *middle, c = L # tuple unpacking iterates over right side
print((a, b, c))
for i in L:
print(i, end=' ')
Explanation: Pythonic APIs: the workshop notebook
Tutorial overview
Introduction
A simple but full-featured Pythonic class
Exercise: custom formatting and alternate constructor
A Pythonic sequence
Exercise: implementing sequence behavior
Coffee break
A Pythonic sequence (continued)
Exercise: custom formatting
Operator overloading
Exercise: implement @ for dot product
Wrap-up
What is Pythonic?
Pythonic code is concise and expressive. It leverages Python features and idioms to accomplish maximum effect with minimum effort, without being unreadable. It uses the language as it's designed to be used, so it is most readable to the fluent Pythonista.
Real example 1: the requests API
requests is a pleasant HTTP client library. It's great but it would be awesome if it was asynchronous (but could it be pleasant and asynchronous at the same time?). The examples below are from Kenneth Reitz, the author of requests (source).
Pythonic, using requests
```python
import requests
r = requests.get('https://api.github.com', auth=('user', 'pass'))
print r.status_code
print r.headers['content-type']
------
200
'application/json'
```
Unpythonic, using urllib2
```python
import urllib2
gh_url = 'https://api.github.com'
req = urllib2.Request(gh_url)
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
urllib2.install_opener(opener)
handler = urllib2.urlopen(req)
print handler.getcode()
print handler.headers.getheader('content-type')
------
200
'application/json'
```
Real example 2: classes are optional in py.test and nosetests
Features of idiomatic Python APIs
Let the user apply previous knowledge of the standard types and operations
Make it easy to leverage existing libraries
Come with “batteries included”
Use duck typing for enhanced interoperation with user-defined types
Provide ready to use objects (no instantiation needed)
Don't require subclassing for basic usage
Leverage standard language objects: containers, functions, classes, modules
Make proper use of the Data Model (i.e. special methods)
Introduction
One of the keys to consistent, Pythonic behavior in Python is understanding and leveraging the Data Model. The Python Data Model defines standard APIs which enable...
Iteration
End of explanation
len(s), len(L)
s.__len__(), L.__len__()
Explanation: Sizing with len()
End of explanation
a = 2
b = 3
a * b, a.__mul__(b)
L = [1, 2, 3]
L.append(L)
L
Explanation: Arithmetic
End of explanation
x = 2**.5
x
format(x, '.3f')
from datetime import datetime
agora = datetime.now()
print(agora)
print(format(agora, '%H:%M'))
'{1:%H}... {0:.3f}!'.format(x, agora)
Explanation: A simple but full-featured Pythonic class
String formatting mini-language
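The class itself is not included in this extract; as a stand-in, here is a minimal sketch (the name and details are illustrative assumptions, not the workshop's own code) of a class that plugs into repr() and the format mini-language shown above:
```python
class Vector2d:
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)

    def __repr__(self):
        return 'Vector2d({!r}, {!r})'.format(self.x, self.y)

    def __format__(self, fmt_spec=''):
        # delegate each component to built-in float formatting
        return '({}, {})'.format(format(self.x, fmt_spec), format(self.y, fmt_spec))

format(Vector2d(1, 2**.5), '.3f')   # -> '(1.000, 1.414)'
```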
End of explanation |
10,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas cheat sheet
<img src="https
Step1: Thus Series can have different datatypes.
Operations on series
You can add, multiply and other numerical opertions on Series just like on numpy arrays.
Step2: When labels dont match, it puts a nan. Thus when two series are added, you may or may not get the same number of elements
Step3: DataFrames
Creating dataFrames
Pandas DataFrames are built on top of Series. It looks similar to a NumPy array, but has labels for both columns and rows.
Step4: Slicing and dicing DataFrames
You can access DataFrames similar to Series and slice it similar to NumPy arrays
Access columns
DataFrameObj['column_name'] ==> returns a pandas.core.series.Series
Step5: Access rows
DataFrameobj.loc['row_label'] also returns a Series. Notice the .loc
Step6: Accessing using index number
If you don't know the labels, but know the index like in an array, use iloc and pass the index number.
Step7: Dicing DataFrames
Dicing using labels ==> use DataFrameObj.loc[[row_labels],[col_labels]]
Step8: With index number, dice using
DataFrameObj.iloc[[row_indices], [col_indices]]
Step9: Conditional selection
When running a condition on a DataFrame, you are returned a Bool dataframe.
Step10: Chaining conditions
In a Pythonic way, you can chain conditions
df[df condition][selection][selection]
Step11: Multiple conditions
You can select dataframe elements with multiple conditions. Note cannot use Python and , or. Instead use &, |
Step12: Operations on DataFrames
Adding new columns
Create new columns just like adding a kvp to a dictionary.
DataFrameObj['new_col'] = Series
Step13: Dropping rows and columns
DataFrameObj.drop(label, axis, inplace=True / False)
Row labels are axis = 0 and columns are axis = 1
Step14: Drop a row based on a condition.
Step15: DataFrame Index
So far, Car1, Car2.. is the index for rows. If you would like to set a different column as an index, use set_index. If you want to make index as a column rather, and use numerals for index, use reset_index
Set index
Step16: Note, the old index is lost.
Rest index | Python Code:
import pandas as pd
import numpy as np
#from a list
l1 = [1,2,3,4,5]
ser1 = pd.Series(data = l1) #when you dont specify labels for index, it is autogenerated
ser1
#from a numpy array
arr1 = np.array(l1)
l2 = ['a', 'b', 'c','e', 'd']
ser2 = pd.Series(data=arr1, index=l2) #indices can of any data type, here string
ser2
#from a dictionary
d1 = {'usa':1, 'india':2, 'germany':3, 'japan':'china', 'china':4}
ser3 = pd.Series(d1)
ser3
Explanation: Pandas cheat sheet
<img src="https://pandas.pydata.org/static/img/pandas.svg" width=400>
Pandas is Python Data Analysis library. Series and Dataframes are major data structures in Pandas. Pandas is built on top of NumPy arrays.
ToC
Series
Create a series
Operations on series
DataFrames
Creating dataFrames
Slicing and dicing DataFrames
Access columns
Access rows
Accessing using index number
Dicing DataFrames
Conditional selection
Chaining conditions
Multiple conditions
Operations on DataFrames
Adding new columns
Dropping rows and columns
DataFrame index
Set index
Reset index
Series
Series is 1 dimensional data structure. It is similar to numpy array, but each data point has a label in the place of an index.
Create a series
End of explanation
ser1a = pd.Series(l1)
ser1 + ser1a #each individual element with matching index/label is summed
Explanation: Thus Series can have different datatypes.
Operations on series
You can add, multiply, and perform other numerical operations on Series just like on numpy arrays.
End of explanation
ser1 + ser3
Explanation: When labels dont match, it puts a nan. Thus when two series are added, you may or may not get the same number of elements
End of explanation
arr1 = np.random.rand(4,4)
arr1
row_lables = ['Car1', 'Car2', 'Car3', 'Car4']
col_labels = ['reliability', 'cost', 'competition', 'halflife']
#create a dataframe
df1 = pd.DataFrame(data=arr1, index=row_lables, columns=col_labels)
df1
Explanation: DataFrames
Creating dataFrames
Pandas DataFrames are built on top of Series. It looks similar to a NumPy array, but has labels for both columns and rows.
End of explanation
# Accessing a whole column
df1['reliability']
#can access as a property, but this is not advisable
#since it can clobber builtin methods and properties
df1.reliability
Explanation: Slicing and dicing DataFrames
You can access DataFrames similarly to Series and slice them like NumPy arrays
Access columns
DataFrameObj['column_name'] ==> returns a pandas.core.series.Series
End of explanation
df1.loc['Car4']
type(df1.loc['Car3'])
Explanation: Access rows
DataFrameobj.loc['row_label'] also returns a Series. Notice the .loc
End of explanation
#get first row, first col
val1 = df1.iloc[0,0]
print(val1)
print(type(val1))
#get full first row
val2 = df1.iloc[0,:]
val2
type(val2)
Explanation: Accessing using index number
If you don't know the labels, but know the index like in an array, use iloc and pass the index number.
End of explanation
#Get cost and competition of cars 2,3
df1.loc[['Car2', 'Car3'], ['cost', 'competition']]
Explanation: Dicing DataFrames
Dicing using labels ==> use DataFrameObj.loc[[row_labels],[col_labels]]
End of explanation
df1.iloc[[1,2], [1,2]]
Explanation: With index number, dice using
DataFrameObj.iloc[[row_indices], [col_indices]]
End of explanation
df1
# find cars with reliability > 0.85
df1['reliability'] > 0.85
#to get the car select the data elements using the bool series
df1[df1['reliability'] > 0.85]
#To get only the car name, which in this case is the index
df1[df1['reliability'] > 0.85].index[0]
Explanation: Conditional selection
When running a condition on a DataFrame, you are returned a Bool dataframe.
End of explanation
#to get the actual value of reliablity for this car
df1[df1['reliability'] > 0.85]['reliability']
# get both reliability and cost
df1[df1['reliability'] > 0.85][['reliability', 'cost']]
Explanation: Chaining conditions
In a Pythonic way, you can chain conditions
df[df condition][selection][selection]
End of explanation
#select cars that have reliability > 0.7 but competition less than 0.5
df1[(df1['reliability'] > 0.7) & (df1['competition'] < 0.5)]
# select cars that have half life > 0.5 or competition < 0.4
df1[(df1['halflife'] > 0.5) | (df1['competition'] < 0.4)]
Explanation: Multiple conditions
You can select dataframe elements with multiple conditions. Note: you cannot use Python's and, or here. Instead use &, |
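(Added note: each condition also needs its own parentheses, since & and | bind more tightly than comparisons, and ~ negates a boolean Series element-wise:)
```python
# cars that do NOT have reliability above 0.85
df1[~(df1['reliability'] > 0.85)]
```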
End of explanation
#add full life column
df1['full_life'] = df1['halflife'] * 2 #similar to array, series broadcast multiplication
df1
Explanation: Operations on DataFrames
Adding new columns
Create new columns just like adding a kvp to a dictionary.
DataFrameObj['new_col'] = Series
End of explanation
df1.drop('full_life', axis=1, inplace=False)
df1.drop('Car3') #all else is the default
Explanation: Dropping rows and columns
DataFrameObj.drop(label, axis, inplace=True / False)
Row labels are axis = 0 and columns are axis = 1
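(Added aside: newer pandas versions also accept a more readable spelling — a small sketch:)
```python
df1.drop(columns=['full_life'])   # same as axis=1
df1.drop(index=['Car3'])          # same as axis=0
```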
End of explanation
df1.drop(df1[df1['cost'] > 0.65].index, inplace=False)
Explanation: Drop a row based on a condition.
End of explanation
#set car names as index for the data frame
car_names = 'altima outback taurus mustang'.split()
car_names
df1['car_names'] = car_names
df1
df_new_index = df1.set_index(keys= df1['car_names'], inplace=False)
df_new_index
Explanation: DataFrame Index
So far, Car1, Car2, ... is the row index. If you would like to set a different column as the index, use set_index. If you would rather turn the index into a column and use numbers as the index, use reset_index
Set index
End of explanation
#reset df1 index to numerals and convert existing to a column
df1.reset_index()
Explanation: Note, the old index is lost.
Reset index
End of explanation |
10,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tidy Data
Thsis notebbok is designed to explore Hadley Wickman article about tidy data using pandas
The datasets are available on github
Step1: Original TB dataset. Corresponding to each ‘m’ column for males, there is also an ‘f’ column
for females, f1524, f2534 and so on. These are not shown to conserve space. Note the mixture of 0s
and missing values. This is due to the data collection process and the distinction is important for
this dataset.
Step2: Create sex and age columns from variable 'column' | Python Code:
import pandas as pd
import numpy as np
# tuberculosis (TB) dataset
path_tb = '/Users/ericfourrier/Documents/ProjetR/tidy-data/data/tb.csv'
df_tb = pd.read_csv(path_tb)
df_tb.head(20)
Explanation: Tidy Data
This notebook is designed to explore Hadley Wickham's article about tidy data using pandas
The datasets are available on github : https://github.com/hadley/tidy-data/blob/master/data/
Import Packages
End of explanation
# clean column names
df_tb = df_tb.rename(columns={'iso2':'country'}) # rename iso2 in country
df_tb = df_tb.drop(['new_sp'],axis = 1)
df_tb.columns = [c.replace('new_sp_','') for c in df_tb.columns] # remove new_sp_
df_tb.head()
df_tb_wide = pd.melt(df_tb,id_vars = ['country','year'])
df_tb_wide = df_tb_wide.rename(columns={'variable':'column','value':'cases'})
df_tb_wide
Explanation: Original TB dataset. Corresponding to each ‘m’ column for males, there is also an ‘f’ column
for females, f1524, f2534 and so on. These are not shown to conserve space. Note the mixture of 0s
and missing values. This is due to the data collection process and the distinction is important for
this dataset.
End of explanation
# create sex:
ages = {"04" : "0-4", "514" : "5-14", "014" : "0-14",
"1524" : "15-24","2534" : "25-34", "3544" : "35-44",
"4554" : "45-54", "5564" : "55-64", "65": "65+", "u" : np.nan}
# Create genre and age from the mixed type column
df_tb_wide['age']=df_tb_wide['column'].str[1:]
df_tb_wide['genre']=df_tb_wide['column'].str[0]
df_tb_wide = df_tb_wide.drop('column', axis=1)
# change category
df_tb_wide['age'] = df_tb_wide['age'].map(lambda x: ages[x])
# clean dataset
df_tb_wide
Explanation: Create sex and age columns from variable 'column'
End of explanation |
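As a hedged alternative sketch (not part of the original notebook), the genre and the age code can also be pulled out in one pass with a regular expression via str.extract; the pattern and column names below are illustrative assumptions:
# Hypothetical: re-melt the cleaned df_tb and split 'variable' with a regex in one pass
tmp = pd.melt(df_tb, id_vars=['country', 'year'])
parts = tmp['variable'].str.extract(r'(?P<genre>[mf])(?P<age_code>\w+)')
parts['age'] = parts['age_code'].map(ages)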
10,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner
(MNE/dSPM/sLORETA/eLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
See inverse_orientation_constrains
Loading data
Load everything we need to perform source localization on the sample dataset.
Step1: The source space
Let's start by examining the source space as constructed by the
Step2: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flow mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
Step3: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data
Step6: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from an orientation that is
perpendicular to the cortex. The loose parameter of the
Step7: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the | Python Code:
from mayavi import mlab
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subjects_dir = data_path + '/subjects'
Explanation: The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner
(MNE/dSPM/sLORETA/eLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
See inverse_orientation_constrains
Loading data
Load everything we need to perform source localization on the sample dataset.
End of explanation
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
white = (1.0, 1.0, 1.0) # RGB values for a white color
gray = (0.5, 0.5, 0.5) # RGB values for a gray color
red = (1.0, 0.0, 0.0) # RGB valued for a red color
mlab.figure(size=(600, 400), bgcolor=white)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Mark the position of the dipoles with small red dots
mlab.points3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2], color=red,
scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.25)
Explanation: The source space
Let's start by examining the source space as constructed by the
:func:mne.setup_source_space function. Dipoles are placed along fixed
intervals on the cortex, determined by the spacing parameter. The source
space does not define the orientation for these dipoles.
End of explanation
mlab.figure(size=(600, 400), bgcolor=white)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Show the dipoles as arrows pointing along the surface normal
normals = lh['nn'][lh['vertno']]
mlab.quiver3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2],
normals[:, 0], normals[:, 1], normals[:, 2],
color=red, scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.1)
Explanation: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flow mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are
fixed to be orthogonal to the surface of the cortex, pointing outwards. Let's
visualize this:
End of explanation
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data:
End of explanation
mlab.figure(size=(600, 400), bgcolor=white)
# Define some more colors
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)
# Plot the cortex
mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], tris, color=gray)
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Show the three dipoles defined at each location in the source space
dip_dir = inv['source_nn'].reshape(-1, 3, 3)
dip_dir = dip_dir[:len(dip_pos)] # Only select left hemisphere
for ori, color in zip((0, 1, 2), (red, green, blue)):
mlab.quiver3d(dip_pos[:, 0], dip_pos[:, 1], dip_pos[:, 2],
dip_dir[:, ori, 0], dip_dir[:, ori, 1], dip_dir[:, ori, 2],
color=color, scale_factor=1E-3)
mlab.view(azimuth=180, distance=0.1)
Explanation: The direction of the estimated current is now restricted to two directions:
inward and outward. In the plot, blue areas indicate current flowing inwards
and red areas indicate current flowing outwards. Given the curvature of the
cortex, groups of dipoles tend to point in the same direction: the direction
of the electromagnetic field picked up by the sensors.
Loose dipole orientations
Forcing the source dipoles to be strictly orthogonal to the cortex makes the
source estimate sensitive to the spacing of the dipoles along the cortex,
since the curvature of the cortex changes within each ~10 square mm patch.
Furthermore, misalignment of the MEG/EEG and MRI coordinate frames is more
critical when the source dipole orientations are strictly constrained [2]_.
To lift the restriction on the orientation of the dipoles, the inverse
operator has the ability to place not one, but three dipoles at each
location defined by the source space. These three dipoles are placed
orthogonally to form a Cartesian coordinate system. Let's visualize this:
End of explanation
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data:
End of explanation
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from an orientation that is
perpendicular to the cortex. The loose parameter of the
:func:mne.minimum_norm.make_inverse_operator allows you to specify a value
between 0 (fixed) and 1 (unrestricted or "free") to indicate the amount the
orientation is allowed to deviate from the surface normal.
End of explanation
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the :func:mne.minimum_norm.apply_inverse function allows you
to specify whether to return the full vector solution ('vector') or
rather the magnitude of the vectors (None, the default) or only the
activity in the direction perpendicular to the cortex ('normal').
End of explanation |
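A minimal sketch of the remaining option mentioned above: pick_ori='normal' keeps only the signed component of the current along the cortical surface normal (an illustrative addition, not part of the original tutorial code):
# Signed activity along the surface normal only (requires a loose or free inverse)
stc_normal = apply_inverse(left_auditory, inv, pick_ori='normal')
brain_normal = stc_normal.plot(surface='white', subjects_dir=subjects_dir,
                               initial_time=time_max, time_unit='s',
                               size=(600, 400))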
10,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 9
Lorenzo Biasi, Julius Vernie
Step1: 1.1
We can see the fixed points in the plot. At the central point the function has a slope larger than one, so it is not a stable point. For the other two points the opposite holds.
Step2: 1.2
We can see from the plot that the fixed points follow an S shape. The red part is unstable and corresponds to the central point of the previous plot. We can also see that where the line $f(x) = x$ becomes tangent to the function, two fixed points degenerate into one. This happens at around $\theta = -3$ and $\theta = -5$.
Step3: 2.1
The fixed points of the function are $0$ and the logarithm of $r$.
Step4: 2.2
In the next plot we can see the dynamics of the map for different values of r. For low r the iterates settle to a stable point, then they become oscillatory, and eventually they turn chaotic. This can be seen more clearly in the bifurcation plot. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
%matplotlib inline
def sigmoid(x):
return 1. / (1 + np.exp(-x))
def df(x, w=0, theta=0):
return w * sigmoid(x) * (1 - sigmoid(x))
def f(x, w=0, theta=0):
return w * sigmoid(x) + theta
def g(x, w, theta):
return f(x, w, theta) - x
Explanation: Assignment 9
Lorenzo Biasi, Julius Vernie
End of explanation
x = np.linspace(-5, 5, 100)
theta = -3.5
w = 8.
plt.plot(x, f(x, w, theta))
plt.plot(x, x)
plt.xlabel('x')
FP = []
for x_0 in range(-4, 4):
temp, _, ier, _ = fsolve(g, (x_0), args=(w, theta), full_output=True)
if ier == 1:
FP.append(temp)
FP = np.array(FP)
plt.plot(FP, f(FP, w, theta), '*')
Explanation: 1.1
We can see the fixed points in the plot. At the central point the function has a slope larger than one, so it is not a stable point. For the other two points the opposite holds.
End of explanation
for theta in np.arange(-10, 0, .05):
x = np.arange(-10, 10, dtype='float')
stable = []
unstable = []
for i in range(len(x)):
temp, _, ier, _ = fsolve(g, (x[i]), args=(w, theta), full_output=True)
if ier == 1:
x[i] = temp
if abs(df(temp, w,theta)) < 1:
stable.append(temp)
else:
unstable.append(*temp)
else:
x[i] = None
plt.plot(theta * np.ones(len(stable)), stable, '.', color='green')
plt.plot(theta * np.ones(len(unstable)), unstable, '.', color='red')
Explanation: 1.2
We can see from the plot that the fixed points follow an S shape. The red part is unstable and corresponds to the central point of the previous plot. We can also see that where the line $f(x) = x$ becomes tangent to the function, two fixed points degenerate into one. This happens at around $\theta = -3$ and $\theta = -5$.
End of explanation
def l(x, r):
return r * x * np.exp(-x)
y = np.linspace(-1, 4, 100)
plt.plot(y, l(y, np.exp(3)))
plt.plot(y, y)
Explanation: 2.1
The fixed points of the function are $0$ and the logarithm of $r$.
End of explanation
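A quick numerical check of these fixed points (a small sketch; the choice r = exp(3) is arbitrary):
r = np.exp(3)
x_star = np.log(r)
print(l(0, r))                 # 0 is a fixed point: l(0, r) = 0
print(l(x_star, r) - x_star)   # log(r) is a fixed point: difference should be ~0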
N = 40
x = np.arange(1, N + 1) * 0.01
for r in np.exp(np.array([1, 1.5, 2, 2.3, 2.7, 3, 3.2, 4.])):
for i in range(1, N):
x[i] = l(x[i - 1], r)
plt.figure()
plt.plot(x)
plt.xlabel('x')
plt.title('log(r) = ' + str(np.log(r)))
Npre = 200
Nplot = 100
x = np.zeros((Nplot, 1))
for r in np.arange(np.exp(0), np.exp(4), .5):
x[0] = np.random.random() * 100
for n in range(Npre):
x[0] = l(x[0], r)
for n in range(Nplot - 1):
x[n + 1] = l(x[n], r)
plt.plot(r * np.ones((Nplot, 1)), x, '.')
Explanation: 2.2
In the next plot we can see the dynamics of the map for different values of r. For low r the iterates settle to a stable point, then they become oscillatory, and eventually they turn chaotic. This can be seen more clearly in the bifurcation plot.
End of explanation |
10,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: TFRecord 和 tf.Example
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step5: tf.Example
tf.Example 的数据类型
从根本上讲,tf.Example 是 {"string"
Step6: 注:为了简单起见,本示例仅使用标量输入。要处理非标量特征,最简单的方法是使用 tf.io.serialize_tensor 将张量转换为二进制字符串。在 TensorFlow 中,字符串是标量。使用 tf.io.parse_tensor 可将二进制字符串转换回张量。
下面是有关这些函数如何工作的一些示例。请注意不同的输入类型和标准化的输出类型。如果函数的输入类型与上述可强制转换的类型均不匹配,则该函数将引发异常(例如,_int64_feature(1.0) 将出错,因为 1.0 是浮点数,应该用于 _float_feature 函数):
Step7: 可以使用 .SerializeToString 方法将所有协议消息序列化为二进制字符串:
Step8: 创建 tf.Example 消息
假设您要根据现有数据创建 tf.Example 消息。在实践中,数据集可能来自任何地方,但是从单个观测值创建 tf.Example 消息的过程相同:
在每个观测结果中,需要使用上述其中一种函数,将每个值转换为包含三种兼容类型之一的 tf.train.Feature。
创建一个从特征名称字符串到第 1 步中生成的编码特征值的映射(字典)。
将第 2 步中生成的映射转换为 Features 消息。
在此笔记本中,您将使用 NumPy 创建一个数据集。
此数据集将具有 4 个特征:
具有相等 False 或 True 概率的布尔特征
从 [0, 5] 均匀随机选择的整数特征
通过将整数特征作为索引从字符串表生成的字符串特征
来自标准正态分布的浮点特征
请思考一个样本,其中包含来自上述每个分布的 10,000 个独立且分布相同的观测值:
Step10: 您可以使用 _bytes_feature、_float_feature 或 _int64_feature 将下面的每个特征强制转换为兼容 tf.Example 的类型。然后,可以通过下面的已编码特征创建 tf.Example 消息:
Step11: 例如,假设您从数据集中获得了一个观测值 [False, 4, bytes('goat'), 0.9876]。您可以使用 create_message() 创建和打印此观测值的 tf.Example 消息。如上所述,每个观测值将被写为一条 Features 消息。请注意,tf.Example 消息只是 Features 消息外围的包装器:
Step12: 要解码消息,请使用 tf.train.Example.FromString 方法。
Step13: TFRecords 格式详细信息
TFRecord 文件包含一系列记录。该文件只能按顺序读取。
每条记录包含一个字节字符串(用于数据有效负载),外加数据长度,以及用于完整性检查的 CRC32C(使用 Castagnoli 多项式的 32 位 CRC)哈希值。
每条记录会存储为以下格式:
uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data
将记录连接起来以生成文件。此处对 CRC 进行了说明,且 CRC 的掩码为:
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
注:不需要在 TFRecord 文件中使用 tf.Example。tf.Example 只是将字典序列化为字节字符串的一种方法。文本行、编码的图像数据,或序列化的张量(使用 tf.io.serialize_tensor,或在加载时使用 tf.io.parse_tensor)。有关更多选项,请参阅 tf.io 模块。
使用 tf.data 的 TFRecord 文件
tf.data 模块还提供用于在 TensorFlow 中读取和写入数据的工具。
写入 TFRecord 文件
要将数据放入数据集中,最简单的方式是使用 from_tensor_slices 方法。
若应用于数组,将返回标量数据集:
Step14: 若应用于数组的元组,将返回元组的数据集:
Step15: 使用 tf.data.Dataset.map 方法可将函数应用于 Dataset 的每个元素。
映射函数必须在 TensorFlow 计算图模式下进行运算(它必须在 tf.Tensors 上运算并返回)。可以使用 tf.py_function 包装非张量函数(如 serialize_example)以使其兼容。
使用 tf.py_function 需要指定形状和类型信息,否则它将不可用:
Step16: 将此函数应用于数据集中的每个元素:
Step17: 并将它们写入 TFRecord 文件:
Step18: 读取 TFRecord 文件
您还可以使用 tf.data.TFRecordDataset 类来读取 TFRecord 文件。
有关通过 tf.data 使用 TFRecord 文件的详细信息,请参见此处。
使用 TFRecordDataset 对于标准化输入数据和优化性能十分有用。
Step19: 此时,数据集包含序列化的 tf.train.Example 消息。迭代时,它会将其作为标量字符串张量返回。
使用 .take 方法仅显示前 10 条记录。
注:在 tf.data.Dataset 上进行迭代仅在启用了 Eager Execution 时有效。
Step20: 可以使用以下函数对这些张量进行解析。请注意,这里的 feature_description 是必需的,因为数据集使用计算图执行,并且需要以下描述来构建它们的形状和类型签名:
Step21: 或者,使用 tf.parse example 一次解析整个批次。使用 tf.data.Dataset.map 方法将此函数应用于数据集中的每一项:
Step22: 使用 Eager Execution 在数据集中显示观测值。此数据集中有 10,000 个观测值,但只会显示前 10 个。数据会作为特征字典进行显示。每一项都是一个 tf.Tensor,此张量的 numpy 元素会显示特征的值:
Step23: 在这里,tf.parse_example 函数会将 tf.Example 字段解压缩为标准张量。
Python 中的 TFRecord 文件
tf.io 模块还包含用于读取和写入 TFRecord 文件的纯 Python 函数。
写入 TFRecord 文件
接下来,将 10,000 个观测值写入文件 test.tfrecord。每个观测值都将转换为一条 tf.Example 消息,然后被写入文件。随后,您可以验证是否已创建 test.tfrecord 文件:
Step24: 读取 TFRecord 文件
您可以使用 tf.train.Example.ParseFromString 轻松解析以下序列化张量:
Step25: 演练:读取和写入图像数据
下面是关于如何使用 TFRecord 读取和写入图像数据的端到端示例。您将使用图像作为输入数据,将数据写入 TFRecord 文件,然后将文件读取回来并显示图像。
如果您想在同一个输入数据集上使用多个模型,这种做法会很有用。您可以不以原始格式存储图像,而是将图像预处理为 TFRecord 格式,然后将其用于所有后续的处理和建模中。
首先,让我们下载雪中的猫的图像,以及施工中的纽约威廉斯堡大桥的照片。
提取图像
Step26: 写入 TFRecord 文件
和以前一样,将特征编码为与 tf.Example 兼容的类型。这将存储原始图像字符串特征,以及高度、宽度、深度和任意 label 特征。后者会在您写入文件以区分猫和桥的图像时使用。将 0 用于猫的图像,将 1 用于桥的图像:
Step27: 请注意,所有特征现在都存储在 tf.Example 消息中。接下来,函数化上面的代码,并将示例消息写入名为 images.tfrecords 的文件:
Step28: 读取 TFRecord 文件
现在,您有文件 images.tfrecords,并可以迭代其中的记录以将您写入的内容读取回来。因为在此示例中您只需重新生成图像,所以您只需要原始图像字符串这一个特征。使用上面描述的 getter 方法(即 example.features.feature['image_raw'].bytes_list.value[0])提取该特征。您还可以使用标签来确定哪个记录是猫,哪个记录是桥:
Step29: 从 TFRecord 文件中恢复图像: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
import IPython.display as display
Explanation: TFRecord and tf.Example
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/tfrecord" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">在 TensorFlow.org 上查看 </a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/tfrecord.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/tfrecord.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/tfrecord.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">下载笔记本</a></td>
</table>
To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200 MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. It can also be useful for caching any data preprocessing.
The TFRecord format is a simple format for storing a sequence of binary records.
Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data.
Protocol messages are defined by .proto files; these are often the easiest way to understand a message type.
The tf.Example message (or protobuf) is a flexible message type that represents a {"string": value} mapping. It is designed for use with TensorFlow and is used throughout higher-level APIs such as TFX.
This notebook demonstrates how to create, parse, and use the tf.Example message, and how to serialize, write, and read tf.Example messages to and from .tfrecord files.
Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using tf.data and reading data is still the bottleneck to training. See Data Input Pipeline Performance for dataset performance tips.
Setup
End of explanation
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
Returns a bytes_list from a string / byte.
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
Returns a float_list from a float / double.
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
Returns an int64_list from a bool / enum / int / uint.
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
Explanation: tf.Example
Data types for tf.Example
Fundamentally, a tf.Example is a {"string": tf.train.Feature} mapping.
The tf.train.Feature message type can accept one of the following three types (see the .proto file for reference). Most other generic types can be coerced into one of these:
tf.train.BytesList (the following types can be coerced)
string
byte
tf.train.FloatList (the following types can be coerced)
float (float32)
double (float64)
tf.train.Int64List (the following types can be coerced)
bool
enum
int32
uint32
int64
uint64
In order to convert a standard TensorFlow type to a tf.Example-compatible tf.train.Feature, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a tf.train.Feature containing one of the three list types above:
End of explanation
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
Explanation: Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use tf.io.serialize_tensor to convert tensors to binary strings. Strings are scalars in TensorFlow. Use tf.io.parse_tensor to convert the binary string back to a tensor.
Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. _int64_feature(1.0) will error out because 1.0 is a float, so it should be used with the _float_feature function instead):
End of explanation
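Following the note above, a minimal sketch for a non-scalar feature: serialize the tensor to a binary string first and wrap that (the example tensor is an arbitrary illustration):
# Non-scalar example: serialize to a byte string, then wrap it as a bytes feature
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
serialized_t = tf.io.serialize_tensor(t)
print(_bytes_feature(serialized_t))
# The tensor can later be recovered with tf.io.parse_tensor(serialized_t, out_type=tf.float32)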
feature = _float_feature(np.exp(1))
feature.SerializeToString()
Explanation: All proto messages can be serialized to a binary string using the .SerializeToString method:
End of explanation
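As a small sanity-check sketch, the binary string can be parsed back into a Feature message with the protocol-buffer FromString class method:
restored = tf.train.Feature.FromString(feature.SerializeToString())
print(restored)  # should show the same float_list as above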
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
Explanation: Creating a tf.Example message
Suppose you want to create a tf.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.Example message from a single observation is the same:
Within each observation, each value needs to be converted to a tf.train.Feature containing one of the three compatible types, using one of the functions above.
You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1.
The map produced in step 2 is converted to a Features message.
In this notebook, you will create a dataset using NumPy.
This dataset will have 4 features:
a boolean feature, False or True with equal probability
an integer feature uniformly randomly chosen from [0, 5]
a string feature generated from a string table by using the integer feature as an index
a float feature from a standard normal distribution
Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
End of explanation
def serialize_example(feature0, feature1, feature2, feature3):
Creates a tf.Example message ready to be written to a file.
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
Explanation: Each of these features can be coerced into a tf.Example-compatible type using one of _bytes_feature, _float_feature, or _int64_feature. You can then create a tf.Example message from these encoded features:
End of explanation
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
Explanation: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.Example message is just a wrapper around the Features message:
End of explanation
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
Explanation: To decode the message, use the tf.train.Example.FromString method.
End of explanation
tf.data.Dataset.from_tensor_slices(feature1)
Explanation: TFRecords format details
A TFRecord file contains a sequence of records. The file can only be read sequentially.
Each record contains a byte string (the data payload), plus the data length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.
Each record is stored in the following format:
uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data
The records are concatenated together to produce the file. CRCs are described here, and the mask of a CRC is:
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
Note: There is no requirement to use tf.Example in TFRecord files. tf.Example is just a method of serializing dictionaries to byte strings. Lines of text, encoded image data, or serialized tensors (using tf.io.serialize_tensor, and tf.io.parse_tensor when loading) also work. See the tf.io module for more options.
TFRecord files using tf.data
The tf.data module also provides tools for reading and writing data in TensorFlow.
Writing a TFRecord file
The easiest way to get the data into a dataset is to use the from_tensor_slices method.
Applied to an array, it returns a dataset of scalars:
End of explanation
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
Explanation: Applied to a tuple of arrays, it returns a dataset of tuples:
End of explanation
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
Explanation: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset.
The mapped function must operate in TensorFlow graph mode: it must operate on and return tf.Tensors. A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible.
Using tf.py_function requires that you specify the shape and type information that is otherwise unavailable:
End of explanation
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
Explanation: Apply this function to each element in the dataset:
End of explanation
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
Explanation: And write them to a TFRecord file:
End of explanation
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
Explanation: Reading a TFRecord file
You can also read the TFRecord file using the tf.data.TFRecordDataset class.
More information on consuming TFRecord files using tf.data can be found here.
Using TFRecordDataset can be useful for standardizing input data and optimizing performance.
End of explanation
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
Explanation: At this point the dataset contains serialized tf.train.Example messages. When iterated over, it returns these as scalar string tensors.
Use the .take method to only show the first 10 records.
Note: Iterating over a tf.data.Dataset only works with eager execution enabled.
End of explanation
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
Explanation: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph execution and need this description to build their shape and type signature:
End of explanation
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
Explanation: Alternatively, use tf.parse_example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method:
End of explanation
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
Explanation: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but only the first 10 are displayed. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature:
End of explanation
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
Explanation: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors.
TFRecord files in Python
The tf.io module also contains pure-Python functions for reading and writing TFRecord files.
Writing a TFRecord file
Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.Example message, then written to file. You can then verify that the file test.tfrecord has been created:
End of explanation
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
Explanation: Reading a TFRecord file
These serialized tensors can be easily parsed using tf.train.Example.ParseFromString:
End of explanation
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
Explanation: Walkthrough: Reading and writing image data
This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.
This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the images raw, they can be preprocessed into the TFRecord format, and that can be used in all further processing and modelling.
First, let's download this image of a cat in the snow and this photo of the Williamsburg Bridge, NYC under construction.
Extract the images
End of explanation
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
Explanation: Write the TFRecord file
As before, encode the features as types compatible with tf.Example. This stores the raw image string feature, as well as the height, width, depth, and an arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat image and 1 for the bridge image:
End of explanation
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
Explanation: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords:
End of explanation
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
Explanation: Read the TFRecord file
You now have the file images.tfrecords and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you need is the raw image string. Extract it using the getter described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge:
End of explanation
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
Explanation: Recover the images from the TFRecord file:
End of explanation |
10,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting house prices
Step1: As you can see, we have 404 training samples and 102 test samples. The data comprises 13 features. The 13 features in the input data are as
follows
Step2: The prices are typically between \$10,000 and \$50,000. If that sounds cheap, remember this was the mid-1970s, and these prices are not
inflation-adjusted.
Preparing the data
It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to
automatically adapt to such heterogeneous data, but it would definitely make learning more difficult. A widespread best practice to deal
with such data is to do feature-wise normalization
Step3: Note that the quantities that we use for normalizing the test data have been computed using the training data. We should never use in our
workflow any quantity computed on the test data, even for something as simple as data normalization.
Building our network
Because so few samples are available, we will be using a very small network with two
hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using
a small network is one way to mitigate overfitting.
Step4: Our network ends with a single unit, and no activation (i.e. it will be a linear layer).
This is a typical setup for scalar regression (i.e. regression where we are trying to predict a single continuous value).
Applying an activation function would constrain the range that the output can take; for instance if
we applied a sigmoid activation function to our last layer, the network could only learn to predict values between 0 and 1. Here, because
the last layer is purely linear, the network is free to learn to predict values in any range.
Note that we are compiling the network with the mse loss function -- Mean Squared Error, the square of the difference between the
predictions and the targets, a widely used loss function for regression problems.
We are also monitoring a new metric during training
Step5: As you can notice, the different runs do indeed show rather different validation scores, from 2.1 to 2.9. Their average (2.4) is a much more
reliable metric than any single of these scores -- that's the entire point of K-fold cross-validation. In this case, we are off by \$2,400 on
average, which is still significant considering that the prices range from \$10,000 to \$50,000.
Let's try training the network for a bit longer
Step6: We can then compute the average of the per-epoch MAE scores for all folds
Step7: Let's plot this
Step8: It may be a bit hard to see the plot due to scaling issues and relatively high variance. Let's
Step9: According to this plot, it seems that validation MAE stops improving significantly after 80 epochs. Past that point, we start overfitting.
Once we are done tuning other parameters of our model (besides the number of epochs, we could also adjust the size of the hidden layers), we
can train a final "production" model on all of the training data, with the best parameters, then look at its performance on the test data | Python Code:
from keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
train_data.shape
test_data.shape
Explanation: Predicting house prices: a regression example
This notebook contains the code samples found in Chapter 3, Section 6 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
In our two previous examples, we were considering classification problems, where the goal was to predict a single discrete label of an
input data point. Another common type of machine learning problem is "regression", which consists of predicting a continuous value instead
of a discrete label. For instance, predicting the temperature tomorrow, given meteorological data, or predicting the time that a
software project will take to complete, given its specifications.
Do not mix up "regression" with the algorithm "logistic regression": confusingly, "logistic regression" is not a regression algorithm,
it is a classification algorithm.
The Boston Housing Price dataset
We will be attempting to predict the median price of homes in a given Boston suburb in the mid-1970s, given a few data points about the
suburb at the time, such as the crime rate, the local property tax rate, etc.
The dataset we will be using has another interesting difference from our two previous examples: it has very few data points, only 506 in
total, split between 404 training samples and 102 test samples, and each "feature" in the input data (e.g. the crime rate is a feature) has
a different scale. For instance some values are proportions, which take a values between 0 and 1, others take values between 1 and 12,
others between 0 and 100...
Let's take a look at the data:
End of explanation
train_targets
Explanation: As you can see, we have 404 training samples and 102 test samples. The data comprises 13 features. The 13 features in the input data are as
follows:
Per capita crime rate.
Proportion of residential land zoned for lots over 25,000 square feet.
Proportion of non-retail business acres per town.
Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
Nitric oxides concentration (parts per 10 million).
Average number of rooms per dwelling.
Proportion of owner-occupied units built prior to 1940.
Weighted distances to five Boston employment centres.
Index of accessibility to radial highways.
Full-value property-tax rate per $10,000.
Pupil-teacher ratio by town.
1000 * (Bk - 0.63) ** 2 where Bk is the proportion of Black people by town.
% lower status of the population.
The targets are the median values of owner-occupied homes, in thousands of dollars:
End of explanation
mean = train_data.mean(axis=0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std
test_data -= mean
test_data /= std
Explanation: The prices are typically between \$10,000 and \$50,000. If that sounds cheap, remember this was the mid-1970s, and these prices are not
inflation-adjusted.
Preparing the data
It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to
automatically adapt to such heterogeneous data, but it would definitely make learning more difficult. A widespread best practice to deal
with such data is to do feature-wise normalization: for each feature in the input data (a column in the input data matrix), we
will subtract the mean of the feature and divide by the standard deviation, so that the feature will be centered around 0 and will have a
unit standard deviation. This is easily done in Numpy:
End of explanation
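For reference, the same feature-wise normalization can be done with scikit-learn's StandardScaler fit on the training data only; this is a hedged sketch on the raw (un-normalized) arrays, assumes scikit-learn is installed, and is not used elsewhere in this notebook:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_data_scaled = scaler.fit_transform(train_data)  # fit on training data only
test_data_scaled = scaler.transform(test_data)        # reuse the training statistics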
from keras import models
from keras import layers
def build_model():
# Because we will need to instantiate
# the same model multiple times,
# we use a function to construct it.
model = models.Sequential()
model.add(layers.Dense(64, activation='relu',
input_shape=(train_data.shape[1],)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
Explanation: Note that the quantities that we use for normalizing the test data have been computed using the training data. We should never use in our
workflow any quantity computed on the test data, even for something as simple as data normalization.
Building our network
Because so few samples are available, we will be using a very small network with two
hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using
a small network is one way to mitigate overfitting.
End of explanation
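If a larger network were needed, weight regularization would be another way to keep overfitting in check; the following is only an illustrative sketch (with an arbitrary L2 factor) and is not used in the rest of this notebook:
from keras import regularizers

def build_regularized_model():
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu',
                           kernel_regularizer=regularizers.l2(0.001),
                           input_shape=(train_data.shape[1],)))
    model.add(layers.Dense(64, activation='relu',
                           kernel_regularizer=regularizers.l2(0.001)))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model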
import numpy as np
k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []
for i in range(k):
print('processing fold #', i)
# Prepare the validation data: data from partition # k
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: data from all other partitions
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
# Build the Keras model (already compiled)
model = build_model()
# Train the model (in silent mode, verbose=0)
model.fit(partial_train_data, partial_train_targets,
epochs=num_epochs, batch_size=1, verbose=0)
# Evaluate the model on the validation data
val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
all_scores.append(val_mae)
all_scores
np.mean(all_scores)
Explanation: Our network ends with a single unit, and no activation (i.e. it will be a linear layer).
This is a typical setup for scalar regression (i.e. regression where we are trying to predict a single continuous value).
Applying an activation function would constrain the range that the output can take; for instance if
we applied a sigmoid activation function to our last layer, the network could only learn to predict values between 0 and 1. Here, because
the last layer is purely linear, the network is free to learn to predict values in any range.
Note that we are compiling the network with the mse loss function -- Mean Squared Error, the square of the difference between the
predictions and the targets, a widely used loss function for regression problems.
We are also monitoring a new metric during training: mae. This stands for Mean Absolute Error. It is simply the absolute value of the
difference between the predictions and the targets. For instance, a MAE of 0.5 on this problem would mean that our predictions are off by
\$500 on average.
Validating our approach using K-fold validation
To evaluate our network while we keep adjusting its parameters (such as the number of epochs used for training), we could simply split the
data into a training set and a validation set, as we were doing in our previous examples. However, because we have so few data points, the
validation set would end up being very small (e.g. about 100 examples). A consequence is that our validation scores may change a lot
depending on which data points we choose to use for validation and which we choose for training, i.e. the validation scores may have a
high variance with regard to the validation split. This would prevent us from reliably evaluating our model.
The best practice in such situations is to use K-fold cross-validation. It consists of splitting the available data into K partitions
(typically K=4 or 5), then instantiating K identical models, and training each one on K-1 partitions while evaluating on the remaining
partition. The validation score for the model used would then be the average of the K validation scores obtained.
In terms of code, this is straightforward:
End of explanation
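For comparison, scikit-learn's KFold produces the same kind of partition indices; this sketch assumes scikit-learn is available and simply mirrors the manual loop above:
from sklearn.model_selection import KFold

for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=k).split(train_data)):
    model = build_model()
    model.fit(train_data[train_idx], train_targets[train_idx],
              epochs=num_epochs, batch_size=1, verbose=0)
    print(fold, model.evaluate(train_data[val_idx], train_targets[val_idx], verbose=0))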
from keras import backend as K
# Some memory clean-up
K.clear_session()
num_epochs = 500
all_mae_histories = []
for i in range(k):
print('processing fold #', i)
# Prepare the validation data: data from partition # k
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
# Prepare the training data: data from all other partitions
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
# Build the Keras model (already compiled)
model = build_model()
# Train the model (in silent mode, verbose=0)
history = model.fit(partial_train_data, partial_train_targets,
validation_data=(val_data, val_targets),
epochs=num_epochs, batch_size=1, verbose=0)
mae_history = history.history['val_mean_absolute_error']
all_mae_histories.append(mae_history)
Explanation: As you can notice, the different runs do indeed show rather different validation scores, from 2.1 to 2.9. Their average (2.4) is a much more
reliable metric than any single of these scores -- that's the entire point of K-fold cross-validation. In this case, we are off by \$2,400 on
average, which is still significant considering that the prices range from \$10,000 to \$50,000.
Let's try training the network for a bit longer: 500 epochs. To keep a record of how well the model did at each epoch, we will modify our training loop
to save the per-epoch validation score log:
End of explanation
average_mae_history = [
np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)]
Explanation: We can then compute the average of the per-epoch MAE scores for all folds:
End of explanation
import matplotlib.pyplot as plt
plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
Explanation: Let's plot this:
End of explanation
def smooth_curve(points, factor=0.9):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
smooth_mae_history = smooth_curve(average_mae_history[10:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
Explanation: It may be a bit hard to see the plot due to scaling issues and relatively high variance. Let's:
Omit the first 10 data points, which are on a different scale from the rest of the curve.
Replace each point with an exponential moving average of the previous points, to obtain a smooth curve.
End of explanation
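If pandas is available, its exponentially weighted mean gives a comparable smoothing in one line; a sketch (alpha = 1 - factor is only an approximate correspondence):
import pandas as pd

smooth = pd.Series(average_mae_history[10:]).ewm(alpha=0.1).mean()
plt.plot(range(1, len(smooth) + 1), smooth)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE (smoothed)')
plt.show()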
# Get a fresh, compiled model.
model = build_model()
# Train it on the entirety of the data.
model.fit(train_data, train_targets,
epochs=80, batch_size=16, verbose=0)
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets)
test_mae_score
Explanation: According to this plot, it seems that validation MAE stops improving significantly after 80 epochs. Past that point, we start overfitting.
Once we are done tuning other parameters of our model (besides the number of epochs, we could also adjust the size of the hidden layers), we
can train a final "production" model on all of the training data, with the best parameters, then look at its performance on the test data:
End of explanation |
10,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Image As Greyscale
Step2: Enhance Image
Step3: View Image | Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
Explanation: Title: Enhance Contrast Of Greyscale Image
Slug: enhance_contrast_of_greyscale_image
Summary: How to enhance the contrast of images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
Explanation: Load Image As Greyscale
End of explanation
# Enhance image
image_enhanced = cv2.equalizeHist(image)
Explanation: Enhance Image
End of explanation
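For images with very uneven lighting, contrast-limited adaptive histogram equalization (CLAHE) is a common variant; a minimal sketch with arbitrary default parameters:
# Adaptive (tile-based) equalization with a clipping limit
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
image_clahe = clahe.apply(image)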
# Show image
plt.imshow(image_enhanced, cmap='gray'), plt.axis("off")
plt.show()
Explanation: View Image
End of explanation |
10,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Symbolic Calculator
This file shows how a simple symbolic calculator can be implemented using Ply. The grammar for the language implemented by this parser is as follows
Step1: There are only three tokens that need to be defined via regular expressions. The other tokens consist only of a single character and can therefore be defined as literals.
Step2: The token NUMBER specifies a fully featured floating point number.
Step3: The token IDENTIFIER specifies the name of a variable.
Step4: The token ASSIGN_OP specifies the assignment operator. As this operator consists of two characters, it can't be defined as a literal.
Step5: literals is a list of operator symbols that consist of a single character.
Step6: Blanks and tabulators are ignored.
Step7: Newlines are counted in order to give precise error messages. Otherwise they are ignored.
Step8: Unknown characters are reported as lexical errors.
Step9: We generate the lexer.
Step10: Specification of the Parser
Step11: The start variable of our grammar is statement.
Step12: There are two grammar rules for stmnts
Step13: An expr is a sequence of prods that are combined with the operators + and -.
The corresponding grammar rules are
Step14: A prod is a sequence of factors that are combined with the operators * and /.
The corresponding grammar rules are
Step15: A factor is either an expression in parentheses, a number, or an identifier.
factor
Step16: The expression float('NaN') stands for an undefined number.
Step17: The method p_error is called if a syntax error occurs. The argument p is the token that could not be read. If p is None then there is a syntax error at the end of input.
Step18: Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table.
Step19: Let's look at the action table that is generated.
Step20: Names2Values is the dictionary that maps variable names to their values. Initially the dictionary is empty, as no variable has been defined yet.
Step21: The parser is invoked by calling the method yacc.parse(s) where s is a string that is to be parsed. | Python Code:
import ply.lex as lex
Explanation: A Simple Symbolic Calculator
This file shows how a simple symbolic calculator can be implemented using Ply. The grammar for the language implemented by this parser is as follows:
$$
\begin{array}{lcl}
\texttt{stmnt} & \rightarrow & \;\texttt{IDENTIFIER} \;\texttt{':='}\; \texttt{expr}\; \texttt{';'}\
& \mid & \;\texttt{expr}\; \texttt{';'} \[0.2cm]
\texttt{expr} & \rightarrow & \;\texttt{expr}\; \texttt{'+'} \; \texttt{product} \
& \mid & \;\texttt{expr}\; \texttt{'-'} \; \texttt{product} \
& \mid & \;\texttt{product} \[0.2cm]
\texttt{product} & \rightarrow & \;\texttt{product}\; \texttt{'*'} \;\texttt{factor} \
& \mid & \;\texttt{product}\; \texttt{'/'} \;\texttt{factor} \
& \mid & \;\texttt{factor} \[0.2cm]
\texttt{factor} & \rightarrow & \texttt{'('} \; \texttt{expr} \;\texttt{')'} \
& \mid & \;\texttt{NUMBER} \
& \mid & \;\texttt{IDENTIFIER}
\end{array}
$$
Specification of the Scanner
End of explanation
tokens = [ 'NUMBER', 'IDENTIFIER', 'ASSIGN_OP' ]
Explanation: There are only three tokens that need to be defined via regular expressions. The other tokens consist only of a single character and can therefore be defined as literals.
End of explanation
def t_NUMBER(t):
r'0|[1-9][0-9]*(\.[0-9]+)?([eE][+-]?([1-9][0-9]*))?'
t.value = float(t.value)
return t
Explanation: The token NUMBER specifies a fully featured floating point number.
End of explanation
def t_IDENTIFIER(t):
r'[a-zA-Z][a-zA-Z0-9_]*'
return t
Explanation: The token IDENTIFIER specifies the name of a variable.
End of explanation
def t_ASSIGN_OP(t):
r':='
return t
Explanation: The token ASSIGN_OP specifies the assignment operator. As this operator consists of two characters, it can't be defined as a literal.
End of explanation
literals = ['+', '-', '*', '/', '(', ')', ';']
Explanation: literals is a list of operator symbols that consist of a single character.
End of explanation
t_ignore = ' \t'
Explanation: Blanks and tabulators are ignored.
End of explanation
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
Explanation: Newlines are counted in order to give precise error messages. Otherwise they are ignored.
End of explanation
def t_error(t):
print(f"Illegal character '{t.value[0]}' at character number {t.lexer.lexpos} in line {t.lexer.lineno}.")
t.lexer.skip(1)
__file__ = 'main'
Explanation: Unknown characters are reported as lexical errors.
End of explanation
lexer = lex.lex()
Explanation: We generate the lexer.
End of explanation
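A quick way to sanity-check the scanner (a sketch): feed it a sample statement and print the tokens it produces:
lexer.input('x := 2 + 3.5 * (y - 1);')
for tok in lexer:
    print(tok.type, tok.value)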
import ply.yacc as yacc
Explanation: Specification of the Parser
End of explanation
start = 'stmnt'
Explanation: The start variable of our grammar is statement.
End of explanation
def p_stmnt_assign(p):
"stmnt : IDENTIFIER ASSIGN_OP expr ';'"
Names2Values[p[1]] = p[3]
def p_stmnt_expr(p):
"stmnt : expr ';'"
print(p[1])
Explanation: There are two grammar rules for stmnts:
stmnt : IDENTIFIER ":=" expr ";"
| expr ';'
;
- If a stmnt is an assignment, the expression on the right hand side of the assignment operator is
evaluated and the value is stored in the dictionary Names2Values. The key used in this dictionary
is the name of the variable on the left hand side ofthe assignment operator.
- If a stmnt is an expression, the expression is evaluated and the result of this evaluation is printed.
It is <b>very important</b> that in the grammar rules below the : is surrounded by space characters, for otherwise Ply will throw mysterious error messages at us!
Below, Names2Values is a dictionary mapping variable names to their values. It will be defined later.
End of explanation
def p_expr_plus(p):
"expr : expr '+' prod"
p[0] = p[1] + p[3]
def p_expr_minus(p):
"expr : expr '-' prod"
p[0] = p[1] - p[3]
def p_expr_prod(p):
"expr : prod"
p[0] = p[1]
Explanation: An expr is a sequence of prods that are combined with the operators + and -.
The corresponding grammar rules are:
expr : expr '+' prod
| expr '-' prod
| prod
;
End of explanation
def p_prod_mult(p):
"prod : prod '*' factor"
p[0] = p[1] * p[3]
def p_prod_div(p):
"prod : prod '/' factor"
p[0] = p[1] / p[3]
def p_prod_factor(p):
"prod : factor"
p[0] = p[1]
Explanation: A prod is a sequence of factors that are combined with the operators * and /.
The corresponding grammar rules are:
prod : prod '*' factor
| prod '/' factor
| factor
;
End of explanation
def p_factor_group(p):
"factor : '(' expr ')'"
p[0] = p[2]
def p_factor_number(p):
"factor : NUMBER"
p[0] = p[1]
def p_factor_id(p):
"factor : IDENTIFIER"
p[0] = Names2Values.get(p[1], float('NaN'))
Explanation: A factor is either an expression in parentheses, a number, or an identifier.
factor : '(' expr ')'
| NUMBER
| IDENTIFIER
;
End of explanation
float('NaN'), float('Inf'), float('Inf') - float('Inf')
Explanation: The expression float('NaN') stands for an undefined number.
End of explanation
def p_error(p):
if p:
print(f"Syntax error at character number {p.lexer.lexpos} at token '{p.value}' in line {p.lexer.lineno}.")
else:
print('Syntax error at end of input.')
Explanation: The method p_error is called if a syntax error occurs. The argument p is the token that could not be read. If p is None then there is a syntax error at the end of input.
End of explanation
parser = yacc.yacc(write_tables=False, debug=True)
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
We set debug to True so that the parse tables are dumped into the file parser.out.
End of explanation
!type parser.out
!cat parser.out
Explanation: Let's look at the action table that is generated.
End of explanation
Names2Values = {}
Explanation: Names2Values is the dictionary that maps variable names to their values. Initially the dictionary is empty, as no variable has been defined yet.
End of explanation
def main():
while True:
s = input('calc> ')
if s == '':
break
yacc.parse(s)
main()
Explanation: The parser is invoked by calling the method yacc.parse(s) where s is a string that is to be parsed.
End of explanation |
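For a non-interactive test (a sketch, run after leaving the interactive loop), the parser can also be driven directly with strings:
yacc.parse('x := 2 + 3 * 4;')  # stores 14.0 under 'x'
yacc.parse('x / 7;')           # prints 2.0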
10,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network Example with Keras
(C) 2018-2019 by Damir Cavar
Version
Step1: We will use numpy as well
Step2: In his tutorial, as linked above, Jason Brownlee suggests that we initialize the random number generator with a fixed number to make sure that the results are the same at every run, since the learning algorithm makes use of a stochastic process. We initialize the random number generator with 7
Step3: The data-set suggested in Brownlee's tutorial is Pima Indians Diabetes Data Set. The required file can be downloaded using this link. It is available in the local data subfolder with the .csv filename-ending.
Step4: The data is organized as follows
Step5: Just to verify the content
Step6: We will define our model in the next step. The first layer is the input layer. It is set to have 8 inputs for the 8 variables using the attribute input_dim. The Dense class defines the layers to be fully connected. The number of neurons is specified as the first argument to the initializer. We are choosing also the activation function using the activation attribute. This should be clear from the presentations in class and other examples and discussions on related notebooks here in this collection. The output layer consists of one neuron and uses the sigmoid activation function to return a weight between $0$ and $1$
Step7: The defined network needs to be compiled. The compilation process creates a specific implementation of it using the backend (e.g. TensorFlow or Theano), decides whether a GPU or a CPU will be used, which loss and optimization function to select, and which metrics should be collected during training. In this case we use the binary cross-entropy as a loss function, the efficient implementation of a gradient decent algorithm called Adam, and we store the classification accuracy for the output and analysis.
Step8: The training of the model is achieved by calling the fit method. The parameters specify the input matrix and output vector in our case, as well as the number of iterations through the data set for training, called epochs. The batch size specifies the number of instances that are evaluated before an update of the parameters is applied.
Step9: The evaluation is available via the evaluate method. In our case we print out the accuracy
Step10: We can now make predictions by calling the predict method with the input matrix as a parameter. In this case we are using the training data to predict the output classifier. This is in general not a good idea. Here it just serves the purpose of showing how the methods are used | Python Code:
from keras.models import Sequential
from keras.layers import Dense
Explanation: Neural Network Example with Keras
(C) 2018-2019 by Damir Cavar
Version: 1.1, January 2019
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 and 2019 at Indiana University.
This material is based on Jason Brownlee's tutorial Develop Your First Neural Network in Python With Keras Step-By-Step. See for more details and explanations this page. All copyrights are his, except on a few small comments that I added.
Keras is a neural network module that is running on top of TensorFlow (among others). Make sure that you install TensorFlow on your system. Go to the Keras homepage and install the module in Python. This example also requires that Scipy and Numpy are installed in your system.
Introduction
As explained in the above tutorial, the steps are:
loading data (prepared for the process, that is vectorized and formatted)
defining a model (layers)
compiling the model
fitting the model
evaluating the model
We have to import the necessary modules from Keras:
End of explanation
import numpy
Explanation: We will use numpy as well:
End of explanation
numpy.random.seed(7)
Explanation: In his tutorial, as linked above, Jason Brownlee suggests that we initialize the random number generator with a fixed number to make sure that the results are the same at every run, since the learning algorithm makes use of a stochastic process. We initialize the random number generator with 7:
End of explanation
dataset = numpy.loadtxt("data/pima-indians-diabetes.csv", delimiter=",")
Explanation: The data-set suggested in Brownlee's tutorial is the Pima Indians Diabetes Data Set. The required file can be downloaded using this link. It is available in the local data subfolder with the .csv filename ending.
End of explanation
X = dataset[:,0:8]
Y = dataset[:,8]
Explanation: The data is organized as follows: the first 8 columns per row define the features, that is the input variables for the neural network. The last column defines the output as a binary value of $0$ or $1$. We can separate those two from the dataset into two variables:
End of explanation
X
Y
Explanation: Just to verify the content:
End of explanation
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
Explanation: We will define our model in the next step. The first layer is the input layer. It is set to have 8 inputs for the 8 variables using the attribute input_dim. The Dense class defines the layers to be fully connected. The number of neurons is specified as the first argument to the initializer. We are choosing also the activation function using the activation attribute. This should be clear from the presentations in class and other examples and discussions on related notebooks here in this collection. The output layer consists of one neuron and uses the sigmoid activation function to return a weight between $0$ and $1$:
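If you want to double-check the resulting architecture, Keras can print a per-layer summary of output shapes and parameter counts. This is optional and purely informational:
model.summary()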
End of explanation
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Explanation: The defined network needs to be compiled. The compilation process creates a specific implementation of it using the backend (e.g. TensorFlow or Theano), decides whether a GPU or a CPU will be used, which loss and optimization function to select, and which metrics should be collected during training. In this case we use the binary cross-entropy as a loss function, the efficient implementation of a gradient decent algorithm called Adam, and we store the classification accuracy for the output and analysis.
End of explanation
model.fit(X, Y, epochs=150, batch_size=4)
Explanation: The training of the model is achieved by calling the fit method. The parameters specify the input matrix and output vector in our case, as well as the number of iterations through the data set for training, called epochs. The batch size specifies the number of instances that are evaluated before an update of the parameters is applied.
End of explanation
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
Explanation: The evaluation is available via the evaluate method. In our case we print out the accuracy:
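Note that evaluate returns one value per entry in model.metrics_names: index 0 is the loss and index 1 is the accuracy we requested. As a small optional addition, the loss can be printed the same way:
print("%s: %.4f" % (model.metrics_names[0], scores[0]))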
End of explanation
predictions = model.predict(X)
rounded = [round(x[0]) for x in predictions]
print(rounded)
Explanation: We can now make predictions by calling the predict method with the input matrix as a parameter. In this case we are using the training data to predict the output classifier. This is in general not a good idea. Here it just serves the purpose of showing how the methods are used:
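For reference, here is a sketch of the more honest procedure, evaluating on held-out data with scikit-learn's train_test_split. The variable names are hypothetical and this is not part of Brownlee's original tutorial; it only illustrates the idea.
from sklearn.model_selection import train_test_split

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=7)
held_out_model = Sequential()
held_out_model.add(Dense(12, input_dim=8, activation='relu'))
held_out_model.add(Dense(8, activation='relu'))
held_out_model.add(Dense(1, activation='sigmoid'))
held_out_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
held_out_model.fit(X_tr, Y_tr, epochs=150, batch_size=4, verbose=0)
print(held_out_model.evaluate(X_te, Y_te))   # loss and accuracy on unseen data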
End of explanation |
10,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
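To make the "lookup is just a shortcut" point concrete, here is a tiny numpy sketch with toy numbers (not the real embedding matrix): multiplying a one-hot row vector by a weight matrix returns exactly the same thing as plain row indexing.
import numpy as np

W = np.arange(12).reshape(4, 3)        # a toy 4-word vocabulary with 3 "hidden units"
one_hot = np.array([0, 0, 1, 0])       # the word with integer id 2
print(one_hot @ W)                     # matrix multiplication: [6 7 8]
print(W[2])                            # direct lookup of row 2: [6 7 8]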
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'gi',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
{k: vocab_to_int[k] for k in list(vocab_to_int.keys())[:30]}
"{:,}".format(len(int_words))
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
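As a quick sanity check of that ordering (per the description above, "the" should be the most frequent token and therefore map to 0):
print(vocab_to_int['the'], int_to_vocab[0])   # expect: 0 the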
End of explanation
from collections import Counter
t = 1e-5
word_counts = Counter(int_words)
amount_of_total_words = len(int_words)
def subsampling_probability(threshold, current_word_count):
word_relative_frequency = current_word_count / amount_of_total_words
return 1 - np.sqrt(threshold / word_relative_frequency)
probability_per_word = { current_word: subsampling_probability(t, current_word_count) for current_word, current_word_count in word_counts.items() }
train_words = [ i for i in int_words if np.random.random() > probability_per_word[i] ]
print("Words dropped: {:,}, final size: {:,}".format(len(int_words) - len(train_words), len(train_words)))
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
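Before writing the loop, it can help to plug a couple of frequencies into the formula by hand (a side calculation, not part of the solution above): with t = 1e-5, a word making up 1% of the corpus is dropped about 96.8% of the time, while a word at the threshold frequency gets probability 0 and is always kept.
t = 1e-5
for freq in [1e-2, 1e-4, 1e-5]:
    print(freq, 1 - (t / freq) ** 0.5)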
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
r = np.random.randint(1, window_size + 1)
min_index = max(idx - r, 0)
max_index = idx + r
words_in_batch = words[min_index:idx] + words[idx + 1:max_index + 1] # avoid returning the current word on idx
return list(set(words_in_batch)) # avoid duplicates
get_target([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4, 5)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
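A quick way to see what the generator yields, using the train_words list and the get_target function defined above (just an illustrative peek, the batch size here is tiny on purpose):
first_x, first_y = next(get_batches(train_words, batch_size=4, window_size=5))
print(first_x[:10])
print(first_y[:10])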
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, (None))
labels = tf.placeholder(tf.int32, (None, None))
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 400
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
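The "cosine distance" trick works because the embedding rows are L2-normalized first, so a plain matrix product already gives cosine similarities. A tiny illustration of that identity with toy vectors (numpy is already imported above; these numbers have nothing to do with the real embeddings):
a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)
print(a_n @ b_n)   # 24 / 25 = 0.96, the cosine of the angle between a and b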
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
10,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 5b
Step1: Lab Task #1
Step2: Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
Step3: Lab Task #2
Step4: Lab Task #3
Step5: The predictions for the four instances were
Step6: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
Step7: Lab Task #4 | Python Code:
import os
Explanation: LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Set up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
In this notebook, we'll deploy our Keras model to Cloud AI Platform and create predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
End of explanation
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "cloud-training-demos" # TODO: Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # TODO: Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.0"
%%bash
gcloud config set compute/region $REGION
Explanation: Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
Explanation: Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last lab. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
End of explanation
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=# TODO: Add GCS path to saved_model.pb file.
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=1.14 \
--python-version=3.5
Explanation: Lab Task #2: Deploy trained model.
Deploying the trained model to act as a REST web service is a simple gcloud call. Complete #TODO by providing the location of the saved_model.pb file to the Cloud AI Platform model deployment service. The deployment will take a few minutes.
End of explanation
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = # TODO: Add model name
MODEL_VERSION = # TODO: Add model version
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
# TODO: Create another instance
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
Explanation: Lab Task #3: Use model to make online prediction.
Complete __#TODO__s for both the Python and gcloud Shell API methods of calling our deployed model on Cloud AI Platform for online prediction.
Python API
We can use the Python API to send a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses are the order of the instances.
End of explanation
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
# TODO: Create another instance
Explanation: The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
gcloud shell API
Instead we could use the gcloud shell API. Create a newline delimited JSON file with one instance per line and submit using gcloud.
End of explanation
%%bash
gcloud ai-platform predict \
--model=# TODO: Add model name \
--json-instances=inputs.json \
--version=# TODO: Add model version
Explanation: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
End of explanation
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=# TODO: Add model name \
--version=# TODO: Add model version
Explanation: Lab Task #4: Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction. Complete __#TODO__s so we can call our deployed model on Cloud AI Platform for batch prediction.
End of explanation |
10,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Make subsequent NN runs reproducible.
Step4: Read data
Word embedding lookup matrix.
Step5: Padded sequences of word indices for every question.
Step6: Magic features.
Step7: Word embedding properties.
Step9: Define models
Step10: Partition the data
Step11: Create placeholders for out-of-fold predictions.
Step12: Define hyperparameters
Step13: The path where the best weights of the current model will be saved.
Step14: Fit the folds and compute out-of-fold predictions
Step15: Save features
Step16: Explore | Python Code:
from pygoose import *
import gc
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import *
from keras import backend as K
from keras.models import Model, Sequential
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import EarlyStopping, ModelCheckpoint
Explanation: Feature: Out-Of-Fold Predictions from a Multi-Layer Perceptron (+Magic Inputs)
In addition to the MLP architecture, we'll append some of the leaky features to the intermediate feature layer.
<img src="assets/mlp-with-magic.png" alt="Network Architecture" style="height: 900px;" />
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'oofp_nn_mlp_with_magic'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
Explanation: Make subsequent NN runs reproducible.
End of explanation
embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')
Explanation: Read data
Word embedding lookup matrix.
End of explanation
X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
X_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')
X_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
X_test_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')
y_train = kg.io.load(project.features_dir + 'y_train.pickle')
Explanation: Padded sequences of word indices for every question.
End of explanation
magic_feature_lists = [
'magic_frequencies',
'magic_cooccurrence_matrix',
]
X_train_magic, X_test_magic, _ = project.load_feature_lists(magic_feature_lists)
X_train_magic = X_train_magic.values
X_test_magic = X_test_magic.values
scaler = StandardScaler()
scaler.fit(np.vstack([X_train_magic, X_test_magic]))
X_train_magic = scaler.transform(X_train_magic)
X_test_magic = scaler.transform(X_test_magic)
Explanation: Magic features.
End of explanation
EMBEDDING_DIM = embedding_matrix.shape[-1]
VOCAB_LENGTH = embedding_matrix.shape[0]
MAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]
print(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)
Explanation: Word embedding properties.
End of explanation
def create_model_question_branch():
input_q = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_q = Embedding(
VOCAB_LENGTH,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False,
)(input_q)
timedist_q = TimeDistributed(Dense(
EMBEDDING_DIM,
activation='relu',
))(embedding_q)
lambda_q = Lambda(
lambda x: K.max(x, axis=1),
output_shape=(EMBEDDING_DIM, )
)(timedist_q)
output_q = lambda_q
return input_q, output_q
def create_model(params):
q1_input, q1_output = create_model_question_branch()
q2_input, q2_output = create_model_question_branch()
magic_input = Input(shape=(X_train_magic.shape[-1], ))
merged_inputs = concatenate([q1_output, q2_output, magic_input])
dense_1 = Dense(params['num_dense_1'])(merged_inputs)
bn_1 = BatchNormalization()(dense_1)
relu_1 = Activation('relu')(bn_1)
dense_2 = Dense(params['num_dense_2'])(relu_1)
bn_2 = BatchNormalization()(dense_2)
relu_2 = Activation('relu')(bn_2)
dense_3 = Dense(params['num_dense_3'])(relu_2)
bn_3 = BatchNormalization()(dense_3)
relu_3 = Activation('relu')(bn_3)
dense_4 = Dense(params['num_dense_4'])(relu_3)
bn_4 = BatchNormalization()(dense_4)
relu_4 = Activation('relu')(bn_4)
bn_final = BatchNormalization()(relu_4)
output = Dense(1, activation='sigmoid')(bn_final)
model = Model(
inputs=[q1_input, q2_input, magic_input],
outputs=output,
)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy']
)
return model
def predict(model, X_q1, X_q2, X_magic):
"""Mirror the pairs, compute two separate predictions, and average them."""
y1 = model.predict([X_q1, X_q2, X_magic], batch_size=1024, verbose=1).reshape(-1)
y2 = model.predict([X_q2, X_q1, X_magic], batch_size=1024, verbose=1).reshape(-1)
return (y1 + y2) / 2
Explanation: Define models
End of explanation
NUM_FOLDS = 5
kfold = StratifiedKFold(
n_splits=NUM_FOLDS,
shuffle=True,
random_state=RANDOM_SEED
)
Explanation: Partition the data
End of explanation
y_train_oofp = np.zeros_like(y_train, dtype='float64')
y_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS))
Explanation: Create placeholders for out-of-fold predictions.
End of explanation
BATCH_SIZE = 2048
MAX_EPOCHS = 200
model_params = {
'num_dense_1': 400,
'num_dense_2': 200,
'num_dense_3': 400,
'num_dense_4': 100,
}
Explanation: Define hyperparameters
End of explanation
model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'
Explanation: The path where the best weights of the current model will be saved.
End of explanation
%%time
# Iterate through folds.
for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):
# Augment the training set by mirroring the pairs.
X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])
X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix_train]])
X_fold_train_magic = np.vstack([X_train_magic[ix_train], X_train_magic[ix_train]])
X_fold_val_q1 = np.vstack([X_train_q1[ix_val], X_train_q2[ix_val]])
X_fold_val_q2 = np.vstack([X_train_q2[ix_val], X_train_q1[ix_val]])
X_fold_val_magic = np.vstack([X_train_magic[ix_val], X_train_magic[ix_val]])
# Ground truth should also be "mirrored".
y_fold_train = np.concatenate([y_train[ix_train], y_train[ix_train]])
y_fold_val = np.concatenate([y_train[ix_val], y_train[ix_val]])
print()
print(f'Fitting fold {fold_num + 1} of {kfold.n_splits}')
print()
# Compile a new model.
model = create_model(model_params)
# Train.
model.fit(
[X_fold_train_q1, X_fold_train_q2, X_fold_train_magic], y_fold_train,
validation_data=([X_fold_val_q1, X_fold_val_q2, X_fold_val_magic], y_fold_val),
batch_size=BATCH_SIZE,
epochs=MAX_EPOCHS,
verbose=1,
callbacks=[
# Stop training when the validation loss stops improving.
EarlyStopping(
monitor='val_loss',
min_delta=0.001,
patience=3,
verbose=1,
mode='auto',
),
# Save the weights of the best epoch.
ModelCheckpoint(
model_checkpoint_path,
monitor='val_loss',
save_best_only=True,
verbose=2,
),
],
)
# Restore the best epoch.
model.load_weights(model_checkpoint_path)
# Compute out-of-fold predictions.
y_train_oofp[ix_val] = predict(model, X_train_q1[ix_val], X_train_q2[ix_val], X_train_magic[ix_val])
y_test_oofp[:, fold_num] = predict(model, X_test_q1, X_test_q2, X_test_magic)
# Clear GPU memory.
K.clear_session()
del X_fold_train_q1, X_fold_train_q2, X_fold_train_magic
del X_fold_val_q1, X_fold_val_q2, X_fold_val_magic
del model
gc.collect()
cv_score = log_loss(y_train, y_train_oofp)
print('CV score:', cv_score)
Explanation: Fit the folds and compute out-of-fold predictions
End of explanation
feature_names = [feature_list_id]
features_train = y_train_oofp.reshape((-1, 1))
features_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))
project.save_features(features_train, features_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation
pd.DataFrame(features_test).plot.hist()
Explanation: Explore
End of explanation |
10,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
dftdecompose - Illustrate the decomposition of the image into primitive 2-D waves
This demonstration illustrates the decomposition of a step function image into cosine waves of increasing frequencies.
Step1: Demonstration of the cumulative partial reconstruction of the primitive "tiles" of the synthetic image above. Each primitive tile is displayed by reconstructing the iDFT using only the values F(u,0) and F(-u,0) for u between 0 and M-1. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft2
from numpy.fft import ifft2
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = 50 * np.ones((128,128))
f[:, : 32] = 200
f[:,64+32: ] = 200
plt.imshow(f,cmap='gray')
plt.title('Original image')
plt.colorbar()
plt.show()
Explanation: dftdecompose - Illustrate the decomposition of the image into primitive 2-D waves
This demonstration illustrates the decomposition of a step function image into cosine waves of increasing frequencies.
End of explanation
H,W = f.shape
N = W
rows = (W//2)//(2//2) + 1
plt.figure(figsize=[4,rows*2])
#1) Compute F = DFT(f) - the Discrete Fourier Transform;
F = fft2(f)
E = ia.dftview(F)
ia.adshow(E, title='DFT')
#2) Create a zeroed Faux with the same type and shape as F. In this Faux, first set Faux[0,0] = F[0,0] and compute the inverse of Faux.
Faux = np.zeros_like(F)
Faux[0,0] = F[0,0]
plt.subplot(rows,2,1)
plt.imshow(np.real(ifft2(Faux)), cmap='gray');
plt.title("DFT inverse (u=0)")
Fsma = np.zeros_like(F)
Fsma = Fsma + Faux
plt.subplot(rows,2,2)
plt.imshow(np.real(ifft2(Fsma)),cmap='gray')
plt.title("Cumulative (u=%s)" % 0)
#3) repeat with u ranging from 1 to N/2: also copy F[0,u] and F[0,-u] and compute the inverse. Remember that -u = N-u, since F is periodic.
# This way you will be showing the gradual reconstruction of the image, adding more and more cosine waves.
# I am also asking to show the individual cosine waves that will be summed gradually.
row_count = 2;
for u in range(1,N//2):
Faux = np.zeros_like(F)
Faux[:,u] = F[:,u]
Faux[:,N-u] = F[:,N-u] #-u = N-u
row_count = row_count + 1;
plt.subplot(rows,2,row_count)
plt.imshow(np.real(ifft2(Faux)), cmap='gray');
plt.title("DFT inverse (u=%s)" % u)
#print('\nFaux: \n', Faux)
row_count = row_count + 1;
Fsma = Fsma + Faux
plt.subplot(rows,2,row_count)
plt.imshow(np.real(ifft2(Fsma)),cmap='gray')
plt.title("Cumulative (u=%s)" % u)
#print('\nFsma: \n', Fsma)
plt.tight_layout()
plt.show()
diff = np.abs(np.abs(ifft2(Fsma)) - f).sum() # compare the original and accumulated image
print('Difference between original image and reconstructed: ', diff, " (almost zero)")
Explanation: Demonstration of the cumulative partial reconstruction of the primitive "tiles" of the synthetic image above. Each primitive tile is displayed by reconstructing the iDFT using only the values F(u,0) and F(-u,0) for u between 0 and M-1.
End of explanation |
10,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GeometryPlot
Step1: First of all, we will create a geometry to work with
Step2: GeometryPlot allows you to quickly visualize a geometry. You can create a GeometryPlot out of a geometry very easily
Step3: Now let's see what we got
Step4: Plotting in 3D, 2D and 1D
The 3D view is great, but for big geometries it can take some time to render. If we have a 2d material, a 2D view might be more practical instead. We can get it by specifying the axes that we want
Step5: The next section goes more in depth on what the axes setting accepts. The important part for now is that asking for two axes gets you a 2D representation. Likewise, asking for 1 axis gets you a 1D representation
Step6: Notice how asking for a 1D representation leaves the Y axis of the plot at your disposal. You can control the values in the second axis using the dataaxis_1d setting.
It can be an array that explicitly sets the values
Step7: Or a function that accepts the projected coordinates and returns the values.
Step8: Asking for three axes would bring us back to the 3D representation
Step9: Specifying the axes
There are many ways in which you may want to display the coordinates of your geometry. The most common one is to display the cartesian coordinates. You indicate that you want cartesian coordinates by passing (+-){"x", "y", "z"}. You can pass them as a list
Step10: But it is usually more convenient to pass them as a multicharacter string
Step11: Notice that you can order axes in any way you want. The first one will go to the X axis of the plot, and the second to the Y axis
Step12: You are not limited to cartesian coordinates though. Passing (+-){"a", "b", "c"} will display the fractional coordinates
Step13: And you can also pass an arbitrary direction as an axis
Step14: In this case, we have projected the coordinates into the [1,1,0] and [1, -1, 0] directions. Notice that the modulus of the vector is important for the scaling. See for example what happens when we scale the second vector by a factor of two
Step15: Finally, you can even mix the different possibilities!
Step16: To summarize the different possibilities
Step17: If we want to see the "xy" axes, we would be viewing the structure from the top
Step18: but if we set the axes to yx, -xy or x-y, we will see it from the bottom
Step19: That is, we are flipping the geometry. In the above example we are doing it around the Y axis. Notice that trying to view xy from the -z perspective would show you a mirrored view of your structure!
<div class="alert alert-info">
Non-cartesian axes
The above behavior is also true for all valid axes that you can pass. However, we have made lattice vectors follow the same rules as cartesian vectors. That is, `abc` cross products follow the rules of `xyz` cross products. As a result, if you ask for `axes="ab"` you will see the structure from the `c` perspective.
</div>
Toggling bonds, atoms and cell
You might have noticed that, by default, the cell, atoms and bonds are displayed. Thanks to plotly's capabilities, you can interactively toggle them by clicking at the names in the legend, which is great!
However, if you want to make sure they are not displayed in the first place, you can set the show_bonds, show_cell and show_atoms settings to False.
Step20: Picking which atoms to display
The atoms setting of GeometryPlot allows you to pick which atoms to display. It accepts exactly the same possibilities as the atoms argument in Geometry's methods.
Therefore, you can ask for certain indices
Step21: or use sisl categories to filter the atoms, for example.
We can use it to display only those atoms that have 3 neighbours
Step22: Notice that when we picked particular atoms, only the bonds of those atoms are displayed. You can change this by using the bind_bonds_to_ats setting.
Step23: In fact, when we set show_atoms to False, all that the plot does is to act as if atoms=[] and bind_bonds_to_ats=False.
Scaling atoms
In the following section you can find extensive detail about styling atoms, but if you just want a quick rescaling of all atoms, atoms_scale is your best ally. It is very easy to use
Step24: Custom styles for atoms.
It is quite common that you have an atom-resolved property that you want to display. With GeometryPlot this is extremely easy
Step25: In the following cell we show how these properties accept multiple values. In this case, we want to give different sizes to each atom. If the number of values passed is less than the number of atoms, the values are tiled
Step26: In this case, we have drawn atoms with alternating size of 0.6 and 0.8.
The best part about atoms_style is that you can very easily give different styles to selections of atoms. In this case, it is enough to pass a list of style specifications, including (optionally) the "atoms" key to select the atoms to which these styles will be applied
Step27: Notice these aspects
Step28: Finally, "atoms" accepts anything that Geometry can sanitize, so it can accept categories, for example. This is great because it gives you a great power to easily control complex styling situations
Step29: In this case, we color all atoms whose fractional X coordinate is below 0.4 (half the ribbon) in orange. We also give some transparency to odd atoms.
As a final remark, colors can also be passed as values. In this case, they are mapped to colors by a colorscale, specified in atoms_colorscale.
Step30: Notice however that, for now, you can not mix values with strings and there is only one colorscale for all atoms.
Note that everything that we've done up to this moment is perfectly valid for the 3d view, we are just using the 2d view for convenience.
Step31: Custom styles for bonds
Just as atoms_style, there is a setting that allows you to tweak the styling of the bonds
Step32: <div class="alert alert-info">
Coloring individual bonds
It is **not possible to style bonds individually** yet, e.g. using a colorscale. However, it is one of the goals to improve ``GeometryPlot`` and some thought has already been put into how to design the interface to make it as usable as possible. Rest assured that when the right interface is found, coloring individual bonds will be allowed, as well as drawing bicolor bonds, as most rendering softwares do.
</div>
Step33: Drawing arrows
It is very common that you want to display arrows on the atoms, to show some vector property such as a force or an electric field.
This can be specified quite easily in sisl with the arrows setting. All the information of the arrows that you want to draw is passed as a dictionary, where "data" is the most important key and there are other optional keys like name, color, width, scale, arrowhead_scale and arrowhead_angle that control the aesthetics.
Step34: Notice how we only provided one vector and it was used for all our atoms. We can either do that or pass all the data. Let's build a fake forces array for the sake of this example
Step35: Since there might be more than one vector property to display, you can also pass a list of arrow specifications, and each one will be drawn separately.
Step36: Much like we did in atoms_style, we can specify the atoms for which we want the arrow specification to take effect by using the "atoms" key.
Step37: Finally, notice that in 2D and 1D views, and for axes other than {"x", "y", "z"}, the arrows get projected just as the rest of the coordinates
Step38: <div class="alert alert-warning">
Coloring individual arrows
It is still **not possible to color arrows individually**, e.g. using a colorscale. Future developments will probably work towards this goal.
</div>
Drawing supercells
All the functionality showcased in this notebook is compatible with displaying supercells. The number of supercells displayed in each direction is controlled by the nsc setting
Step39: Notice however that you can't specify different styles or arrows for the supercell atoms, they are just copied! Since what we are displaying here are supercells of a periodic system, this should make sense. If you want your supercells to have different specifications, tile the geometry before creating the plot.
This next cell is just to create the thumbnail for the notebook in the docs | Python Code:
import sisl
import sisl.viz
import numpy as np
Explanation: GeometryPlot
End of explanation
geom = sisl.geom.graphene_nanoribbon(9)
Explanation: First of all, we will create a geometry to work with
End of explanation
# GeometryPlot is the default plot of a geometry, so one can just do
plot = geom.plot()
Explanation: GeometryPlot allows you to quickly visualize a geometry. You can create a GeometryPlot out of a geometry very easily:
End of explanation
plot
Explanation: Now let's see what we got:
End of explanation
plot.update_settings(axes="xy")
Explanation: Plotting in 3D, 2D and 1D
The 3D view is great, but for big geometries it can take some time to render. If we have a 2d material, a 2D view might be more practical instead. We can get it by specifying the axes that we want:
End of explanation
plot.update_settings(axes="x")
Explanation: The next section goes more in depth on what the axes setting accepts. The important part for now is that asking for two axes gets you a 2D representation. Samewise, asking for 1 axis gets you a 1D representation:
End of explanation
plot.update_settings(axes="x", dataaxis_1d=plot.geometry.atoms.Z)
Explanation: Notice how asking for a 1D representation leaves the Y axis of the plot at your disposal. You can control the values in the second axis using the dataaxis_1d setting.
It can be an array that explicitly sets the values:
End of explanation
plot.update_settings(dataaxis_1d=np.sin)
Explanation: Or a function that accepts the projected coordinates and returns the values.
End of explanation
plot.update_settings(axes="xyz")
Explanation: Asking for three axes would bring us back to the 3D representation:
End of explanation
plot.update_settings(axes=["x", "y"])
Explanation: Specifying the axes
There are many ways in which you may want to display the coordinates of your geometry. The most common one is to display the cartesian coordinates. You indicate that you want cartesian coordinates by passing (+-){"x", "y", "z"}. You can pass them as a list:
End of explanation
plot.update_settings(axes="xy")
Explanation: But it is usually more convenient to pass them as a multicharacter string:
End of explanation
plot.update_settings(axes="yx")
Explanation: Notice that you can order axes in any way you want. The first one will go to the X axis of the plot, and the second to the Y axis:
End of explanation
plot.update_settings(axes="ab")
Explanation: You are not limited to cartesian coordinates though. Passing (+-){"a", "b", "c"} will display the fractional coordinates:
End of explanation
plot.update_settings(axes=[[1,1,0], [1, -1, 0]])
Explanation: And you can also pass an arbitrary direction as an axis:
End of explanation
plot.update_settings(axes=[[1,1,0], [2, -2, 0]])
Explanation: In this case, we have projected the coordinates into the [1,1,0] and [1, -1, 0] directions. Notice that the modulus of the vector is important for the scaling. See for example what happens when we scale the second vector by a factor of two:
End of explanation
plot.update_settings(axes=["x", [1,1,0]])
Explanation: Finally, you can even mix the different possibilities!
End of explanation
bilayer = sisl.geom.bilayer(top_atoms="C", bottom_atoms=["B", "N"], stacking="AA")
Explanation: To summarize the different possibilities:
(+-){"x", "y", "z"}: The cartesian coordinates are displayed.
(+-){"a", "b", "c"}: The fractional coordinates are displayed. Same for {0,1,2}.
np.array of shape (3, ): The coordinates are projected into that direction. If two directions are passed, the coordinates are not projected to each axis separately. The displayed coordinates are then the coefficients of the linear combination to get that point (or the projection of that point into the plane formed by the two axes).
<div class="alert alert-warning">
Some non-obvious behavior
**Fractional coordinates are only displayed if all axes are lattice vectors**. Otherwise, the plot works as if you had passed the direction of the lattice vector. Also, for now, the **3D representation only displays cartesian coordinates**.
</div>
2D perspective
It is easy to overlook that the axes you choose determine your point of view. For example, if you choose to view "xy", the z axis will be pointing "outside of the screen", while if you had chosen "yx" the z axis will point "inside the screen". This affects the depth of the atoms, i.e. which atoms are on top and which are on the bottom.
To visualize it, we build a bilayer of graphene and boron nitride:
End of explanation
bilayer.plot(axes="xy")
Explanation: If we want to see the "xy" axes, we would be viewing the structure from the top:
End of explanation
bilayer.plot(axes="-xy")
Explanation: but if we set the axes to yx, -xy or x-y, we will see it from the bottom:
End of explanation
plot.update_settings(axes="xy", show_cell=False, show_atoms=False)
Explanation: That is, we are flipping the geometry. In the above example we are doing it around the Y axis. Notice that trying to view xy from the -z perspective would show you a mirrored view of your structure!
<div class="alert alert-info">
Non-cartesian axes
The above behavior is also true for all valid axes that you can pass. However, we have made lattice vectors follow the same rules as cartesian vectors. That is, `abc` cross products follow the rules of `xyz` cross products. As a result, if you ask for `axes="ab"` you will see the structure from the `c` perspective.
</div>
Toggling bonds, atoms and cell
You might have noticed that, by default, the cell, atoms and bonds are displayed. Thanks to plotly's capabilities, you can interactively toggle them by clicking at the names in the legend, which is great!
However, if you want to make sure they are not displayed in the first place, you can set the show_bonds, show_cell and show_atoms settings to False.
End of explanation
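As a quick sketch of the note above (reusing only calls already introduced), lattice-vector axes follow the same flipping rule: "ab" should show the bilayer from the +c side, while "ba" shows the flipped view.
bilayer.plot(axes="ab")   # viewed from the +c perspective
bilayer.plot(axes="ba")   # flipped view, analogous to "yx" for cartesian axes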
plot.update_settings(atoms=[1,2,3,4,5], show_atoms=True, show_cell="axes")
#show_cell accepts "box", "axes" and False
Explanation: Picking which atoms to display
The atoms setting of GeometryPlot allows you to pick which atoms to display. It accepts exactly the same possibilities as the atoms argument in Geometry's methods.
Therefore, you can ask for certain indices:
End of explanation
plot.update_settings(atoms={"neighbours": 3}, show_cell="box")
Explanation: or use sisl categories to filter the atoms, for example.
We can use it to display only those atoms that have 3 neighbours:
End of explanation
plot.update_settings(bind_bonds_to_ats=False)
plot = plot.update_settings(atoms=None, bind_bonds_to_ats=True)
Explanation: Notice that when we picked particular atoms, only the bonds of those atoms are displayed. You can change this by using the bind_bonds_to_ats setting.
End of explanation
plot.update_settings(atoms_scale=0.6)
plot.update_settings(atoms_scale=1)
Explanation: In fact, when we set show_atoms to False, all that the plot does is to act as if atoms=[] and bind_bonds_to_ats=False.
Scaling atoms
In the following section you can find extensive detail about styling atoms, but if you just want a quick rescaling of all atoms, atoms_scale is your best ally. It is very easy to use:
End of explanation
plot.update_settings(atoms=None, axes="yx", atoms_style={"color": "green", "size": 14})
Explanation: Custom styles for atoms.
It is quite common that you have an atom-resolved property that you want to display. With GeometryPlot this is extremely easy :)
All styles are controlled by the atoms_style setting. For example, if we want to color all atoms in green and give them a size of 14 we can do it like this:
End of explanation
plot.update_settings(atoms_style={"color": "green", "size": [12, 20]})
Explanation: In the following cell we show how these properties accept multiple values. In this case, we want to give different sizes to each atom. If the number of values passed is less than the number of atoms, the values are tiled:
End of explanation
plot.update_settings(
atoms_style=[
{"color": "green", "size": [12, 20], "opacity": [1, 0.3]},
{"atoms": [0,1], "color": "orange"}
]
)
Explanation: In this case, we have drawn atoms with alternating sizes of 12 and 20.
The best part about atoms_style is that you can very easily give different styles to selections of atoms. In this case, it is enough to pass a list of style specifications, including (optionally) the "atoms" key to select the atoms to which these styles will be applied:
End of explanation
plot.update_settings(atoms_style=[{"atoms": [0,1], "color": "orange"}])
Explanation: Notice these aspects:
The first specification doesn't contain "atoms", so it applies to all atoms.
Properties that were not specified for atoms [0, 1] are "inherited" from the previous specifications. For example, size of atoms 0 and 1 is still determined by the first style specification.
If some atom is selected in more than one specification, the last one remains, that's why the color is finally set to orange for [0,1].
You don't need to include general styles. For atoms that don't have styles specified the defaults are used:
End of explanation
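To make the precedence rules above concrete, here is a small sketch using only the style keys already shown: the first entry applies to every atom, and when an atom is selected by several entries the last one wins, so atom 0 ends up blue.
plot.update_settings(atoms_style=[
    {"color": "green"},               # applies to all atoms
    {"atoms": [0], "color": "red"},   # overrides the color for atom 0
    {"atoms": [0], "color": "blue"}   # atom 0 selected again: the last specification wins
])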
plot.update_settings(atoms_style=[
{"atoms": {"fx": (None, 0.4)}, "color": "orange"},
{"atoms": sisl.geom.AtomOdd(), "opacity":0.3},
])
Explanation: Finally, "atoms" accepts anything that Geometry can sanitize, so it can accept categories, for example. This is great because it gives you a great power to easily control complex styling situations:
End of explanation
# Get the Y coordinates
y = plot.geometry.xyz[:,1]
# And color atoms according to it
plot.update_settings(atoms_style=[
{"color": y},
{"atoms": sisl.geom.AtomOdd(), "opacity":0.3},
], atoms_colorscale="viridis")
Explanation: In this case, we color all atoms whose fractional X coordinate is below 0.4 (half the ribbon) in orange. We also give some transparency to odd atoms.
As a final remark, colors can also be passed as values. In this case, they are mapped to colors by a colorscale, specified in atoms_colorscale.
End of explanation
plot.update_settings(axes="xyz")
Explanation: Notice however that, for now, you can not mix values with strings and there is only one colorscale for all atoms.
Note that everything that we've done up to this moment is perfectly valid for the 3d view, we are just using the 2d view for convenience.
End of explanation
plot.update_settings(axes="yx", bonds_style={"color": "orange", "width": 5, "opacity": 0.5})
Explanation: Custom styles for bonds
Just as atoms_style, there is a setting that allows you to tweak the styling of the bonds: bonds_style. Unlike atoms_style, for now only one style specification can be provided. That is, bonds_style only accepts a dictionary, not a list of dictionaries. The dictionary can contain the following keys: color, width and opacity, but you don't need to provide all of them.
End of explanation
plot = plot.update_settings(axes="xyz", bonds_style={})
Explanation: <div class="alert alert-info">
Coloring individual bonds
It is **not possible to style bonds individually** yet, e.g. using a colorscale. However, it is one of the goals to improve ``GeometryPlot`` and some thought has already been put into how to design the interface to make it as usable as possible. Rest assured that when the right interface is found, coloring individual bonds will be allowed, as well as drawing bicolor bonds, as most rendering softwares do.
</div>
End of explanation
plot.update_settings(arrows={"data": [0,0,2], "name": "Upwards force"})
Explanation: Drawing arrows
It is very common that you want to display arrows on the atoms, to show some vector property such as a force or an electric field.
This can be specified quite easily in sisl with the arrows setting. All the information of the arrows that you want to draw is passed as a dictionary, where "data" is the most important key and there are other optional keys like name, color, width, scale, arrowhead_scale and arrowhead_angle that control the aesthetics.
End of explanation
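The optional aesthetic keys listed above can be combined in the same dictionary; a minimal sketch with arbitrarily chosen values:
plot.update_settings(arrows={
    "data": [0, 0, 2], "name": "Upwards force",
    "color": "purple", "width": 3,
    "scale": 0.5, "arrowhead_scale": 0.3
})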
forces = np.linspace([0,0,2], [0,3,1], 18)
plot.update_settings(arrows={"data": forces, "name": "Force", "color": "orange", "width": 4})
Explanation: Notice how we only provided one vector and it was used for all our atoms. We can either do that or pass all the data. Let's build a fake forces array for the sake of this example:
End of explanation
plot.update_settings(arrows=[
{"data": forces, "name": "Force", "color": "orange", "width": 4},
{"data": [0,0,2], "name": "Upwards force", "color": "red"}
])
Explanation: Since there might be more than one vector property to display, you can also pass a list of arrow specifications, and each one will be drawn separately.
End of explanation
plot.update_settings(arrows=[
{"data": forces, "name": "Force", "color": "orange", "width": 4},
{"atoms": {"fy": (0, 0.5)} ,"data": [0,0,2], "name": "Upwards force", "color": "red"}
])
Explanation: Much like we did in atoms_style, we can specify the atoms for which we want the arrow specification to take effect by using the "atoms" key.
End of explanation
plot.update_settings(axes="yz")
Explanation: Finally, notice that in 2D and 1D views, and for axes other than {"x", "y", "z"}, the arrows get projected just as the rest of the coordinates:
End of explanation
plot.update_settings(axes="xyz", nsc=[2,1,1])
Explanation: <div class="alert alert-warning">
Coloring individual arrows
It is still **not possible to color arrows individually**, e.g. using a colorscale. Future developments will probably work towards this goal.
</div>
Drawing supercells
All the functionality showcased in this notebook is compatible with displaying supercells. The number of supercells displayed in each direction is controlled by the nsc setting:
End of explanation
thumbnail_plot = plot
if thumbnail_plot:
thumbnail_plot.show("png")
Explanation: Notice however that you can't specify different styles or arrows for the supercell atoms, they are just copied! Since what we are displaying here are supercells of a periodic system, this should make sense. If you want your supercells to have different specifications, tile the geometry before creating the plot.
This next cell is just to create the thumbnail for the notebook in the docs
End of explanation |
10,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network Biology
Coding Assignment 3
Submitted By
Step1: Question 1
Step2: Final Coexpression network (images exported from Cytoscape)
Step3: Degree Distribution for cases <br>
i. Gene Duplication ≪ TFBS Duplication <br>
ii. Gene Duplication ≫ TFBS Duplication <br>
iii. Gene Duplication ≈ TFBS Duplication.
Step6: Analysis and conclusion
When gene duplication is much less frequent than TFBS duplication, most of the genes end up having all the TFBSs in common, which gives all nodes the same degree. When the gene duplication probability is comparable to the TFBS duplication probability, the gene_pool expands and, at the same time, more and more connections get added (as TFBSs are also duplicating). Thus we get a very richly connected network.
Question 2
Step8: Observations
The 1csp and 1pks proteins show high clustering coefficients in the LIN models, suggesting that they have fast folding rates.
A high clustering coefficient makes a protein fold quickly.
Question 3
Step9: Observations
Bartoli's models, when compared with the original RIG models, show that the characteristic path length and the average clustering coefficient are both comparable. The clustering coefficient is a bit lower than the true values.
Step10: Question 4 | Python Code:
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import random
import copy
from Bio.PDB import *
from IPython.display import HTML, display
import tabulate
from __future__ import division
from IPython.display import Image
Explanation: Network Biology
Coding Assignment 3
Submitted By: Divyanshu Srivastava
End of explanation
## Initialization
genes = 25
tfbs_pool = 10
tfbs_per_gene = 3
gene_pool = {}
for g in range(genes):
gene_pool[g] = random.sample(range(tfbs_pool), tfbs_per_gene)
steps_of_evolution = 100
p_gene_dublication = 0.05
p_gene_deletion = 0.25
p_tfbs_dublication = 0.45
p_tfbs_deletion = 0.25
p_vector = [p_gene_dublication, p_gene_deletion, p_tfbs_dublication, p_tfbs_deletion]
filename = "network-" + str(genes) + "-" + str(tfbs_pool) + "-" + str(tfbs_per_gene)
filename = filename + "-" + str(p_vector[0]) + "-" + str(p_vector[1]) + "-"
filename = filename + str(p_vector[2])+ "-" + str(p_vector[3]) + ".gml"
## Evolution
for s in range(steps_of_evolution):
r = np.random.choice(len(p_vector), p = p_vector)
if r == 0:
print "Evolution Step : " + str (s) + " Gene Dublication"
gene_to_dublicate = random.sample(range(genes), 1)[0]
gene_pool[genes] = copy.deepcopy(gene_pool[gene_to_dublicate])
genes = genes + 1;
elif r == 1:
print "Evolution Step : " + str (s) + " Gene Deletion"
gene_to_delete = random.sample(range(genes), 1)[0]
for i in range(gene_to_delete, genes-1):
gene_pool[i] = copy.deepcopy(gene_pool[i+1])
gene_pool.pop(genes - 1)
genes = genes-1
if genes == 0:
print "Gene Pool Empty !"
break
elif r == 2:
# print "Evolution Step : " + str (s) + " TFBS Dublication"
tfbs_probability = np.array(range(0, tfbs_pool))
for g in gene_pool:
for value in gene_pool[g]:
tfbs_probability[value] = tfbs_probability[value]+1
tfbs_probability = tfbs_probability.astype(np.float)
tfbs_probability = tfbs_probability / np.sum(tfbs_probability)
tfbs_to_dublicate = np.random.choice(tfbs_pool, p = tfbs_probability)
flag = False
while not flag:
gene_target = np.random.choice(gene_pool.keys())
if tfbs_to_dublicate not in gene_pool[gene_target]:
gene_pool[gene_target].append(tfbs_to_dublicate)
flag = True
else:
# print "Evolution Step : " + str (s) + " TFBS Deletion"
gene_target = np.random.choice(gene_pool.keys())
tfbs_to_delete = np.random.choice(gene_pool[gene_target])
gene_pool[gene_target].remove(tfbs_to_delete)
if len(gene_pool[gene_target]) == 0:
gene_to_delete = gene_target
for i in range(gene_to_delete, genes-1):
gene_pool[i] = copy.deepcopy(gene_pool[i+1])
gene_pool.pop(genes - 1)
genes = genes-1
if genes == 0:
print "Gene Pool Empty !"
break
## Building coevolution network
G = nx.Graph()
for g_a in gene_pool.keys():
for g_b in gene_pool.keys():
if not g_a == g_b:
if len(set(gene_pool[g_a]).intersection(gene_pool[g_b])) > 0:
G.add_edge(g_a, g_b)
nx.write_gml(G, 'gml files/' + filename)
gene_pool
Explanation: Question 1
End of explanation
print "Genes : 15"
print "TFBS : 5"
print "TFBS per gene : 2"
print "p_gene_dublication, p_gene_deletion, p_tfbs_dublication, p_tfbs_deletion"
print "0.25, 0.25, 0.25, 0.25"
Image("gml files/network-15-5-2-0.25-0.25-0.25-0.25.gml.png")
print "Genes : 15"
print "TFBS : 50"
print "TFBS per gene : 2"
print "p_gene_dublication, p_gene_deletion, p_tfbs_dublication, p_tfbs_deletion"
print "0.25, 0.25, 0.25, 0.25"
Image("gml files/network-15-50-2-0.25-0.25-0.25-0.25.gml.png")
print "Genes : 25"
print "TFBS : 10"
print "TFBS per gene : 3"
print "p_gene_dublication, p_gene_deletion, p_tfbs_dublication, p_tfbs_deletion"
print "0.25, 0.25, 0, 0.5"
Image("gml files/network-25-10-3-0.25-0.25-0-0.5.gml.png")
Explanation: Final Coexpression network (images exported from Cytoscape)
End of explanation
plt.subplot(211)
G1 = nx.read_gml('gml files/network-25-10-3-0.05-0.25-0.45-0.25.gml')
plt.hist(G1.degree().values())
plt.title("Gene Duplication < < TFBS Duplication")
plt.show()
plt.subplot(212)
G2 = nx.read_gml('gml files/network-25-10-3-0.49-0.01-0.49-0.01.gml')
plt.hist(G2.degree().values())
plt.title("Gene Duplication ~ TFBS Duplication")
plt.show()
Explanation: Degree Distribution for cases <br>
i. Gene Duplication ≪ TFBS Duplication <br>
ii. Gene Duplication ≫ TFBS Duplication <br>
iii. Gene Duplication ≈ TFBS Duplication.
End of explanation
def get_RIG(coordinates, labels, cut_off):
    """this function computes residue interaction graphs"""
RIG = nx.Graph()
label_ids = range(len(labels))
RIG.add_nodes_from(label_ids)
for i in label_ids:
for j in label_ids:
if not i == j:
                if np.linalg.norm(coordinates[i] - coordinates[j]) < cut_off:  # use the cut_off argument instead of a hard-coded 7
RIG.add_edge(i, j)
return RIG
def get_LIN(RIG, threshold):
    """this function computes the long range interaction network"""
LIN = nx.Graph(RIG)
for e in LIN.edges():
if not abs(e[0] - e[1]) == 1:
if abs(e[0] - e[1]) < threshold:
LIN.remove_edge(e[0], e[1])
return LIN
RIG_CUTOFF = 7
LIN_THRESHOLD = 12
parser = PDBParser()
pdb_files = ['1csp.pdb', '1hrc.pdb', '1pks.pdb', '2abd.pdb','3mef.pdb']
RIG = []
LIN = []
for pdb_file in pdb_files:
structure = parser.get_structure('pdb_file', 'pdb files/' + pdb_file)
coordinates = []
labels = []
for model in structure:
for chain in model:
for residue in chain:
try:
coordinates.append(residue['CA'].get_coord())
labels.append(residue.get_resname())
except KeyError:
pass
RIG.append(get_RIG(coordinates, labels, RIG_CUTOFF))
LIN.append(get_LIN(RIG[-1], LIN_THRESHOLD))
break ## working on chain id A only
break ## Working on model id 0 only
output = [['PBD ID', 'Nodes', 'Edges (RIG)', 'L (RIG)', 'C (RIG)', 'Edges (LIN)', 'L (LIN)', 'C (LIN)']]
for i in range(len(pdb_files)):
append_list = [pdb_files[i], RIG[i].number_of_nodes(), RIG[i].number_of_edges()]
append_list.append(nx.average_shortest_path_length(RIG[i]))
append_list.append(nx.average_clustering(RIG[i]))
append_list.append(LIN[i].number_of_edges())
append_list.append(nx.average_shortest_path_length(LIN[i]))
append_list.append(nx.average_clustering(LIN[i]))
output.append(append_list)
display(HTML(tabulate.tabulate(output, tablefmt='html')))
Explanation: Analysis and conclusion
When gene duplication is much less frequent than TFBS duplication, most of the genes end up having all the TFBSs in common, which gives all nodes the same degree. When the gene duplication probability is comparable to the TFBS duplication probability, the gene_pool expands and, at the same time, more and more connections get added (as TFBSs are also duplicating). Thus we get a very richly connected network.
Question 2
End of explanation
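A quick sketch to check the claim above numerically: in the network where gene duplication is much rarer than TFBS duplication (G1), almost all nodes should share a single degree value, while the comparable-probability network (G2) should show a spread of degrees.
print "distinct degree values in G1 :", sorted(set(G1.degree().values()))
print "distinct degree values in G2 :", sorted(set(G2.degree().values()))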
def get_Bartoli_RIG_Model(nodes, edges):
    """this function computes Bartoli's model of residue interaction graphs"""
Bartoli_RIG_Model = nx.Graph()
Bartoli_RIG_Model.add_nodes_from(range(nodes))
# adding backbone chain
Bartoli_RIG_Model.add_path(range(nodes))
# making other links
d = {} # dictionary key: absolute difference, values: possible pairs
for i in range(nodes):
for j in range(nodes):
if abs(i-j) in d:
d[abs(i-j)].append((i, j))
else:
d[abs(i-j)] = [(i, j)]
del(d[0]) # not required
del(d[1]) # already handled in backbone
p = np.asarray([len(x) for x in d.values()]).astype(np.float)
p = p/np.sum(p)
while not nx.number_of_edges(Bartoli_RIG_Model) > edges:
x = random.choice(d[np.random.choice(d.keys())])
# np.random.choice(d[np.random.choice(d.keys(), p)])
Bartoli_RIG_Model.add_edge(x[0]-1, x[1]-1)
Bartoli_RIG_Model.add_edge(x[0]-1, x[1])
Bartoli_RIG_Model.add_edge(x[0]-1, x[1]+1)
Bartoli_RIG_Model.add_edge(x[0], x[1]-1)
Bartoli_RIG_Model.add_edge(x[0], x[1])
Bartoli_RIG_Model.add_edge(x[0], x[1]+1)
Bartoli_RIG_Model.add_edge(x[0]+1, x[1]-1)
Bartoli_RIG_Model.add_edge(x[0]+1, x[1])
Bartoli_RIG_Model.add_edge(x[0]+1, x[1]+1)
return Bartoli_RIG_Model
## Bartoli's model for protein contact map models.
Bartoli_RIG_Model = []
for rig in RIG:
nodes = nx.number_of_nodes(rig)
edges = nx.number_of_edges(rig)
Bartoli_RIG_Model.append(get_Bartoli_RIG_Model(nodes, edges))
output = [['PBD ID', 'Nodes', 'Edges (RIG)', 'L (RIG)', 'C (RIG)', 'Edges (Bartoli)', 'L (Bartoli)', 'C (Bartoli)']]
for i in range(len(pdb_files)):
append_list = [pdb_files[i], RIG[i].number_of_nodes(), RIG[i].number_of_edges()]
append_list.append(nx.average_shortest_path_length(RIG[i]))
append_list.append(nx.average_clustering(RIG[i]))
append_list.append(Bartoli_RIG_Model[i].number_of_edges())
append_list.append(nx.average_shortest_path_length(Bartoli_RIG_Model[i]))
append_list.append(nx.average_clustering(Bartoli_RIG_Model[i]))
output.append(append_list)
display(HTML(tabulate.tabulate(output, tablefmt='html')))
Explanation: Observations
The 1csp and 1pks proteins show high clustering coefficients in the LIN models, suggesting that they have fast folding rates.
A high clustering coefficient makes a protein fold quickly.
Question 3
End of explanation
x = []
for e in Bartoli_RIG_Model[0].edges():
    x.append(abs(e[0] - e[1]))  # sequence separation |i - j| of the contacting pair (the model graph has no cartesian coordinates)
plt.hist(x, bins = 30)
plt.xlabel('sequence separation among contacting amino acids')
plt.ylabel('number of amino acid contacts made')
plt.show()
Explanation: Observations
Bartoli's models, when compared with the original RIG models, show that the characteristic path length and the average clustering coefficient are both comparable. The clustering coefficient is a bit lower than the true values.
End of explanation
def display_graph(G):
print "Nodes : " + str(G.number_of_nodes())
print "Edges : " + str(G.number_of_edges())
density = G.number_of_edges() / (G.number_of_nodes()*G.number_of_nodes()/2)
print "Sparseness : " + str(1-density)
try:
print "Characteristic Path Length (L) : " + str(nx.average_shortest_path_length(G))
except Exception as e:
print "Characteristic Path Length (L) : " + str(e)
print "Average Clustering Coefficient (C) : " + str(nx.average_clustering(G))
nx.draw_networkx(G)
plt.title('Network Layout')
plt.show()
    plt.hist(G.degree().values())  # histogram over all node degrees, not just node 0
plt.title('Degree distribution')
plt.show()
n_protein = 100
n_protein_domains = 50
domain_per_protein = 3
protein_domains = range(n_protein_domains)
protein = np.asmatrix([random.sample(range(n_protein_domains), domain_per_protein) for x in range(n_protein)])
G_odd_odd = nx.Graph()
G_odd_odd.add_nodes_from(range(n_protein))
for x in range(n_protein):
for y in range(n_protein):
if not x == y:
if np.any(protein[x,] % 2) and np.any(protein[y,] % 2):
G_odd_odd.add_edge(x, y)
print "ODD-ODD PIN"
print "~~~~~~~~~~~"
print ""
display_graph(G_odd_odd)
G_even_even = nx.Graph()
G_even_even.add_nodes_from(range(n_protein))
for x in range(n_protein):
for y in range(n_protein):
if not x == y:
if not np.all(protein[x,] % 2) and not np.all(protein[y,] % 2):
G_even_even.add_edge(x, y)
print "EVEN-EVEN PIN"
print "~~~~~~~~~~~~~"
print ""
display_graph(G_even_even)
G_odd_even = nx.Graph()
G_odd_even.add_nodes_from(range(n_protein))
for x in range(n_protein):
for y in range(n_protein):
if not x == y:
if (np.any(protein[x,] % 2) and not np.all(protein[y,] % 2)) or (not np.all(protein[x,] % 2) and np.any(protein[y,] % 2)):
G_odd_even.add_edge(x, y)
print "ODD-EVEN PIN"
print "~~~~~~~~~~~~~"
print ""
display_graph(G_odd_even)
def is_prime(a):
    if a < 2:  # 2 is prime; only exclude 0 and 1
return False
return all(a % i for i in xrange(2, a))
G_prime_prime = nx.Graph()
G_prime_prime.add_nodes_from(range(n_protein))
for x in range(n_protein):
for y in range(n_protein):
if not x == y:
x_prime = []
y_prime = []
for z in range(domain_per_protein):
x_prime.append(is_prime(protein[x, z]))
y_prime.append(is_prime(protein[y, z]))
if any(x_prime) and any(y_prime):
G_prime_prime.add_edge(x, y)
print "PRIME PRIME PIN"
print "~~~~~~~~~~~~~~~"
print ""
display_graph(G_prime_prime)
Explanation: Question 4
End of explanation |
10,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
텐서플로우 라이브러리를 임포트 하세요.
텐서플로우에는 MNIST 데이터를 자동으로 로딩해 주는 헬퍼 함수가 있습니다. "MNIST_data" 폴더에 데이터를 다운로드하고 훈련, 검증, 테스트 데이터를 자동으로 읽어 들입니다. one_hot 옵션을 설정하면 정답 레이블을 원핫벡터로 바꾸어 줍니다.
Step1: mnist.train.images에는 훈련용 이미지 데이터가 있고 mnist.test.images에는 테스트용 이미지 데이터가 있습니다. 이 데이터의 크기를 확인해 보세요.
matplotlib에는 이미지를 그려주는 imshow() 함수가 있습니다. 우리가 읽어 들인 mnist.train.images는 길이 784의 배열입니다. 55,000개 중에서 원하는 하나를 출력해 보세요.
이미지로 표현하려면 원본 이미지 사각형 크기인 [28, 28]로 변경해 줍니다. 그리고 흑백이미지 이므로 컬러맵을 그레이 스케일로 지정합니다.
Step2: mnist.train.labels에는 정답값 y 가 들어 있습니다. 원핫벡터로 로드되었는지 55,000개의 정답 데이터 중 하나를 확인해 보세요.
Step3: 훈련 데이터는 55,000개로 한꺼번에 처리하기에 너무 많습니다. 그래서 미니배치 그래디언트 디센트 방식을 사용하려고 합니다. 미니배치 방식을 사용하려면 훈련 데이터에서 일부를 쪼개어 반복하여 텐서플로우 모델에 주입해 주어야 합니다.
텐서플로우 모델이 동작하면서 입력 데이터를 받기위해 플레이스 홀더를 정의합니다. 플레이스 홀더는 x(이미지), y(정답 레이블) 두가지입니다.
x = tf.placeholder("float32", [None, 784])
y = tf.placeholder("float32", shape=[None, 10])
첫번째 레이어의 행렬식을 만듭니다. 이 식은 입력 데이터 x와 첫번째 레이어의 가중치 W1을 곱하고 편향 b1을 더합니다.
첫번째 레이어의 뉴런(유닛) 개수를 100개로 지정하겠습니다. 입력 데이터 x 는 [None, 784] 사이즈의 플레이스 홀더이므로 가중치의 크기는 [784, 100] 이 되어야 결과 행렬이 [None, 100] 이 되어 다음 레이어로 전달됩니다.
W1 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[100]))
tf.matmul 함수를 사용하여 행렬곱을 한다음 편향을 더하고 첫번째 레이어의 활성화 함수인 시그모이드 함수를 적용합니다. 텐서플로우에는 시그모이드 함수를 내장하고 있습니다.
t = tf.sigmoid(tf.matmul(x,W1) + b1)
출력 레이어의 계산식을 만들기 위해 가중치 W2와 b2 변수를 만듭니다. 직전의 히든 레이어의 출력 사이즈가 [None, 100]이고 출력 유닛의 개수는 10개 이므로 가중치 W2의 크기는 [100, 10] 이 됩니다. 편향 b2의 크기는 [10]입니다.
W2 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[10]))
출력 레이어의 행렬곱을 계산합니다. 이전 히든 레이어의 출력 t와 W2를 곱하고 b2를 더합니다.
z = tf.matmul(t,W2) + b2
출력 값을 정규화하여 정답과 비교하려면 소프트맥스 함수를 적용해야 합니다. 텐서플로우에는 소프트맥스 함수가 내장되어 있습니다.
y_hat = tf.nn.softmax(z)
손실 함수 크로스 엔트로피를 계산하기 위해 위에서 구한 y_hat을 사용해도 되지만 텐서플로우에는 소프트맥스를 통과하기 전의 값 z 를 이용하여 소프트맥스 크로스 엔트로피를 계산해 주는 함수를 내장하고 있습니다. softmax_cross_entropy를 이용하여 z 와 정답 y 의 손실을 계산합니다.
loss = tf.losses.softmax_cross_entropy(y, z)
학습속도 0.5로 경사하강법을 적용하고 위에서 만든 손실 함수를 이용해 훈련 노드를 생성합니다.
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
올바르게 분류된 정확도를 계산하려면 정답을 가지고 있는 원핫벡터인 y 와 소프트맥스를 통과한 원핫벡터인 y_hat을 비교해야 합니다. 이 두 텐서는 [None, 10]의 크기를 가지고 있습니다. 따라서 행방향(1)으로 가장 큰 값을 가진 인덱스(argmax)를 찾아서 같은지(equal) 확인하면 됩니다.
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_hat,1))
correct_prediction은 [True, False, ...] 와 같은 배열이므로 불리언을 숫자(1,0)로 바꾼다음(cast) 전체를 합하여 평균을 내면 정확도 값을 얻을 수 있습니다.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
세션 객체를 만들고 모델에 사용할 변수를 초기화합니다.
Step4: 1000번 반복을 하면서 훈련 데이터에서 100개씩 뽑아내어(mnist.train.next_batch) 모델에 주입합니다. 모델의 플레이스 홀더에 주입하려면 플레이스 홀더의 이름과 넘겨줄 값을 딕셔너리 형태로 묶어서 feed_dict 매개변수에 전달합니다.
계산할 값은 훈련 노드 train 과 학습 과정을 그래프로 출력하기 위해 손실함수 값을 계산하여 costs 리스트에 누적합니다.
Step5: costs 리스트를 그래프로 출력합니다.
Step6: 정확도를 계산하기 위해 만든 노드 accuracy를 실행합니다. 이때 입력 데이터는 mnist.test 로 훈련시에 사용하지 않았던 데이터입니다. 이 정확도 계산은 위에서 학습시킨 W1, b1, W2, b2 를 이용하여 레이블을 예측한 결과입니다.
sess.run(accuracy, feed_dict={x | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: 텐서플로우 라이브러리를 임포트 하세요.
텐서플로우에는 MNIST 데이터를 자동으로 로딩해 주는 헬퍼 함수가 있습니다. "MNIST_data" 폴더에 데이터를 다운로드하고 훈련, 검증, 테스트 데이터를 자동으로 읽어 들입니다. one_hot 옵션을 설정하면 정답 레이블을 원핫벡터로 바꾸어 줍니다.
End of explanation
plt.imshow(mnist.train.images[..].reshape([.., ..]), cmap=plt.get_cmap('gray_r'))
Explanation: mnist.train.images에는 훈련용 이미지 데이터가 있고 mnist.test.images에는 테스트용 이미지 데이터가 있습니다. 이 데이터의 크기를 확인해 보세요.
matplotlib에는 이미지를 그려주는 imshow() 함수가 있습니다. 우리가 읽어 들인 mnist.train.images는 길이 784의 배열입니다. 55,000개 중에서 원하는 하나를 출력해 보세요.
이미지로 표현하려면 원본 이미지 사각형 크기인 [28, 28]로 변경해 줍니다. 그리고 흑백이미지 이므로 컬러맵을 그레이 스케일로 지정합니다.
End of explanation
mnist.train.labels[..]
Explanation: mnist.train.labels에는 정답값 y 가 들어 있습니다. 원핫벡터로 로드되었는지 55,000개의 정답 데이터 중 하나를 확인해 보세요.
End of explanation
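The instructions below spell out the model line by line but leave the corresponding code cell as an exercise; the following sketch simply collects those lines (plus the TensorFlow import requested at the start of this exercise) so that the session, training and evaluation cells further down can run. It uses the TF1-style API assumed throughout this text.
import tensorflow as tf  # the import the exercise asks for

# placeholders for the images and the one-hot labels
x = tf.placeholder("float32", [None, 784])
y = tf.placeholder("float32", shape=[None, 10])

# hidden layer: 100 units with a sigmoid activation
W1 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[100]))
t = tf.sigmoid(tf.matmul(x, W1) + b1)

# output layer: 10 units
W2 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[10]))
z = tf.matmul(t, W2) + b2
y_hat = tf.nn.softmax(z)

# softmax cross-entropy on the pre-softmax values z, gradient descent with rate 0.5
loss = tf.losses.softmax_cross_entropy(y, z)
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# accuracy: fraction of samples where predicted and true argmax agree
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_hat, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))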
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Explanation: 훈련 데이터는 55,000개로 한꺼번에 처리하기에 너무 많습니다. 그래서 미니배치 그래디언트 디센트 방식을 사용하려고 합니다. 미니배치 방식을 사용하려면 훈련 데이터에서 일부를 쪼개어 반복하여 텐서플로우 모델에 주입해 주어야 합니다.
텐서플로우 모델이 동작하면서 입력 데이터를 받기위해 플레이스 홀더를 정의합니다. 플레이스 홀더는 x(이미지), y(정답 레이블) 두가지입니다.
x = tf.placeholder("float32", [None, 784])
y = tf.placeholder("float32", shape=[None, 10])
첫번째 레이어의 행렬식을 만듭니다. 이 식은 입력 데이터 x와 첫번째 레이어의 가중치 W1을 곱하고 편향 b1을 더합니다.
첫번째 레이어의 뉴런(유닛) 개수를 100개로 지정하겠습니다. 입력 데이터 x 는 [None, 784] 사이즈의 플레이스 홀더이므로 가중치의 크기는 [784, 100] 이 되어야 결과 행렬이 [None, 100] 이 되어 다음 레이어로 전달됩니다.
W1 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[100]))
tf.matmul 함수를 사용하여 행렬곱을 한다음 편향을 더하고 첫번째 레이어의 활성화 함수인 시그모이드 함수를 적용합니다. 텐서플로우에는 시그모이드 함수를 내장하고 있습니다.
t = tf.sigmoid(tf.matmul(x,W1) + b1)
출력 레이어의 계산식을 만들기 위해 가중치 W2와 b2 변수를 만듭니다. 직전의 히든 레이어의 출력 사이즈가 [None, 100]이고 출력 유닛의 개수는 10개 이므로 가중치 W2의 크기는 [100, 10] 이 됩니다. 편향 b2의 크기는 [10]입니다.
W2 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[10]))
출력 레이어의 행렬곱을 계산합니다. 이전 히든 레이어의 출력 t와 W2를 곱하고 b2를 더합니다.
z = tf.matmul(t,W2) + b2
출력 값을 정규화하여 정답과 비교하려면 소프트맥스 함수를 적용해야 합니다. 텐서플로우에는 소프트맥스 함수가 내장되어 있습니다.
y_hat = tf.nn.softmax(z)
손실 함수 크로스 엔트로피를 계산하기 위해 위에서 구한 y_hat을 사용해도 되지만 텐서플로우에는 소프트맥스를 통과하기 전의 값 z 를 이용하여 소프트맥스 크로스 엔트로피를 계산해 주는 함수를 내장하고 있습니다. softmax_cross_entropy를 이용하여 z 와 정답 y 의 손실을 계산합니다.
loss = tf.losses.softmax_cross_entropy(y, z)
학습속도 0.5로 경사하강법을 적용하고 위에서 만든 손실 함수를 이용해 훈련 노드를 생성합니다.
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
올바르게 분류된 정확도를 계산하려면 정답을 가지고 있는 원핫벡터인 y 와 소프트맥스를 통과한 원핫벡터인 y_hat을 비교해야 합니다. 이 두 텐서는 [None, 10]의 크기를 가지고 있습니다. 따라서 행방향(1)으로 가장 큰 값을 가진 인덱스(argmax)를 찾아서 같은지(equal) 확인하면 됩니다.
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_hat,1))
correct_prediction은 [True, False, ...] 와 같은 배열이므로 불리언을 숫자(1,0)로 바꾼다음(cast) 전체를 합하여 평균을 내면 정확도 값을 얻을 수 있습니다.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
세션 객체를 만들고 모델에 사용할 변수를 초기화합니다.
End of explanation
costs = []
for i in range(1000):
x_data, y_data = mnist.train.next_batch(100)
_, cost = sess.run([train, loss], feed_dict={x: x_data, y: y_data})
costs.append(cost)
Explanation: 1000번 반복을 하면서 훈련 데이터에서 100개씩 뽑아내어(mnist.train.next_batch) 모델에 주입합니다. 모델의 플레이스 홀더에 주입하려면 플레이스 홀더의 이름과 넘겨줄 값을 딕셔너리 형태로 묶어서 feed_dict 매개변수에 전달합니다.
계산할 값은 훈련 노드 train 과 학습 과정을 그래프로 출력하기 위해 손실함수 값을 계산하여 costs 리스트에 누적합니다.
End of explanation
plt.plot(costs)
Explanation: costs 리스트를 그래프로 출력합니다.
End of explanation
for i in range(5):
plt.imshow(mnist.test.images[i].reshape([28, 28]), cmap=plt.get_cmap('gray_r'))
plt.show()
print(sess.run(tf.argmax(y_hat,1), feed_dict={x: mnist.test.images[i].reshape([1,784])}))
Explanation: 정확도를 계산하기 위해 만든 노드 accuracy를 실행합니다. 이때 입력 데이터는 mnist.test 로 훈련시에 사용하지 않았던 데이터입니다. 이 정확도 계산은 위에서 학습시킨 W1, b1, W2, b2 를 이용하여 레이블을 예측한 결과입니다.
sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
실제 이미지와 예측 값이 동일한지 확인하기 위해 테스트 데이터 앞의 5개 이미지와 예측 값을 차례대로 출력해 봅니다.
End of explanation |
10,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
모형 결합
모형 결합(model combining) 방법은 앙상블 방법론(ensemble methods)이라고도 한다. 이는 특정한 하나의 예측 방법이 아니라 복수의 예측 모형을 결합하여 더 나은 성능의 예측을 하려는 시도이다.
모형 결합 방법을 사용하면 일반적으로 계산량은 증가하지만 다음과 같은 효과가 있다.
단일 모형을 사용할 때 보다 성능 분산이 감소하고
과최적화를 방지한다.
모형 결합 방법은 크게 나누어 평균(averaging, aggregation) 방법론과 부스팅(boosting) 방법론으로 나눌 수 있다.
평균 방법론은 사용할 모형의 집합이 이미 결정되어 있지만
부스팅 방법론은 사용할 모형을 점진적으로 늘려간다.
각 방법론의 대표적인 방법들은 아래와 같다.
평균 방법론
다수결 (Majority Voting)
배깅 (Bagging)
랜덤 포레스트 (Random Forests)
부스팅 방법론
에이다부스트 (AdaBoost)
경사 부스트 (Gradient Boost)
다수결 방법
다수결 방법은 가장 단순한 모형 결합 방법으로 전혀 다른 모형도 결합할 수 있다. 다수결 방법은 Hard Voting 과 Soft Voting 두 가지로 나뉘어진다.
hard voting
Step1: 다수결 모형이 개별 모형보다 더 나은 성능을 보이는 이유는 다음 실험에서도 확인 할 수 있다.
만약 어떤 개별 모형이 오차를 출력할 확률이 $p$인 경우에 이러한 모형을 $N$ 개 모아서 다수결 모형을 만들면 오차를 출력할 확률이 다음과 같아진다.
$$ \sum_{k>\frac{N}{2}}^N \binom N k p^k (1-p)^{N-k} $$
Step2: 배깅
배깅(bagging)은 동일한 모형과 모형 모수를 사용하는 대신 부트스트래핑(bootstrapping)과 유사하게 트레이닝 데이터를 랜덤하게 선택해서 다수결 모형을 적용한다.
트레이닝 데이터를 선택하는 방법에 따라 다음과 같이 부르기도 한다.
같은 데이터 샘플을 중복사용(replacement)하지 않으면
Step3: 랜덤 포레스트
랜덤 포레스트(Random Forest)는 의사 결정 나무(Decision Tree)를 개별 모형으로 사용하는 모형 결합 방법을 말한다.
배깅과 마찬가지로 데이터 샘플의 일부만 선택하여 사용한다. 하지만 노드 분리시 모든 독립 변수들을 비교하여 최선의 독립 변수를 선택하는 것이 아니라 독립 변수 차원을 랜덤하게 감소시킨 다음 그 중에서 독립 변수를 선택한다. 이렇게 하면 개별 모형들 사이의 상관관계가 줄어들기 때문에 모형 성능의 변동이 감소하는 효과가 있다.
이러한 방법을 극단적으로 적용한 것이 Extremely Randomized Trees 모형으로 이 경우에는 각 노드에서 랜덤하게 독립 변수를 선택한다.
랜덤 포레스트와 Extremely Randomized Trees 모형은 각각 RandomForestClassifier클래스와 ExtraTreesClassifier 클래스로 구현되어 있다.
Step4: 랜덤 포레스트의 장점 중 하나는 각 독립 변수의 중요도(feature importance)를 계산할 수 있다는 점이다.
포레스트 안에서 사용된 모든 노드에 대해 어떤 독립 변수를 사용하였고 그 노드에서 얻은 information gain을 구할 수 있으므로 각각의 독립 변수들이 얻어낸 information gain의 평균을 비교하면 어떤 독립 변수가 중요한지를 비교할 수 있다.
Step5: 예
Step6: 에이다 부스트
에이다 부스트와 같은 부스트(boost) 방법은 미리 정해진 모형 집합을 사용하는 것이 아니라 단계적으로 모형 집합에 포함할 개별 모형을 선택한다. 부스트 방법에서 성능이 떨어지는 개별 모형을 weak classifier라고 한다.
또한 다수결 방법을 사용하지 않고 각 weak classifier $k$개에 대한 가중치를 주고 선형 결합하여 최종 모형인 boosted classifier $C$를 생성한다.
$$ C_{(m-1)}(x_i) = \alpha_1k_1(x_i) + \cdots + \alpha_{m-1}k_{m-1}(x_i) $$
$$ C_{m}(x_i) = C_{(m-1)}(x_i) + \alpha_m k_m(x_i) $$
$k_m$ 선택 방법
가중치 오차가 가장 적은 $k_m$
$$ E = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} = \sum_{y_i \neq k_m(x_i)} e^{-y_i C_{m-1}(x_i)}$$
$\alpha_m$ 결정 방법
$$ \epsilon_m = \dfrac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)} }{ \sum_{i=1}^N w_i^{(m)}} $$
$$ \alpha_m = \frac{1}{2}\ln\left( \frac{1 - \epsilon_m}{\epsilon_m}\right) $$
속도 조절을 위해 learning rate $\nu$를 추가
$$ C_{m}(x_i) = C_{(m-1)}(x_i) + \nu\alpha_m k_m(x_i) $$
에이다 부스트 클래스는 AdaBoostClassifier 이다. | Python Code:
X = np.array([[-1.0, -1.0], [-1.2, -1.4], [1, -0.5], [-3.4, -2.2], [1.1, 1.2], [-2.1, -0.2]])
y = np.array([1, 1, 1, 2, 2, 2])
x_new = [0, 0]
plt.scatter(X[y==1,0], X[y==1,1], s=100, c='r')
plt.scatter(X[y==2,0], X[y==2,1], s=100, c='b')
plt.scatter(x_new[0], x_new[1], s=100, c='g')
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(random_state=1)
clf2 = SVC(random_state=1, probability=True)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('lr', clf1), ('ksvc', clf2), ('gnb', clf3)], voting='soft', weights=[2, 1, 1])
probas = [c.fit(X, y).predict_proba([x_new]) for c in (clf1, clf2, clf3, eclf)]
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
ind = np.arange(4)
width = 0.35 # bar width
p1 = plt.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, align="center", color='green')
p2 = plt.bar(ind + width, np.hstack(([class2_1[:-1], [0]])), width, align="center", color='lightgreen')
p3 = plt.bar(ind, [0, 0, 0, class1_1[-1]], width, align="center", color='blue')
p4 = plt.bar(ind + width, [0, 0, 0, class2_1[-1]], width, align="center", color='steelblue')
plt.xticks(ind + 0.5 * width, ['LogisticRegression\nweight 2',
'Kernel SVC\nweight 1',
'GaussianNB\nweight 1',
'VotingClassifier'])
plt.ylim([0, 1.1])
plt.title('Class probabilities for sample 1 by different classifiers')
plt.legend([p1[0], p2[0]], ['class 1', 'class 2'], loc='upper left')
plt.show()
from itertools import product
x_min, x_max = -4, 2
y_min, y_max = -3, 2
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[clf1, clf2, clf3, eclf],
['LogisticRegression', 'Kernel SVC', 'GaussianNB', 'VotingClassifier']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=0.5, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
from itertools import product
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier(max_depth=4).fit(X, y)
model2 = LogisticRegression().fit(X, y)
model3 = SVC(probability=True).fit(X, y)
model4 = VotingClassifier(estimators=[('dt', model1), ('lr', model2), ('svc', model3)],
voting='soft', weights=[1, 2, 3]).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[model1, model2, model3, model4],
['Decision Tree', 'Logistic Regression', 'Kernel SVM', 'Soft Voting']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
Explanation: 모형 결합
모형 결합(model combining) 방법은 앙상블 방법론(ensemble methods)이라고도 한다. 이는 특정한 하나의 예측 방법이 아니라 복수의 예측 모형을 결합하여 더 나은 성능의 예측을 하려는 시도이다.
모형 결합 방법을 사용하면 일반적으로 계산량은 증가하지만 다음과 같은 효과가 있다.
단일 모형을 사용할 때 보다 성능 분산이 감소하고
과최적화를 방지한다.
모형 결합 방법은 크게 나누어 평균(averaging, aggregation) 방법론과 부스팅(boosting) 방법론으로 나눌 수 있다.
평균 방법론은 사용할 모형의 집합이 이미 결정되어 있지만
부스팅 방법론은 사용할 모형을 점진적으로 늘려간다.
각 방법론의 대표적인 방법들은 아래와 같다.
평균 방법론
다수결 (Majority Voting)
배깅 (Bagging)
랜덤 포레스트 (Random Forests)
부스팅 방법론
에이다부스트 (AdaBoost)
경사 부스트 (Gradient Boost)
다수결 방법
다수결 방법은 가장 단순한 모형 결합 방법으로 전혀 다른 모형도 결합할 수 있다. 다수결 방법은 Hard Voting 과 Soft Voting 두 가지로 나뉘어진다.
hard voting: 단순 투표. 개별 모형의 결과 기준
soft voting: 가중치 투표. 개별 모형의 조건부 확률의 합 기준
Scikit-Learn 의 ensemble 서브패키지는 다수결 방법을 위한 VotingClassifier 클래스를 제공한다.
sklearn.ensemble.VotingClassifier(estimators, voting='hard', weights=None)
입력 인수:
estimators :
개별 모형 목록, 리스트나 named parameter 형식으로 입력
voting : 문자열 {‘hard’, ‘soft’} (디폴트 ’hard’)
hard voting 과 soft voting 선택
weights : 리스트
사용자 가중치
End of explanation
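The cells above use voting='soft' with weights; for comparison, a minimal sketch of the hard-voting variant described in this section, reusing the three estimators defined earlier (clf1, clf2, clf3) and the iris arrays X, y currently in scope. Hard voting simply takes the majority of the predicted class labels.
eclf_hard = VotingClassifier(
    estimators=[('lr', clf1), ('ksvc', clf2), ('gnb', clf3)],
    voting='hard').fit(X, y)
print(eclf_hard.predict(X[:5]))  # majority vote of the three individual predictions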
def total_error(p, N):
te = 0.0
for k in range(int(np.ceil(N/2)), N + 1):
te += sp.misc.comb(N, k) * p**k * (1-p)**(N-k)
return te
x = np.linspace(0, 1, 100)
plt.plot(x, x, 'g:', lw=3, label="individual model")
plt.plot(x, total_error(x, 10), 'b-', label="voting model (N=10)")
plt.plot(x, total_error(x, 100), 'r-', label="voting model (N=100)")
plt.xlabel("performance of individual model")
plt.ylabel("performance of voting model")
plt.legend(loc=0)
plt.show()
Explanation: 다수결 모형이 개별 모형보다 더 나은 성능을 보이는 이유는 다음 실험에서도 확인 할 수 있다.
만약 어떤 개별 모형이 오차를 출력할 확률이 $p$인 경우에 이러한 모형을 $N$ 개 모아서 다수결 모형을 만들면 오차를 출력할 확률이 다음과 같아진다.
$$ \sum_{k>\frac{N}{2}}^N \binom N k p^k (1-p)^{N-k} $$
End of explanation
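As a quick numerical check of this formula: for an individual error rate $p=0.3$ and $N=10$ models, the sum runs over $k = 6, \dots, 10$ and gives $\sum_{k=6}^{10} \binom{10}{k} 0.3^k\,0.7^{10-k} \approx 0.047$, i.e. the majority vote errs far less often than the 30% error rate of a single model.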
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier().fit(X, y)
model2 = BaggingClassifier(DecisionTreeClassifier(), bootstrap_features=True, random_state=0).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))
plt.figure(figsize=(8,12))
plt.subplot(211)
Z1 = model1.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z1, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.subplot(212)
Z2 = model2.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z2, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.tight_layout()
plt.show()
Explanation: 배깅
배깅(bagging)은 동일한 모형과 모형 모수를 사용하는 대신 부트스트래핑(bootstrapping)과 유사하게 트레이닝 데이터를 랜덤하게 선택해서 다수결 모형을 적용한다.
트레이닝 데이터를 선택하는 방법에 따라 다음과 같이 부르기도 한다.
같은 데이터 샘플을 중복사용(replacement)하지 않으면: Pasting
같은 데이터 샘플을 중복사용(replacement)하면 Bagging
데이터가 아니라 다차원 독립 변수 중 일부 차원을 선택하는 경우에는: Random Subspaces
데이터 샘플과 독립 변수 차원 모두 일부만 랜덤하게 사용하면: Random Patches
성능 평가시에는 트레이닝용으로 선택한 데이터가 아닌 다른 데이터를 사용할 수도 있다. 이런 데이터를 OOB(out-of-bag) 데이터라고 한다.
Scikit-Learn 의 ensemble 서브패키지는 배깅 모형 결합을 위한 BaggingClassifier 클래스를 제공한다. 사용법은 다음과 같다.
sklearn.ensemble.BaggingClassifier(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=None, verbose=0)
인수:
base_estimator:
기본 모형
n_estimators: 정수. 디폴트 10
모형 갯수
max_samples: 정수 혹은 실수. 디폴트 1.0
데이터 샘플 중 선택할 샘플의 수 혹은 비율
max_features: 정수 혹은 실수. 디폴트 1.0
다차원 독립 변수 중 선택할 차원의 수 혹은 비율
bootstrap: 불리언, 디폴트 True
데이터 중복 사용 여부
bootstrap_features: 불리언, 디폴트 False
차원 중복 사용 여부
oob_score: 불리언 디폴트 False
성능 평가시 OOB(out-of-bag) 샘플 사용 여부
End of explanation
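A minimal sketch of the oob_score option described above, reusing the iris arrays X, y defined earlier: with oob_score=True each sample is evaluated only by the trees that did not see it during bootstrapping, and the aggregated estimate is exposed as oob_score_.
model_oob = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                              oob_score=True, random_state=0).fit(X, y)
print(model_oob.oob_score_)  # out-of-bag estimate of the accuracy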
from sklearn import clone
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
n_classes = 3
n_estimators = 30
plot_colors = "ryb"
cmap = plt.cm.RdYlBu
plot_step = 0.02
RANDOM_SEED = 13
models = [DecisionTreeClassifier(max_depth=4),
RandomForestClassifier(max_depth=4, n_estimators=n_estimators),
ExtraTreesClassifier(max_depth=4, n_estimators=n_estimators)]
plot_idx = 1
plt.figure(figsize=(12, 12))
for pair in ([0, 1], [0, 2], [2, 3]):
for model in models:
X = iris.data[:, pair]
y = iris.target
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
clf = clone(model)
clf = model.fit(X, y)
plt.subplot(3, 3, plot_idx)
model_title = str(type(model)).split(".")[-1][:-2][:-len("Classifier")]
if plot_idx <= len(models):
plt.title(model_title)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cmap)
else:
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
for i, c in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=c, label=iris.target_names[i], cmap=cmap)
plot_idx += 1
plt.tight_layout()
plt.show()
Explanation: 랜덤 포레스트
랜덤 포레스트(Random Forest)는 의사 결정 나무(Decision Tree)를 개별 모형으로 사용하는 모형 결합 방법을 말한다.
배깅과 마찬가지로 데이터 샘플의 일부만 선택하여 사용한다. 하지만 노드 분리시 모든 독립 변수들을 비교하여 최선의 독립 변수를 선택하는 것이 아니라 독립 변수 차원을 랜덤하게 감소시킨 다음 그 중에서 독립 변수를 선택한다. 이렇게 하면 개별 모형들 사이의 상관관계가 줄어들기 때문에 모형 성능의 변동이 감소하는 효과가 있다.
이러한 방법을 극단적으로 적용한 것이 Extremely Randomized Trees 모형으로 이 경우에는 각 노드에서 랜덤하게 독립 변수를 선택한다.
랜덤 포레스트와 Extremely Randomized Trees 모형은 각각 RandomForestClassifier클래스와 ExtraTreesClassifier 클래스로 구현되어 있다.
End of explanation
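The "randomly reduced set of candidate features per split" described above is controlled by the max_features parameter; a minimal sketch on the full 4-feature iris data loaded above (the parameter values are arbitrary choices for illustration).
rf_sqrt = RandomForestClassifier(n_estimators=30, max_depth=4, random_state=0,
                                 max_features="sqrt")  # consider sqrt(n_features) features at each split
rf_sqrt.fit(iris.data, iris.target)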
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0,
n_classes=2, random_state=0, shuffle=False)
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
from sklearn.datasets import fetch_olivetti_faces
from sklearn.ensemble import ExtraTreesClassifier
data = fetch_olivetti_faces()
X = data.images.reshape((len(data.images), -1))
y = data.target
mask = y < 5 # Limit to 5 classes
X = X[mask]
y = y[mask]
forest = ExtraTreesClassifier(n_estimators=1000, max_features=128, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
importances = importances.reshape(data.images[0].shape)
plt.figure(figsize=(8, 8))
plt.imshow(importances, cmap=plt.cm.bone_r)
plt.grid(False)
plt.title("Pixel importances with forests of trees")
plt.show()
Explanation: 랜덤 포레스트의 장점 중 하나는 각 독립 변수의 중요도(feature importance)를 계산할 수 있다는 점이다.
포레스트 안에서 사용된 모든 노드에 대해 어떤 독립 변수를 사용하였고 그 노드에서 얻은 information gain을 구할 수 있으므로 각각의 독립 변수들이 얻어낸 information gain의 평균을 비교하면 어떤 독립 변수가 중요한지를 비교할 수 있다.
End of explanation
from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
data = fetch_olivetti_faces()
targets = data.target
data = data.images.reshape((len(data.images), -1))
train = data[targets < 30]
test = data[targets >= 30]
n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces, ))
test = test[face_ids, :]
n_pixels = data.shape[1]
X_train = train[:, :int(np.ceil(0.5 * n_pixels))] # Upper half of the faces
y_train = train[:, int(np.floor(0.5 * n_pixels)):] # Lower half of the faces
X_test = test[:, :int(np.ceil(0.5 * n_pixels))]
y_test = test[:, int(np.floor(0.5 * n_pixels)):]
ESTIMATORS = {
"Linear regression": LinearRegression(),
"Extra trees": ExtraTreesRegressor(n_estimators=10, max_features=32, random_state=0),
}
y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
estimator.fit(X_train, y_train)
y_test_predict[name] = estimator.predict(X_test)
image_shape = (64, 64)
n_cols = 1 + len(ESTIMATORS)
plt.figure(figsize=(3*n_cols, 3*n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)
for i in range(n_faces):
true_face = np.hstack((X_test[i], y_test[i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1, title="true faces")
sub.axis("off")
sub.imshow(true_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest")
for j, est in enumerate(ESTIMATORS):
completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j, title=est)
sub.axis("off")
sub.imshow(completed_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest");
Explanation: 예: 이미지 완성
End of explanation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
n_samples=200, n_features=2,
n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=300, n_features=2,
n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(12,6))
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
Explanation: 에이다 부스트
에이다 부스트와 같은 부스트(boost) 방법은 미리 정해진 모형 집합을 사용하는 것이 아니라 단계적으로 모형 집합에 포함할 개별 모형을 선택한다. 부스트 방법에서 성능이 떨어지는 개별 모형을 weak classifier라고 한다.
또한 다수결 방법을 사용하지 않고 각 weak classifier $k$개에 대한 가중치를 주고 선형 결합하여 최종 모형인 boosted classifier $C$를 생성한다.
$$ C_{(m-1)}(x_i) = \alpha_1k_1(x_i) + \cdots + \alpha_{m-1}k_{m-1}(x_i) $$
$$ C_{m}(x_i) = C_{(m-1)}(x_i) + \alpha_m k_m(x_i) $$
$k_m$ 선택 방법
가중치 오차가 가장 적은 $k_m$
$$ E = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} = \sum_{y_i \neq k_m(x_i)} e^{-y_i C_{m-1}(x_i)}$$
$\alpha_m$ 결정 방법
$$ \epsilon_m = \dfrac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)} }{ \sum_{i=1}^N w_i^{(m)}} $$
$$ \alpha_m = \frac{1}{2}\ln\left( \frac{1 - \epsilon_m}{\epsilon_m}\right) $$
속도 조절을 위해 learning rate $\nu$를 추가
$$ C_{m}(x_i) = C_{(m-1)}(x_i) + \nu\alpha_m k_m(x_i) $$
에이다 부스트 클래스는 AdaBoostClassifier 이다.
End of explanation |
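The cell above relies on AdaBoostClassifier; as an illustration of the weight and $\alpha_m$ updates written out above, here is a minimal hand-rolled sketch with decision stumps. It assumes the two class labels are recoded as +1/-1 and omits edge-case guards (for example a weak learner with $\epsilon_m = 0$), so it is only meant to mirror the formulas, not to replace the scikit-learn implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def manual_adaboost(X, y, M=20, nu=1.0):
    # y is assumed to be coded as +1 / -1 for this sketch
    n = len(y)
    w = np.ones(n) / n                          # initial sample weights
    stumps, alphas = [], []
    for m in range(M):
        k_m = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = k_m.predict(X)
        miss = (pred != y)
        eps = w[miss].sum() / w.sum()           # weighted error epsilon_m
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        w = w * np.exp(-nu * alpha * y * pred)  # increase weights of misclassified samples
        w = w / w.sum()
        stumps.append(k_m)
        alphas.append(nu * alpha)
    return stumps, alphas

def manual_boosted_predict(stumps, alphas, X):
    C = sum(a * k.predict(X) for k, a in zip(stumps, alphas))
    return np.sign(C)

# example on the two-class data built above, recoded to +1 / -1
stumps, alphas = manual_adaboost(X, 2 * y - 1)
print((manual_boosted_predict(stumps, alphas, X) == 2 * y - 1).mean())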
10,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https
Step2: (you can double-ckick on collapsed cells to view the non-essential code inside)
Colab-only auth for this notebook and the TPU
Step3: TPU or GPU detection
Step4: Parameters
Step5: tf.data.Dataset
Step6: Let's have a look at the data
Step7: Keras model
Step8: Train and validate the model
Step9: Visualize predictions
Step10: Deploy the trained model to AI Platform model serving
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS (Google Cloud Storage) bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no AI Platform charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
Step11: Export the model for serving from AI Platform
Step12: Deploy the model
This uses the command-line interface. You can do the same thing through the AI Platform UI at https
Step13: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work. | Python Code:
import os, re, time, json
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
batch_train_ds = training_dataset.apply(tf.data.experimental.unbatch()).batch(N)
# eager execution: loop through datasets normally
if tf.executing_eagerly():
for validation_digits, validation_labels in validation_dataset:
validation_digits = validation_digits.numpy()
validation_labels = validation_labels.numpy()
break
for training_digits, training_labels in batch_train_ds:
training_digits = training_digits.numpy()
training_labels = training_labels.numpy()
break
else:
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/keras-tensorflow-tpu300px.png" width="300" alt="Keras+Tensorflow+Cloud TPU"></td></tr></table>
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Dataset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
<h3><a href="https://cloud.google.com/gpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/gpu-hexagon.png" width="50"></a> Train on GPU or TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
Select a GPU or TPU backend (Runtime > Change runtime type)
Runtime > Run All <br/>(Watch out: the "Colab-only auth" cell requires user input. <br/>The "Deploy" part at the end requires cloud project and bucket configuration.)
<h3><a href="https://cloud.google.com/ml-engine/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/mlengine-hexagon.png" width="50"></a> Deploy to AI Platform</h3>
At the bottom of this notebook you can deploy your trained model to AI Platform for a serverless, autoscaled, REST API experience. You will need a Google Cloud project and a GCS (Google Cloud Storage) bucket for this last part.
TPUs are located in Google Cloud; for optimal performance, they read data directly from Google Cloud Storage.
Imports
End of explanation
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
# Authenticates the Colab machine and also the TPU using your
# credentials so that they can access your private GCS buckets.
auth.authenticate_user()
Explanation: (you can double-click on collapsed cells to view the non-essential code inside)
Colab-only auth for this notebook and the TPU
End of explanation
# Detect hardware
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
except ValueError:
tpu = None
gpus = tf.config.experimental.list_logical_devices("GPU")
# Select appropriate distribution strategy
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu, steps_per_run=128) # Going back and forth between TPU and host is expensive. Better to run 128 batches on the TPU before reporting back.
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
elif len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])
print('Running on multiple GPUs ', [gpu.name for gpu in gpus])
elif len(gpus) == 1:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on single GPU ', gpus[0].name)
else:
strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
print('Running on CPU')
print("Number of accelerators: ", strategy.num_replicas_in_sync)
Explanation: TPU or GPU detection
End of explanation
BATCH_SIZE = 64 * strategy.num_replicas_in_sync # Global batch size.
# The global batch size will be automatically sharded across all
# replicas by the tf.data.Dataset API. A single TPU has 8 cores.
# The best practice is to scale the batch size by the number of
# replicas (cores). The learning rate should be increased as well.
LEARNING_RATE = 0.01
LEARNING_RATE_EXP_DECAY = 0.6 if strategy.num_replicas_in_sync == 1 else 0.7
# Learning rate computed later as LEARNING_RATE * LEARNING_RATE_EXP_DECAY**epoch
# 0.7 decay instead of 0.6 means a slower decay, i.e. a faster learning rate.
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: Parameters
End of explanation
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4% accuracy in 10 epochs (with a batch size of 64)
def make_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1), name="image"),
tf.keras.layers.Conv2D(filters=12, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
tf.keras.layers.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
tf.keras.layers.Activation('relu'), # activation after batch norm
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(filters=32, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=False),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.4), # Dropout on dense layer only
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
with strategy.scope():
model = make_model()
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: LEARNING_RATE * LEARNING_RATE_EXP_DECAY**epoch,
verbose=True)
Explanation: Keras model: 3 convolutional layers, 2 dense layers
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
EPOCHS = 10
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
print("Steps per epoch: ", steps_per_epoch)
# Little wrinkle: in the present version of Tensorflow (1.14), switching a TPU
# between training and evaluation is slow (approx. 10 sec). For small models,
# it is recommended to run a single eval at the end.
history = model.fit(training_dataset,
steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
callbacks=[lr_decay])
final_stats = model.evaluate(validation_dataset, steps=1)
print("Validation accuracy: ", final_stats[1])
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
PROJECT = "" #@param {type:"string"}
BUCKET = "gs://" #@param {type:"string", default:"jddj"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "mnist" #@param {type:"string"}
MODEL_VERSION = "v1" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', BUCKET), 'For this part, you need a GCS bucket. Head to http://console.cloud.google.com/storage and create one.'
Explanation: Deploy the trained model to AI Platform model serving
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS (Google Cloud Storage) bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no AI Platform charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
End of explanation
# Wrap the model so that we can add a serving function
class ExportModel(tf.keras.Model):
def __init__(self, model):
super().__init__(self)
self.model = model
# The serving function performing data pre- and post-processing.
# Pre-processing: images are received in uint8 format and converted
# to float32 before being sent through the model.
# Post-processing: the Keras model outputs digit probabilities. We want
# the detected digits. An additional tf.argmax is needed.
# @tf.function turns the code in this function into a Tensorflow graph that
# can be exported. This way, the model itself, as well as its pre- and post-
# processing steps are exported in the SavedModel and deployed in a single step.
@tf.function(input_signature=[tf.TensorSpec([None, 28*28], dtype=tf.uint8)])
def my_serve(self, images):
images = tf.cast(images, tf.float32)/255 # pre-processing
probabilities = self.model(images) # prediction from model
classes = tf.argmax(probabilities, axis=-1) # post-processing
return {'digits': classes}
# Must copy the model from TPU to CPU to be able to compose them.
restored_model = make_model()
restored_model.set_weights(model.get_weights()) # this copies the weights from TPU, does nothing on GPU
# create the ExportModel and export it to the Tensorflow standard SavedModel format
serving_model = ExportModel(restored_model)
export_path = os.path.join(BUCKET, 'keras_export', str(time.time()))
tf.keras.backend.set_learning_phase(0) # inference only
tf.saved_model.save(serving_model, export_path, signatures={'serving_default': serving_model.my_serve})
print("Model exported to: ", export_path)
# Note: in Tensorflow 2.0, it will also be possible to
# export to the SavedModel format using model.save():
# serving_model.save(export_path, save_format='tf')
# saved_model_cli: a useful too for troubleshooting SavedModels (the tool is part of the Tensorflow installation)
!saved_model_cli show --dir {export_path}
!saved_model_cli show --dir {export_path} --tag_set serve
!saved_model_cli show --dir {export_path} --tag_set serve --signature_def serving_default
# A note on naming:
# The "serve" tag set (i.e. serving functionality) is the only one exported by tf.saved_model.save
# All the other names are defined by the user in the following lines of code:
# def my_serve(self, images):
# ******
# return {'digits': classes}
# ******
# tf.saved_model.save(..., signatures={'serving_default': serving_model.my_serve})
# ***************
Explanation: Export the model for serving from AI Platform
End of explanation
# Create the model
if NEW_MODEL:
!gcloud ai-platform models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.14 --python-version=3.5
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the AI Platform UI at https://console.cloud.google.com/mlengine/models
End of explanation
# prepare digits to send to online prediction endpoint
digits_float32 = np.concatenate((font_digits, validation_digits[:100-N])) # pixel values in [0.0, 1.0] float range
digits_uint8 = np.round(digits_float32*255).astype(np.uint8) # pixel values in [0, 255] int range
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits_uint8:
# the format for AI Platform online predictions is: one JSON object per line
data = json.dumps({"images": digit.tolist()}) # "images" because that was the name you gave this parameter in the serving function my_serve
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ai-platform predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
predictions = np.stack([json.loads(p) for p in predictions[1:]]) # first element is the name of the output layer: drop it, parse the rest
display_top_unrecognized(digits_float32, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
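As an aside not present in the original notebook, the same endpoint can also be called from Python with the Google API client library; the sketch below assumes the PROJECT, MODEL_NAME, MODEL_VERSION and digits_uint8 variables defined earlier in this notebook, and that the google-api-python-client package is installed.
```
from googleapiclient import discovery

# Online prediction through the AI Platform REST API (v1); illustrative sketch only.
service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL_NAME, MODEL_VERSION)
instances = [{"images": digit.tolist()} for digit in digits_uint8[:5]]  # same payload format as digits.json
response = service.projects().predict(name=name, body={'instances': instances}).execute()
print(response.get('predictions', response))
```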
End of explanation |
10,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sequence alignment
Genetic material such as DNA and proteins comes in sequences made up of different blocks. The study of these sequences and their correlation is invaluable to the field of biology. Since genetic material is likely to mutate over time, it is not easy to be certain of these correlations in most cases. In order to know if two or more sequences have a common origin, we must examine their similarity in a formal way, and be able to quantify it. Since the modifications of the sequences can introduce new blocks as well as delete some, the first step to comparing sequences is aligning them by making the similar regions match. Even then, high levels of similarity do not necessarily imply a common ancestor exists (homology). Our goal in this section is to create a tool that allows us to perform such alignments and give a numeric value of the chance of homology.
ADTs
We will first look at an implementation of Abstract Data Types (ADTs) that will allow us to represent Sequences.
AminoAcid
The choice of genetic material is proteins. In the case of proteins, the blocks that make up the sequences are called amino acids. Therefore, they are the first thing we must be able to represent. The following tuple lists all amino acids and miscellaneous options.
The following class allows us to create AminoAcid objects that represent any of the amino acids from the list. We can then compare, print, hash, test them with any of the defined methods. Do note that we can create them by copying other AminoAcids or by providing their name in any form (long for complete, medium for 3 chars or short for 1 char).
Let's test it a little bit.
Step1: Sequence
That's good and all, but we don't want to align single amino acids do we ? What we want to align is sequences. We can view a Sequence as an ordered list of AminoAcid objects. It should be iterable
Step2: Score
Now that the easy part is done, we need to remember why we're here (it's sequence alignment, just in case). We can align two sequences any number of ways, but we're only interested in alignments that could represent two descendants of the same original sequence. Therefore, we need to assign a score value to each alignment, that will help us see how significative it is.
Because of the way mutations happen, all modifications should not be treated equally. Some amino acids are more related between them than others, therefore making them more likely to mutate into each other. Some changes are more or less likely to happen because of the very structure of the protein. Also, when an amino acid is inserted into or deleted from a sequence (thus creating a gap in the alignment), it usually is not the only one. This tells us that, not only should the score of the alignment depend on each pair that's aligned (meaning, on the same location), but it should also depend on how many gaps there are and how big they are.
The Score class allows us to give a numerical value to each possible pair of amino acids. These values can be set manually one by one, just as they can be loaded from files. The chosen format here was .iij.
Here's an example on how to load a Score object from a .iij file, and access the scores for certains pairs
Step3: Needleman-Wunsch Alignment
I know, I know
Step4: Besides the second one being much more readable, their output is pretty similar. There may be slight differences in the BLOSUM matrix used, responsible for the discrepancy between the scores.
Here is the result of a local alignment between the first two sequences from "maguk-sequences.fasta", calculated by LALIGN with the BLOSUM62 scoring matrix, initial and extended gap penalties of $-12$ and $-2$
Step5: Once more, alignments are quite similar in terms of gap locations, scores and identity/similarity. The results suggest that these sequences might be related, given their high identity and similarity.
For each other pair of sequences (condensed results), we don't get such high changes of homology, as suggested by these condensed results | Python Code:
a1 = AminoAcid("A")
print(a1)
a2 = AminoAcid(a1)
print(a1 == a2)
a3 = AminoAcid("K")
print(a3.getName("long"))
Explanation: Sequence alignment
Genetic material such as DNA and proteins comes in sequences made up of different blocks. The study of these sequences and their correlation is invaluable to the field of biology. Since genetic material is likely to mutate over time, it is not easy to be certain of these correlations in most cases. In order to know if two or more sequences have a common origin, we must examine their similarity in a formal way, and be able to quantify it. Since the modifications of the sequences can introduce new blocks as well as delete some, the first step to comparing sequences is aligning them by making the similar regions match. Even then, high levels of similarity do not necessarily imply a common ancestor exists (homology). Our goal in this section is to create a tool that allows us to perform such alignments and give a numeric value of the chance of homology.
ADTs
We will first look at an implementation of Abstract Data Types (ADTs) that will allow us to represent Sequences.
AminoAcid
The choice of genetic material is proteins. In the case of proteins, the blocks that make up the sequences are called amino acids. Therefore, they are the first thing we must be able to represent. The following tuple lists all amino acids and miscellaneous options.
The following class allows us to create AminoAcid objects that represent any of the amino acids from the list. We can then compare, print, hash, test them with any of the defined methods. Do note that we can create them by copying other AminoAcids or by providing their name in any form (long for complete, medium for 3 chars or short for 1 char).
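The class itself is not reproduced in this record; purely as a hypothetical sketch of the behaviour just described (construction from another AminoAcid or from a name in any of the three forms, comparison, hashing and printing), it could look roughly like this, with the amino-acid tuple truncated to three entries for brevity:
```
# Hypothetical minimal sketch of an AminoAcid-like class (not the project's actual implementation).
_AMINO_ACIDS = (("alanine", "ala", "A"),
                ("cysteine", "cys", "C"),
                ("lysine", "lys", "K"))  # truncated: the real tuple lists all amino acids and extras

class AminoAcid(object):
    def __init__(self, name):
        if isinstance(name, AminoAcid):  # copy construction
            self._names = name._names
            return
        key = str(name).lower()
        for names in _AMINO_ACIDS:
            if key in (n.lower() for n in names):
                self._names = names
                return
        raise ValueError("unknown amino acid: {}".format(name))

    def getName(self, mode="short"):
        return {"long": self._names[0], "medium": self._names[1], "short": self._names[2]}[mode]

    def __eq__(self, other):
        return isinstance(other, AminoAcid) and self._names == other._names

    def __hash__(self):
        return hash(self._names)

    def __str__(self):
        return self.getName("short")
```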
Let's test it a little bit.
End of explanation
s = Sequence("ABX")
print(s)
s.extend("cysteine")
print(s)
print(AminoAcid("alanine") in s)
sCopy = Sequence(s) #This is a deep copy
sAlias = s #This is the same object
del s[1:3]
sAlias[0] = "K"
s.setSeparator("-")
s.setNameMode("long")
print(s, sAlias, sCopy, sep=", ")
sequences = [seq for seq in getSequencesFromFasta(normpath("resources/fasta/SH3-sequence.fasta"))]
print(sequences[0])
Explanation: Sequence
That's good and all, but we don't want to align single amino acids do we ? What we want to align is sequences. We can view a Sequence as an ordered list of AminoAcid objects. It should be iterable : we'll want to go through, access, and count its items. We'll also want to change them, insert new ones, delete some, and do all that transparently, as we would with a list. Finally it would be useful to check whether a sequence contains some other sub-sequence or single amino acid. The following is a class that does just that.
Since we may not want to type every sequence we use by hand, we can also read them from files. The format chosen here is .fasta, but this can be adapted for any other format.
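As a hypothetical illustration of the file-reading side (the notebook's own getSequencesFromFasta helper is not shown here), a bare-bones .fasta parser can be written as a generator:
```
def read_fasta(path):
    # Minimal .fasta reader: yields (header, sequence) pairs.
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)
```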
Let's test a few of the capabilities of this class :
End of explanation
scoring = ScoreMatrix(normpath("resources/blosum/blosum62.iij"), "BLOSUM 62")
print(scoring)
a1, a2 = Sequence("HN") #That's called unpacking, pretty neat huh ?
print(a1, a2, " : ", scoring.getScore(a1, a2))
Explanation: Score
Now that the easy part is done, we need to remember why we're here (it's sequence alignment, just in case). We can align two sequences any number of ways, but we're only interested in alignments that could represent two descendants of the same original sequence. Therefore, we need to assign a score value to each alignment, that will help us see how significative it is.
Because of the way mutations happen, all modifications should not be treated equally. Some amino acids are more related between them than others, therefore making them more likely to mutate into each other. Some changes are more or less likely to happen because of the very structure of the protein. Also, when an amino acid is inserted into or deleted from a sequence (thus creating a gap in the alignment), it usually is not the only one. This tells us that, not only should the score of the alignment depend on each pair that's aligned (meaning, on the same location), but it should also depend on how many gaps there are and how big they are.
The Score class allows us to give a numerical value to each possible pair of amino acids. These values can be set manually one by one, just as they can be loaded from files. The chosen format here was .iij.
Here's an example of how to load a Score object from a .iij file, and access the scores for certain pairs :
End of explanation
a = Align(scoring)
for align in a.globalAlign(sequences[0], sequences[1], -12, -2, False):
print(align)
Explanation: Needleman-Wunsch Alignment
I know, I know : all these lines and still no alignment in sight. What does that title even mean ? Well, in 1970 Saul B. Needleman and Christian D. Wunsch came up with an effective algorithm for aligning sequences. It provides us with the alignments that get the best score, given certain conditions. It uses a scoring system like the one we've just covered, with the addition of gap penalties : negative values added to the score when the alignment creates (initial penalty) and extends (extended penalty) a gap. Here's an overview of its steps :
* Create a matrix with enough rows to fit one sequence ($A$) and enough columns to fit the other ($B$) : each cell $(i, j)$ represents a possible alignment between two amino acids $A_i$ and $B_j$.
* Add an initial row and column to the matrix, with values (scores) determined a certain number of ways. Keep in mind that these cells represent the beginning of an alignment where one sequence only has gaps, same with the last row and column for the end of the alignment.
* Go through every cell in the matrix and calculate its score based on the previous (left, top, left and top) cells, using the following formula where $(V,W,S)_{i,j}$ are 3 values contained in cell $(i,j)$ of the matrix, $Score(A_i, B_j)$ is the score between amino acids $A_i$ and $B_j$, $I$ is the initial gap penalty and $E$ the extended gap penalty.
$$
V(i,j) = max
\left{
\begin{array}{ll}
S(i-1, j) + I\
V(i-1, j) - E
\end{array}
\right.
\quad
W(i,j) = max
\left{
\begin{array}{ll}
S(i, j-1) + I\
V(i, j-1) - E
\end{array}
\right.
\quad
S(i,j) = max
\left{
\begin{array}{ll}
S(i-1, j-1) + Score(A_i, B_j)\
V(i, j)\
W(i, j)
\end{array}
\right.
$$
* Backtrack from some point of the matrix (the end of the alignment) to some other (the beginning), only passing by permitted cells. The cells allowed after cell $(i,j)$ are the ones where the value $S(i,j)$ comes from :
* left if $S(i,j)=W(i,j)$ : sequence $A$ has a gap
* top if $S(i,j)=V(i,j)$ : sequence $B$ has a gap
* diagonal if $S(i,j)=S(i-1,j-1) + Score(A_i, B_j)$ : sequences $A$ and $B$ are aligned
Different types of alignments can be obtained by tweaking this algorithm.
* Global alignments aim to align both sequences completely. In order to do that, we initialize the first row and sequences with multiples of $I$ and $E$, thus giving us negative values matching the gap required to get there. Backtracking starts at the end of the matrix and ends at the beginning.
* Local alignments aim to produce the alignment with the best score, without regard for their length. We do not initialize the first row and column : completing an alignment with only gaps has no interest score-wise. Backtracking starts at the highest value(s) in the matrix and ends as soon as we reach a value of $0$. Local suboptimal alignments can be found by clearing the values of the local optimal alignment in the matrix, reevaluating scores for further rows and columns, and backtracking again.
* Semiglobal alignments are intended for a global-like alignment between sequences that only overlap partially, or that have a great difference in size (one is included in the other). The first row and column are not initialized for the same reason as with local alignments. Backtracking starts at the highest value(s) but ends when we reach either the first line or first column (therefore finishing one sequence).
The following is a class that allows us to represent two aligned sequences, along with information about the way they were aligned and the result. Identity is the number of equal amino acids that are aligned, similarity is the number of similar (meaning equal or with a non negative score) amino acids that are aligned.
In order to use this class, we must (finally) implement the actual alignment algorithm. This is done by the following class.
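As an illustrative aside (this is not the notebook's AlignMatrix class), the sketch below shows the matrix fill and backtrack for a global alignment in a deliberately simplified form: it uses a single linear gap penalty instead of the separate initial/extended penalties tracked by $V$ and $W$ above, and a plain scoring function instead of the Score class.
```
# Simplified Needleman-Wunsch sketch (linear gap penalty, illustrative only).
def needleman_wunsch(a, b, score, gap=-4):
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            S[i][j] = max(S[i - 1][j - 1] + score(a[i - 1], b[j - 1]),  # align a_i with b_j
                          S[i - 1][j] + gap,                            # gap in b
                          S[i][j - 1] + gap)                            # gap in a
    # Backtrack from the bottom-right corner to the top-left corner.
    align_a, align_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + score(a[i - 1], b[j - 1]):
            align_a.append(a[i - 1]); align_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i - 1][j] + gap:
            align_a.append(a[i - 1]); align_b.append('-'); i -= 1
        else:
            align_a.append('-'); align_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(align_a)), ''.join(reversed(align_b)), S[n][m]

# Toy usage with a trivial match/mismatch score instead of a BLOSUM matrix.
print(needleman_wunsch("GATTACA", "GCATGCU", lambda x, y: 1 if x == y else -1, gap=-2))
```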
Here is the result of a global alignment between the first two sequences from "SH3-sequences.fasta", calculated by LALIGN with the BLOSUM62 scoring matrix, initial and extended gap penalties of $-12$ and $-2$ :
```
n-w opt: 69 Z-score: 152.4 bits: 31.6 E(1): 6.7e-25
global/global (N-W) score: 69; 29.0% identity (62.9% similar) in 62 aa overlap (1-62:1-58)
GGVTTFVALYDYESRTETDLSFKKGERLQIVNNTEGDWWLAHSLSTGQTGYIPSNYVAPSDS
.: ::... .. .::::.:. :...:. . : .:. :. :.::.::. .
---MEAIAKYDFKATADDELSFKRGDILKVLNEECDQNWYKAELN-GKDGFIPKNYIEMKPH
```
Next is the result from the AlignMatrix class
End of explanation
for aligned in a.localAlign(sequences[0], sequences[1], -12, -2):
aligned.chunkSize = 60
print(aligned)
break
Explanation: Besides the second one being much more readable, their output is pretty similar. There may be slight differences in the BLOSUM matrix used, responsible for the discrepancy between the scores.
Here is the result of a local alignment between the first two sequences from "maguk-sequences.fasta", calculated by LALIGN with the BLOSUM62 scoring matrix, initial and extended gap penalties of $-12$ and $-2$ :
```
Waterman-Eggert score: 2677; 1048.2 bits; E(1) < 0
69.5% identity (86.3% similar) in 767 aa overlap (153-904:61-817)
SHSHISPIKPTEA-VLPSPPTVPVIPVLPVPAENTVILP-TIPQANPPPVLVNTDSLETP
:.. .: :.: ..:. : .: :::...: : . :. : . .: : :
SQAGATPTPRTKAKLIPTGRDVGPVPPKPVPGKSTPKLNGSGPSWWPECTCTNRDWYEQ-
TYVNGTDADYEYEEITLERGNSGLGFSIAGGTDNPHIGDDSSIFITKIITGGAAAQDGRL
:::.:. ..::::.::::::::::::::: ::::. :: .::::::: :::::.::::
--VNGSDGMFKYEEIVLERGNSGLGFSIAGGIDNPHVPDDPGIFITKIIPGGAAAMDGRL
RVNDCILRVNEVDVRDVTHSKAVEALKEAGSIVRLYVKRRKPVSEKIMEIKLIKGPKGLG
::::.:::::::: .:.::.::::::::: .::: :.::.: : :::..:.:::::::
GVNDCVLRVNEVDVSEVVHSRAVEALKEAGPVVRLVVRRRQPPPETIMEVNLLKGPKGLG
FSIAGGVGNQHIPGDNSIYVTKIIEGGAAHKDGKLQIGDKLLAVNNVCLEEVTHEEAVTA
::::::.::::::::::::.:::::::::.:::.:::::.::::::. :..: :::::..
FSIAGGIGNQHIPGDNSIYITKIIEGGAAQKDGRLQIGDRLLAVNNTNLQDVRHEEAVAS
LKNTSDFVYLKVAKPTSMYMNDGYAPPDITNSSSQPVDNHVSPSSFLGQTPA--------
::::::.:::::::: :...:: ::::: ... . .:::.: .: :: :
LKNTSDMVYLKVAKPGSLHLNDMYAPPDYASTFTALADNHISHNSSLGYLGAVESKVSYP
-----SPARYSPVSKAVLGDDEITREPRKVVLHRGSTGLGFNIVGGEDGEGIFISFILAG
:.::::. . .:.....::::::..::.:::::::::::::::::::.::::::
APPQVPPTRYSPIPRHMLAEEDFTREPRKIILHKGSTGLGFNIVGGEDGEGIFVSFILAG
GPADLSGELRKGDRIISVNSVDLRAASHEQAAAALKNAGQAVTIVAQYRPEEYSRFEAKI
::::::::::.::::.:::.:.:: :.:::::::::.:::.::::::::::::::::.::
GPADLSGELRRGDRILSVNGVNLRNATHEQAAAALKRAGQSVTIVAQYRPEEYSRFESKI
HDLREQMMNSSISSGSGSLRTSQKRSLYVRALFDYDKTKDSGLPSQGLNFKFGDILHVIN
:::::::::::.::::::::::.:::::::::::::.:.:: ::::::.:..::::::::
HDLREQMMNSSMSSGSGSLRTSEKRSLYVRALFDYDRTRDSCLPSQGLSFSYGDILHVIN
ASDDEWWQARQVTPDGESDEVGVIPSKRRVEKKERARLKTVKFNSKTRDKGEIPDDMGSK
:::::::::: ::: :::...::::::.:::::::::::::::...: : : ..
ASDDEWWQARLVTPHGESEQIGVIPSKKRVEKKERARLKTVKFHART---GMIESNRDFP
GLKHVTSNASDSESSYRGQEEYVLSYEPVNQQEVNYTRPVIILGPMKDRINDDLISEFPD
:: :. . .. .:::. .::::::..::..:.::::::::::::.:::::::::
GL----SDDYYGAKNLKGQEDAILSYEPVTRQEIHYARPVIILGPMKDRVNDDLISEFPH
KFGSCVPHTTRPKRDYEVDGRDYHFVTSREQMEKDIQEHKFIEAGQYNNHLYGTSVQSVR
::::::::::::.:: ::::.:::::.::::::::::..:::::::.:..:::::.::::
KFGSCVPHTTRPRRDNEVDGQDYHFVVSREQMEKDIQDNKFIEAGQFNDNLYGTSIQSVR
EVAEKGKHCILDVSGNAIKRLQIAQLYPISIFIKPKSMENIMEMNKRLTEEQARKTFERA
:::.::::::::::::::::: ::::::.:::::::.: .::::.: : :::.: ...:
AVAERGKHCILDVSGNAIKRLQQAQLYPIAIFIKPKSIEALMEMNRRQTYEQANKIYDKA
MKLEQEFTEHFTAIVQGDTLEDIYNQVKQIIEEQSGSYIWVPAKEKL
::::::: :.::::::::.::.:::..:::::.::: :::::. :::
MKLEQEFGEYFTAIVQGDSLEEIYNKIKQIIEDQSGHYIWVPSPEKL
```
End of explanation
for i in range(len(sequences)-1):
for j in range(i+1, len(sequences)):
for align in a.localAlign(sequences[i], sequences[j], -12, -2):
align.condensed = True
print(align)
break
Explanation: Once more, alignments are quite similar in terms of gap locations, scores and identity/similarity. The results suggest that these sequences might be related, given their high identity and similarity.
For each other pair of sequences (condensed results), we don't get such high changes of homology, as suggested by these condensed results :
End of explanation |
10,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: Analysis
Experiment Details
Step3: Results | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import rcParams
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
Explanation: Experiment:
Replicate the "How So Dense" experiments using the new dynamic sparse framework, and compare the results with those published in the paper.
Motivation.
Ensure our code has no known bugs before proceeding with further experimentation.
Ensure the "How So Dense" experiments are replicable.
Conclusion
End of explanation
# exps = ['replicate_hsd_test2']
# exps = ['replicate_hsd_debug2']
exps = ['replicate_hsd_debug5_2x']
# exps = ['replicate_hsd_debug5_2x', 'replicate_hsd_debug6_8x']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace hebbian prine
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
num_epochs = 25
# Did any trials fail?
df[df["epochs"]<num_epochs]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=num_epochs]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<num_epochs
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.2f} ± {:.2f}".format(s.mean()*100, s.std()*100)
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'val_acc_last': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'val_acc_last': stats,
'model': ['count']})).round(round)
def agg_paper(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max': mean_and_std,
'val_acc_last': mean_and_std,
'train_acc_last': mean_and_std,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max': mean_and_std,
'val_acc_last': mean_and_std,
'train_acc_last': mean_and_std,
'model': ['count']})).round(round)
Explanation: Analysis
Experiment Details
End of explanation
agg(['model'])
agg_paper(['model'])
Explanation: Results
End of explanation |
10,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABC calibration of $I_\text{Na}$ in Nygren model using original dataset.
Step1: Initial set-up
Load experiments used by Nygren $I_\text{Na}$ model in the publication
Step2: Load the myokit modelfile for this channel.
Step3: Combine model and experiments to produce
Step4: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space.
Step5: Run ABC-SMC inference
Set-up path to results database.
Step6: Test theoretical number of particles for approximately 2 particles per dimension in the initial sampling of the parameter hyperspace.
Step7: Initialise ABCSMC (see pyABC documentation for further details).
IonChannelDistance calculates the weighting applied to each datapoint based on the experimental variance.
Step8: Run calibration with stopping criterion of particle 1\% acceptance rate.
Step9: Analysis of results
Step10: Plot summary statistics compared to calibrated model output.
Step11: Plot parameter distributions
Step12: Plot traces
Step13: Custom plotting | Python Code:
import os, tempfile
import logging
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from ionchannelABC import theoretical_population_size
from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor
from ionchannelABC.experiment import setup
from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom
import myokit
from pyabc import Distribution, RV, History, ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler
from pyabc.populationstrategy import ConstantPopulationSize
Explanation: ABC calibration of $I_\text{Na}$ in Nygren model using original dataset.
End of explanation
from experiments.ina_sakakibara import (sakakibara_act,
sakakibara_inact)
Explanation: Initial set-up
Load experiments used by Nygren $I_\text{Na}$ model in the publication:
- Steady-state activation [Sakakibara1992]
- Steady-state inactivation [Sakakibara1992]
End of explanation
modelfile = 'models/nygren_ina.mmt'
Explanation: Load the myokit modelfile for this channel.
End of explanation
observations, model, summary_statistics = setup(modelfile,
sakakibara_act,
sakakibara_inact)
assert len(observations)==len(summary_statistics(model({}))) # check output correct
# Test the output of the unaltered model.
g = plot_sim_results(modelfile,
sakakibara_act,
sakakibara_inact)
Explanation: Combine model and experiments to produce:
- observations dataframe
- model function to run experiments and return traces
- summary statistics function to accept traces
End of explanation
limits = {'ina.s1': (0, 1),
'ina.r1': (0, 100),
'ina.r2': (0, 20),
'ina.q1': (0, 200),
'ina.q2': (0, 20),
'log_ina.r3': (-6, -3),
'ina.r4': (0, 100),
'ina.r5': (0, 20),
'log_ina.r6': (-6, -3),
'log_ina.q3': (-3., 0.),
'ina.q4': (0, 200),
'ina.q5': (0, 20),
'log_ina.q6': (-5, -2),
'log_ina.q7': (-3., 0.),
'log_ina.q8': (-4, -1)}
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
Explanation: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space.
End of explanation
db_path = "sqlite:///" + os.path.join(tempfile.gettempdir(), "nygren_ina_original.db")
# Add logging for additional information during run.
logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
Explanation: Run ABC-SMC inference
Set-up path to results database.
End of explanation
pop_size = theoretical_population_size(2, len(limits))
print("Theoretical minimum population size is {} particles".format(pop_size))
Explanation: Test theoretical number of particles for approximately 2 particles per dimension in the initial sampling of the parameter hyperspace.
End of explanation
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
exp_id=list(observations.exp_id),
variance=list(observations.variance),
delta=0.05),
population_size=ConstantPopulationSize(10000),
summary_statistics=summary_statistics,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(initial_epsilon=20),
sampler=MulticoreEvalParallelSampler(n_procs=8),
acceptor=IonChannelAcceptor())
# Convert observations to dictionary format for calibration
obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}
# Initialise run and set ID for this run.
abc_id = abc.new(db_path, obs)
Explanation: Initialise ABCSMC (see pyABC documentation for further details).
IonChannelDistance calculates the weighting applied to each datapoint based on the experimental variance.
End of explanation
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
Explanation: Run calibration with stopping criterion of particle 1\% acceptance rate.
End of explanation
history = History(db_path)
df, w = history.get_distribution(m=0)
df.describe()
Explanation: Analysis of results
End of explanation
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_sim_results(modelfile,
sakakibara_act,
sakakibara_inact,
df=df, w=w)
#xlabels = ["voltage (mV)"]*2
#ylabels = ["steady-state activation", "steady-state inactivation"]
#for ax, xl in zip(g.axes.flatten(), xlabels):
# ax.set_xlabel(xl)
#for ax, yl in zip(g.axes.flatten(), ylabels):
# ax.set_ylabel(yl)
#for ax in g.axes.flatten():
# ax.set_title('')
plt.tight_layout()
#g.savefig('figures/ina/nyg_original_sum_stats.pdf')
Explanation: Plot summary statistics compared to calibrated model output.
End of explanation
m,_,_ = myokit.load(modelfile)
originals = {}
for name in limits.keys():
if name.startswith("log"):
name_ = name[4:]
else:
name_ = name
val = m.value(name_)
if name.startswith("log"):
val_ = np.log10(val)
else:
val_ = val
originals[name] = val_
act_params = ['ina.r1','ina.r2','log_ina.r3','ina.r4','ina.r5','log_ina.r6']
df_act = df[act_params]
limits_act = dict([(key, limits[key]) for key in act_params])
originals_act = dict([(key, originals[key]) for key in act_params])
sns.set_context('paper')
g = plot_kde_matrix_custom(df_act, w, limits=limits_act, refval=originals_act)
inact_params = ['ina.q1','ina.q2','log_ina.q3','ina.q4','ina.q5','log_ina.q6','log_ina.q7','log_ina.q8','ina.s1']
df_inact = df[inact_params]
limits_inact = dict([(key, limits[key]) for key in inact_params])
originals_inact = dict([(key, originals[key]) for key in inact_params])
sns.set_context('paper')
g = plot_kde_matrix_custom(df_inact, w, limits=limits_inact, refval=originals_inact)
Explanation: Plot parameter distributions
End of explanation
from ionchannelABC.visualization import plot_experiment_traces
h_nyg_orig = History("sqlite:///results/nygren/ina/original/nygren_ina_original.db")
df, w = h_nyg_orig.get_distribution(m=0)
modelfile = 'models/nygren_ina.mmt'
# Functions to extract a portion of the trace from experiments
def split_act(data):
out = []
for d in data.split_periodic(11000, adjust=True):
d = d.trim(9950, 10200, adjust=False)
out.append(d)
return out
def split_inact(data):
out = []
for d in data.split_periodic(11030, adjust=True):
d = d.trim(10950, 11030, adjust=False)
out.append(d)
return out
sns.set_context('talk')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_experiment_traces(modelfile, ['ina.g'],
[split_act, split_inact],
sakakibara_act,
sakakibara_inact,
df=df, w=w,
log_interval=1)
xlabel = "time (ms)"
ylabels = ["voltage (mV)", "normalised current"]
for ax in g.axes[0,:]:
ax.set_xlabel(xlabel)
for ax, yl in zip(g.axes, ylabels):
ax[0].set_ylabel(yl)
for ax in g.axes.flatten():
ax.set_title('')
for ax in g.axes[1,:]:
ax.set_ylim([-0.05, 1.05])
plt.tight_layout()
#g.savefig('figures/ina/nyg_original_traces.pdf')
Explanation: Plot traces
End of explanation
from pyabc.visualization import plot_kde_1d, plot_kde_2d
from pyabc.visualization.kde import kde_1d
import seaborn as sns
sns.set_context('poster')
x = 'ina.q1'
y = 'ina.q2'
f, ax = plt.subplots(nrows=2, ncols=2, figsize=(9,8),
sharex='col', sharey='row',
gridspec_kw={'width_ratios': [5, 1],
'height_ratios': [1, 5],
'hspace': 0,
'wspace': 0})
plot_kde_1d(df, w, x, xmin=limits[x][0], xmax=limits[x][1], refval=originals_inact, ax=ax[0][0], numx=500)
x_vals, pdf = kde_1d(df, w, y, xmin=limits[y][0], xmax=limits[y][1], numx=500, kde=None)
ax[1][1].plot(pdf, x_vals)
ax[1][1].set_ylim(limits[y][0], limits[y][1])
ax[1][1].axhline(originals_inact[y], color='C1', linestyle='dashed')
alpha = w / w.max()
colors = np.zeros((alpha.size, 4))
colors[:, 3] = alpha
ax[1][0].scatter(df[x], df[y], color=colors)
ax[1][0].scatter([originals_inact[x]], [originals_inact[y]], color='C1')
# cleaning up
ax[0][0].set_xlabel('')
ax[0][0].set_ylabel('')
ax[1][0].set_ylabel(y[-2:])
ax[1][0].set_xlabel(x[-2:])
labels = [item.get_text() for item in ax[0][1].get_yticklabels()]
ax[0][1].set_yticklabels(['',]*len(labels))
ax[1][1].set_ylabel('')
plt.tight_layout()
#f.savefig('figures/ina/nygren_original_q1q2_scatter.pdf')
Explanation: Custom plotting
End of explanation |
10,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython Notebook for turning in solutions to the problems in the Essentials of Paleomagnetism Textbook by L. Tauxe
Problems in Chapter 1
Problem 1
Step1: Problem 2a
Step2: Problem 2b
Step3: Problem 3a
Step4: Problem 3b
b) To compare 10 $\mu$T with the field produced by an axial dipole of 80 ZAm$^2$, we need the second part of Equation 1.8 in the text | Python Code:
# code to calculate H_r and H_theta
import numpy as np
deg2rad=np.pi/180. # converts degrees to radians
# write code here to calculate H_r and H_theta and convert to B_r, B_theta
# This is how you print out nice formatted numbers
# floating point variables have the syntax:
# '%X.Yf'%(FP_variable) where X is the number of digits and Y is the
# number of digits after the decimal.
# uncomment this line to print
#print 'H_r= ','%7.1f'%(H_r), 'H_theta= ', '%7.1f'%(H_theta)
# to format integers: use the syntax:
# '%i'%(INT_variable)
#print 'B_r = ','%i'%(B_r*1e6), 'uT' # B_r in microtesla
#print 'B_theta =','%i'%(B_theta*1e6),'uT' # B_theta in microtesla
Explanation: IPython Notebook for turning in solutions to the problems in the Essentials of Paleomagnetism Textbook by L. Tauxe
Problems in Chapter 1
Problem 1:
Given that:
$$
\nabla V_m = - \bigl(
{ {\partial}\over {\partial r} }
{ {m \cos \theta} \over {4 \pi r^2}} +
{ {1\over r} }
{ {\partial}\over {\partial \theta} }
{ { m\cos \theta}\over { 4 \pi r^2} }
\bigr)
$$
it follows that:
Complete this text using LaTeX formatting; see the above example. Notice how stand-alone equations look like this:
$$
\hbox{Type your equation here}
$$
and inline math looks like this: $\alpha,\beta,\gamma$
End of explanation
# write a function here with the form
def myfunc(B_in): # edit this line for your own input variables!
# do some math here to define OUTPUT_VARIABLES
B_out=B_in*1.
return B_out
B=42 # define your input variables here
print myfunc(B)
Explanation: Problem 2a:
Some text to describe what you are doing. (Edit this!)
End of explanation
# take your program from 2a and modify it to respond to some input flag
# e.g.:
def myfunc(B_in,units):
if units=='cgs':
# do cgs conversion.....
pass
elif units=='SI':
# do SI conversion
pass
Explanation: Problem 2b:
End of explanation
# Write code here to calculate the moment, m and print it in ZAm^2
Explanation: Problem 3a:
a) This problem boils down to finding the value for ${\bf m}$ in Equation 1.8 in Chapter 1 that would give rise to a radial field of 10$\mu$T at a depth of 2890 km (radius of the Earth minus radius of the dipole source).
Write text here about how you solve the problem....
End of explanation
# Write some code here that calculates H_r, H_theta, the total field
# in H and converted to microtesla. Use nicely formated print statements
# display your results.
Explanation: Problem 3b
b) To compare 10 $\mu$T with the field produced by an axial dipole of 80 ZAm$^2$, we need the second part of Equation 1.8 in the text:
Type your answer here with nice LaTeX formatting.
End of explanation |
10,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but the implementation of the neural network is (mostly) left up to you. After you've submitted this project, feel free to explore the data and the model further.
Step1: Load and prepare the data
A critical step in building a neural network is preparing the data correctly. Variables on different scales make it difficult for the network to learn the correct weights efficiently. Below, we've provided the code to load and prepare the data. You'll learn more about this code soon!
Step2: Checking out the data
This dataset contains the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the cnt column is the total count of riders. You can see the first few rows of the data above.
The plot below shows the number of riders over roughly the first 10 days in the dataset (some days don't have exactly 24 entries, so it's not exactly 10 days). You can see the hourly rentals here. This data is pretty complicated! Weekends have lower ridership, and there are spikes when people commute to and from work on weekdays. The data above also includes temperature, humidity and wind speed, all of which affect the number of riders. You'll be trying to capture all of this with your model.
Step3: Looking at the daily ride counts, comparing 2011 and 2012
Step4: Dummy variables
Below are some categorical variables such as season, weather and month. To include these in our model, we need to create binary dummy variables. This is easy to do with Pandas' get_dummies().
Step5: Scaling the target variables
To make training the network easier, we'll standardize each of the continuous variables, i.e. shift and scale them so that they have zero mean and a standard deviation of 1.
We save the scaling factors so that we can convert the data back when we use the network for predictions.
Splitting the data into training, testing and validation sets
We'll save roughly the last 21 days of data as a test set, to be used after the network has been trained. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating the network as it is being trained. Since this is time-series data, we train on historical data and then try to predict future data (the validation set).
Step7: Time to build the network
Below you'll build your own network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function as its activation function. The output layer has only one node and is used for the regression: the output of that node is the same as its input, i.e. the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking a threshold into account, is called an activation function. We work through each layer of the network and calculate the outputs of each neuron. All the outputs from one layer become the inputs to the neurons of the next layer. This process is called forward propagation.
We use weights to propagate signals from the input layer to the output layer in the neural network. We also use weights to propagate the error from the output layer back through the network in order to update the weights. This is called backpropagation.
Hint: for the backpropagation you'll need the derivative of the output activation function ($f(x) = x$). If you aren't familiar with calculus, this function is simply the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Your tasks are:
Implement the sigmoid activation function. Set self.activation_function in __init__ to your sigmoid function.
Implement the forward pass in the train method.
Implement the backpropagation algorithm in the train method, including calculating the output error.
Implement the forward pass in the run method.
Step8: Unit tests
Run these unit tests to check that your network implementation is correct. This helps you make sure the network is implemented correctly before you start training it. These tests must pass for the project to be accepted.
Step9: Training the network
Now you'll set the hyperparameters for the network. The strategy is to choose hyperparameters such that the error on the training set is low but the model doesn't overfit the data. If you train the network too long, or with too many hidden nodes, it can become overly specific to the training set and fail to generalize to the validation set: while the training loss keeps dropping, the validation loss will start to rise.
You'll also train the network using stochastic gradient descent (SGD): for each training pass you take a random sample of the data instead of the whole dataset. Compared with ordinary gradient descent you need more passes, but each one is much faster. This makes training the network more efficient. You'll learn more about SGD later.
Choosing the number of iterations
This is the number of batches sampled from the training data when training the network. The more iterations, the better the model fits the data. However, with too many iterations the model won't generalize well to other data; this is called overfitting. Choose a number that keeps the training loss low while the validation loss stays moderate. When you start overfitting, you'll see the training loss keep decreasing while the validation loss starts to increase.
Choosing the learning rate
This scales the size of the weight updates. If it's too large, the weights blow up and the network can't fit the data. A good starting point is 0.1. If the network has trouble fitting the data, try lowering the learning rate. Note that the lower the learning rate, the smaller the weight-update steps and the longer it takes the network to converge.
Choosing the number of hidden nodes
The more hidden nodes, the more accurate the model's predictions can be. Try different numbers of hidden nodes and see how it affects the performance. You can look at the losses dictionary for a metric of the network's performance. If there are too few hidden units, the model doesn't have enough capacity to learn; with too many, there are too many directions the learning can take. The trick is to find the right balance.
Step10: Check out your predictions
Use the test data to see how well your network is modeling the data. If something is completely wrong, make sure every step in the network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but the implementation of the neural network is (mostly) left up to you. After you've submitted this project, feel free to explore the data and the model further.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in building a neural network is preparing the data correctly. Variables on different scales make it difficult for the network to learn the correct weights efficiently. Below, we've provided the code to load and prepare the data. You'll learn more about this code soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt', figsize=(10,4))
Explanation: Checking out the data
This dataset contains the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the cnt column is the total count of riders. You can see the first few rows of the data above.
The plot below shows the number of riders over roughly the first 10 days in the dataset (some days don't have exactly 24 entries, so it's not exactly 10 days). You can see the hourly rentals here. This data is pretty complicated! Weekends have lower ridership, and there are spikes when people commute to and from work on weekdays. The data above also includes temperature, humidity and wind speed, all of which affect the number of riders. You'll be trying to capture all of this with your model.
End of explanation
day_rides = pd.read_csv('Bike-Sharing-Dataset/day.csv')
day_rides = day_rides.set_index(['dteday'])
day_rides.loc['2011-08-01':'2011-12-31'].plot(y='cnt', figsize=(10,4))
day_rides.loc['2012-08-01':'2012-12-31'].plot(y='cnt', figsize=(10,4))
Explanation: Looking at the daily ride counts, comparing 2011 and 2012
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Dummy variables
Below are some categorical variables such as season, weather and month. To include these in our model, we need to create binary dummy variables. This is easy to do with Pandas' get_dummies().
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Scaling the target variables
To make training the network easier, we'll standardize each of the continuous variables, i.e. shift and scale them so that they have zero mean and a standard deviation of 1.
We save the scaling factors so that we can convert the data back when we use the network for predictions.
Splitting the data into training, testing and validation sets
We'll save roughly the last 21 days of data as a test set, to be used after the network has been trained. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We split the remaining data into two sets: one for training and one for validating the network after training. Because the data has a time-series character, we train on historical data and then try to predict future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
# def sigmoid(x):
# return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here
# self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            #### Forward pass ####
            # Hidden layer: weighted sum of the inputs, then sigmoid activation.
            hidden_inputs = X[None, :] @ self.weights_input_to_hidden      # signals into hidden layer
            hidden_outputs = self.activation_function(hidden_inputs)       # signals from hidden layer
            # Output layer: the activation is f(x) = x, so the output equals the weighted sum.
            final_inputs = hidden_outputs @ self.weights_hidden_to_output  # signals into final output layer
            final_outputs = final_inputs                                   # signals from final output layer
            #### Backward pass ####
            # Output error: difference between the desired target and the actual output.
            error = y - final_outputs
            # The derivative of f(x) = x is 1, so the output error term is the error itself.
            output_error_term = error * 1
            # Hidden layer's contribution to the error.
            hidden_error = output_error_term @ self.weights_hidden_to_output.T
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
            # Accumulate the weight steps over the batch.
            delta_weights_i_h += X[:, None] @ hidden_error_term
            delta_weights_h_o += hidden_outputs.T * output_error_term
        # Apply the gradient descent step, averaged over the batch.
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records  # update hidden-to-output weights
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records   # update input-to-hidden weights
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
        #### Forward pass ####
        hidden_inputs = features @ self.weights_input_to_hidden        # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)       # signals from hidden layer
        final_inputs = hidden_outputs @ self.weights_hidden_to_output  # signals into final output layer
        final_outputs = final_inputs                                   # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Building the network
Below you will build your own network. The structure and the backward pass are already provided; you will implement the forward pass. You also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer uses the sigmoid function as its activation function. The output layer has a single node and is used for the regression output: its output equals its input, i.e. the activation function is $f(x)=x$. We work through every layer of the network and compute the output of every neuron; the outputs of one layer become the inputs to the neurons of the next layer. This process is called forward propagation.
We use the weights to propagate signals from the input layer to the output layer, and to propagate the error from the output layer back through the network in order to update the weights. This is called backpropagation.
Hint: for backpropagation you need the derivative of the output activation function ($f(x) = x$). If you are not familiar with calculus: this function is simply the line $y = x$; its slope is the derivative of $f(x)$. (A small illustrative sketch of both activation functions follows the task list below.)
You need to complete the following tasks:
Implement the sigmoid activation function. Set self.activation_function in __init__ to your sigmoid function.
Implement the forward pass in the train method.
Implement the backpropagation algorithm in the train method, including computing the output error.
Implement the forward pass in the run method.
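As a small illustration (an added sketch, not part of the graded project template), the activation functions and derivatives referred to above can be written as plain functions:
import numpy as np
def sigmoid(x):
    # hidden-layer activation
    return 1.0 / (1.0 + np.exp(-x))
def sigmoid_prime(x):
    # derivative of the sigmoid, used in the hidden error term
    s = sigmoid(x)
    return s * (1.0 - s)
def identity(x):
    # output-layer activation f(x) = x (regression output)
    return x
def identity_prime(x):
    # the slope of y = x is 1 everywhere
    return 1.0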
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check whether your network is implemented correctly. This helps you make sure the network works before you start training it. These tests must pass for the project to be accepted.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 4000
learning_rate = 0.5
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
axes = plt.gca()
axes.plot(losses['train'], label='Training loss')
axes.plot(losses['validation'], label='Validation loss')
axes.legend()
_ = axes.set_ylim([0,3])
Explanation: Training the network
Now you will set the network's hyperparameters. The strategy is to choose hyperparameters that give a small error on the training set without overfitting. If the network is trained too long or has too many hidden nodes, it becomes too specialised to the training set and fails to generalise to the validation set: the validation loss starts to rise while the training loss keeps falling.
You will also train the network with stochastic gradient descent (SGD): each training step uses a random sample of the data instead of the whole dataset. This takes more steps than plain gradient descent, but each step is faster, so training is more efficient. You will learn more about SGD later.
Choosing the number of iterations
This is the number of batches sampled from the training data during training. More iterations give a better fit, but too many lead to overfitting: the training loss keeps falling while the validation loss starts to rise. Choose a number that gives a low training loss while the validation loss stays moderate.
Choosing the learning rate
The learning rate scales the size of the weight updates. If it is too large, the weights blow up and the network fails to fit the data. A good starting point is 0.1; if the network has trouble fitting the data, try lowering it. The lower the learning rate, the smaller the update steps and the longer the network takes to converge.
Choosing the number of hidden nodes
The more hidden nodes, the more accurate the model's predictions can be. Try different numbers and see how performance changes, using the losses dictionary as a measure. Too few hidden units and the model lacks capacity; too many and there are too many directions to explore. The trick is to find the right balance. (A sketch of a simple hyperparameter sweep follows this explanation.)
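A minimal sketch of how such a sweep could be organised (a hypothetical helper, not part of the project template; it assumes the NeuralNetwork class, MSE and the data frames defined in this notebook):
def sweep_hidden_nodes(candidates=(5, 10, 20, 30), iterations=500, learning_rate=0.5):
    results = {}
    for hn in candidates:
        net = NeuralNetwork(train_features.shape[1], hn, 1, learning_rate)
        for _ in range(iterations):
            batch = np.random.choice(train_features.index, size=128)
            net.train(train_features.loc[batch].values,
                      train_targets.loc[batch]['cnt'])
        # validation loss as the selection criterion
        results[hn] = MSE(net.run(val_features).T, val_targets['cnt'].values)
    return results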
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Checking the predictions
Use the test data to see how well the network models the data. If it is completely off, make sure every step of the network is implemented correctly.
End of explanation |
10,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 01
The first assignment will be announced today.
It is due the second Wednesday, after two tutorial sessions have taken place.
The first part of the course:
Numerical derivatives and integrals.
Handling errors due to finite precision.
Step1: Note
Step3: Error handling
Step4: Effects of precision on integrals
Round-off error
Computing the derivative of $\sin(x)$
Now we use numpy
Step5: Lecture 02 (continuation of Lecture 01)
Numerical integrals
$$g(x) = \int_{a}^{x} f(x')dx'$$
Is equivalent to solving a differential equation
Step6: Simpson's rule
Idea | Python Code:
import matplotlib.pyplot as plt
import matplotlib as mp
import numpy as np
import math
# Esta linea hace que los graficos aparezcan en el notebook en vez de una ventana nueva.
%matplotlib inline
Explanation: Lecture 01
The first assignment will be announced today.
It is due the second Wednesday, after two tutorial sessions have taken place.
The first part of the course:
Numerical derivatives and integrals.
Handling errors due to finite precision.
End of explanation
# Cambia el tamano de los ticks (los puntos de los ejes)
mp.rcParams['xtick.labelsize'] = 13
mp.rcParams['ytick.labelsize'] = 13
Explanation: Note:
matplotlib.org -> the pyplot documentation.
End of explanation
# Parametros iniciales.
e = 1.
k_factorial = 1.
N_max = 10
e_vs_n = [e] # Lista que va a contener los elementos de la serie.
for i in range(1,N_max): # Ciclo que calcula los elementos de la serie y los suma.
k_factorial *= i
e += 1. / k_factorial
e_vs_n.append(e)
# Instruccion para imprimir nuestra aproximacion a e en cada iteracion.
for i in range(N_max):
print(i, e_vs_n[i])
plt.plot(range(N_max), e_vs_n) # Genera un grafico de e_vs_n v
plt.axhline(math.e, color='0.5') # Genera una linea en 'e'
plt.ylim(0,3) # Cambia los limites del eje y del grafico
# Cambia los labels del grafico.
plt.xlabel('N', fontsize=20)
plt.ylabel('$e_{N}$', fontsize=20)
# Se hace un ciclo for dentro de una lista. i.e. Comprehension list
diferencia = [math.fabs(e_i - math.e) for e_i in e_vs_n] # Nota. fabs convierte a float y luego calcular abs
# Se hace un grafico con escala logaritmica en el eje y.
plt.plot(range(N_max), diferencia)
plt.yscale('log')
plt.xlabel('N', fontsize=20)
plt.ylabel('$e_N - e_{real}$', fontsize=20)
# NOTE:
# The cell above is equivalent to doing
# plt.semilogy(range(N_max), diferencia)
# plt.xlabel('N', fontsize=20)
# plt.ylabel('$e_N - e_{real}$', fontsize=20)
Explanation: Error handling:
Truncation error
Computing the number e through its Taylor expansion: $\sum_{k=0}^{\infty}\frac{x^{k}}{k!}$
Basic form:
End of explanation
import numpy as np
epsilon = np.logspace(-1, -15, 14, base=10.)
print(epsilon)
print(type(e_vs_n))
print(type(epsilon))
dsindx = (np.sin(1.+epsilon) - np.sin(1.)) / epsilon
print(dsindx)
plt.semilogx(epsilon, dsindx - np.cos(1.), 'o')
plt.axhline(0, color='0.8')
plt.xlabel('epsilon', fontsize=20)
plt.ylabel('$\\frac{d}{dx}\\sin(x) - \\cos(x)$', fontsize=20)
_ = plt.xticks(epsilon[::2])
Explanation: Effects of precision on integrals
Round-off error
Computing the derivative of $\sin(x)$
Now we use numpy
End of explanation
def integral_trap(f, intervalo):
    '''
    Numerical integration using the composite trapezoidal rule.
    Note that the grid must be equally spaced for this method.
    Parameters
    ----------
    f : numpy.ndarray
        Integrand evaluated on the grid points.
    intervalo : numpy.ndarray
        Integration grid.
    Returns
    -------
    res : double
        Result of the integration.
    '''
    res = 0.                              # accumulator for the final result
    dx = intervalo[1] - intervalo[0]      # grid spacing
    for i in range(len(intervalo) - 1):   # add up the area of each trapezoid
        res += f[i] + f[i + 1]
    res *= dx / 2.
    return res
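A quick sanity check of the routine above (an added snippet, not part of the original lecture notes): compare it with numpy's built-in trapezoidal rule on a known integral, $\int_0^\pi \sin(x)dx = 2$.
x_chk = np.linspace(0., np.pi, 101)
print(integral_trap(np.sin(x_chk), x_chk))   # should be close to 2.0
print(np.trapz(np.sin(x_chk), x_chk))        # numpy reference value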
Explanation: Lecture 02 (continuation of Lecture 01)
Numerical integrals
$$g(x) = \int_{a}^{x} f(x')dx'$$
Is equivalent to solving a differential equation:
$$g'(x) = f(x)$$ with boundary conditions:
$$\int_{a}^{x}g'(x')dx' = \int_{a}^{x}f(x')dx'$$
$$g(x)-g(a) = \int_{a}^{x}f(x')dx'$$
Direct numerical integration
Trapezoidal rule
Idea: approximate the area under the curve by the area of a trapezoid between two points of the function
$$
\int_{x_{0}}^{x_{0}+\Delta x}f(x')dx'
$$
Expanding the integrand in a Taylor series we obtain:
$$
\int_{x_{0}}^{x_{0}+\Delta x}\left[
f(x_{0}) + f'(x_{o})(x'-x_{0}) + \frac{1}{2}f''(x_{0})(x'-x_{0})^{2} + \ldots
\right]dx'
$$
$$
= f(x_{0}\Delta x) + \frac{f'(x_{0})(x-x_{0})^{2}}{2}\biggr\vert_{x_{0}}^{x} + \frac{1}{2}f''(x_{0})\frac{(x-x_{0})^{3}}{3}\biggr\vert_{x_{0}}^{x} + \ldots
$$
$$
= f(x_{0})\Delta x + f'(x_{0})\frac{\Delta x^{2}}{2} + \frac{1}{6}f''(x_{0})\Delta x^{3} + \ldots
$$
$$
= \frac{\Delta x}{2}\left[
f(x_{0}) + \left(
f(x_{0}) + f'(x_{0})\Delta x + f''(x_{0})\frac{\Delta x^{2}}{2} + \ldots
\right) - f''(x_{0})\frac{\Delta x^{2}}{6}
\right]
$$
Note that the truncation error is of the same order in $\Delta x$ as the last term kept before truncating.
$$
\int_{x_0}^{x_{0}+\Delta x}f(x')dx' = \frac{\Delta x}{2}\left[
f(x_{0}) + f(x_{0}+\Delta x) + O(\Delta x^2)
\right] = \left(
f(x_{0}) + f(x_{0}+\Delta x)
\right)\frac{\Delta x}{2} + O(\Delta x^3)
$$
Where $O(\Delta x^3) \sim -f''(x_{0})\frac{\Delta x^3}{12}$
(figures go here)
Divide $[a,b]$ into $N = \frac{b-a}{\Delta x}$ segments.
$$
\int_{a}^{b}f(x)dx \sim \sum_{i=0}^{N-1}\left[
\frac{f(a+\Delta xi)+f(a+\Delta x(i+1))}{2}\Delta x + O^{*}(\Delta x^3)
\right]
$$
We want an upper bound for our error. Since we add up N segments we accumulate N local errors; each local error scales as $\Delta x^3$ and $N$ scales as $1/\Delta x$, so the total error ends up being $O(\Delta x^2)$.
$$
\sim \left[
\sum_{i=0}^{N-1}\frac{f(a+\Delta xi)+f(a+\Delta x(i+1))}{2}\Delta x
\right] + NO^{*}(\Delta x^3) \rightarrow O(\Delta x^2)
$$
$$
\int_{a}^{b} f(x)dx \sim \frac{f(a)\Delta x}{2} + \sum_{i=1}^{N-1}f(a+i\Delta x)\Delta x + \frac{f(b)\Delta x}{2} + O(\Delta x^2)
$$
The composite rule, assuming constant $\Delta x$
A simple implementation of this algorithm is the following:
End of explanation
def simpson(f, a, b, n):
    '''
    Numerical integration using Simpson's rule.
    Note that the interval is split into equally spaced subintervals.
    Parameters
    ----------
    f : function
        Integrand.
    a : int or double
        Lower limit of the integration interval.
    b : int or double
        Upper limit of the integration interval.
    n : int
        Number of subintervals. Must be even.
    Returns
    -------
    res : double
        Result of the integration.
    Raises
    ------
    ValueError
        If n is odd.
    '''
    if n % 2:  # the algorithm only works for even n, so reject odd values
        raise ValueError("n must be even (got n=%d)." % n)
    dx = float(b - a) / n   # grid spacing
    i = a                   # running abscissa
    res = 0.                # accumulator for the result
    sumaPar = sumaImp = 0.  # partial sums over even- and odd-indexed interior points
    k = 0
    while k < n/2-1:        # accumulate the interior points
        i += dx
        sumaImp += f(i)
        i += dx
        sumaPar += f(i)
        k += 1
    sumaImp += f(i+dx)      # add the last odd-indexed point
    res += dx/3.*(f(a) + 4 * sumaImp + 2 * sumaPar + f(b))
    return res
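A quick convergence check (an added snippet, not in the original notes): on $\int_0^{\pi}\sin(x)dx = 2$ the error of the composite Simpson rule should drop by roughly a factor of 16 each time n is doubled, consistent with the $O(\Delta x^4)$ error discussed in the explanation below.
for n_chk in [4, 8, 16, 32, 64]:
    err = abs(simpson(np.sin, 0., np.pi, n_chk) - 2.0)
    print(n_chk, err)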
Explanation: Simpson's rule
Idea: more evaluations (more terms of the expansion) in exchange for higher accuracy.
A sort of "double Taylor expansion" is carried out
$$
\int_{x_{0}}^{x_{0}+2\Delta x}f(x)dx = \int_{x_{0}}^{x_{0}+2\Delta x}\left[
f(x_{0} + f'(x_{0})(x-x_{0}) + f''(x_{0})\frac{(x-x_{0})^{2}}{2} + f'''(x_{0})\frac{(x-x_{0})^{3}}{6} + f^{\text{iv}}(x_{0})\frac{(x-x_{0})^{4}}{24} + \ldots
\right]
$$
$$
= f(x_{0})2\Delta x + f'(x_{0})\frac{(2\Delta x)^{2}}{2} + \frac{f''(x_{0})}{2}\frac{(2\Delta x)^{3}}{3} + \frac{f'''(x_{0})}{6}\frac{(2\Delta x)^{4}}{4} + \frac{f^{\text{iv}}(x_{0})}{24}\frac{(2\Delta x)^{5}}{5}
$$
$$
= \frac{\Delta x}{3}\left[
f(x_{0}) + 4f(x_{0}) + 4f'(x_{0})\Delta x + 4 f''(x_{0})\frac{\Delta x^2}{2} + 4f'''(x_{0})\frac{\Delta x^3}{6} \
+ f(x_{0}) + f'(x_{0})(2\Delta x) + f''(x_{0})\frac{(2\Delta x)^{2}}{2} + f'''(x_{0})\frac{(2\Delta x)^{3}}{6} + O(\Delta x^4)
\right]
$$
$$
= \frac{\Delta x}{3}\left[
f(x_{0}) + 4f(x_{0}+\Delta x) + f(x_{0} + 2\Delta x)
\right] + O(\Delta x^5)
$$
Note that for the composite rule the order of the error goes as $\Delta x^4$
Composite rule
$$
\int_{a}^{b}f(x)dx = \left[
\sum_{i=1}^{N-1}\frac{\Delta x}{3}\left[
f_{i-1} + 4f_{i} + f_{i+1}
\right]
\right] + O(\Delta x^4)
$$
Where $f_{i} = f(a+i\Delta x)$
Note that this method also assumes constant $\Delta x$.
(figure goes here)
Note that the rule above can also be written as:
$$
\int_{a}^{b}f(x)dx = \frac{\Delta x}{3}\left(
f(a) + 4\sum_{i=1}^{N/2-1}f_{2i+1}+2\sum_{i=1}^{N/2-1}f_{2i} + f(b)
\right)
$$
A simple implementation of this algorithm is the following:
End of explanation |
10,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regularized linear regression
Regularized linear regression adds constraints on the regression coefficients (weights) to keep the model from being over-optimized, i.e. to prevent overfitting. It is also known as the regularized method, the penalized method, or constrained least squares.
When a model is over-optimized, the magnitude of its coefficients also tends to grow excessively, so the constraint added by regularization usually limits the size of the coefficients. The following three methods are commonly used.
Ridge regression
Lasso regression
Elastic Net regression
Ridge regression
Ridge regression adds the squared sum of the weights as an additional constraint to be minimized.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum w_i^2
\end{eqnarray}
$$
$\lambda$ is a hyperparameter that balances the residual sum of squares against the additional constraint. A large $\lambda$ means stronger regularization and smaller weights; a small $\lambda$ means weaker regularization, and $\lambda = 0$ recovers ordinary linear regression.
Lasso regression
Lasso (Least Absolute Shrinkage and Selection Operator) regression adds the sum of the absolute values of the weights as an additional constraint to be minimized.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum | w_i |
\end{eqnarray}
$$
Elastic Net regression
Elastic Net regression constrains both the sum of absolute values and the sum of squares of the weights at the same time.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda_1 \sum | w_i | + \lambda_2 \sum w_i^2
\end{eqnarray}
$$
It has two hyperparameters, $\lambda_1$ and $\lambda_2$.
Regularized regression in statsmodels
The statsmodels package can estimate Elastic Net coefficients through the fit_regularized method of the OLS linear regression class.
http
Step1: If the parameter L1_wt is 0, the model is pure Ridge.
Step2: Conversely, if the parameter L1_wt is 1, the model is pure Lasso.
Step3: If L1_wt is between 0 and 1, the model is an Elastic Net.
Step4: Regularized regression in Scikit-Learn
The Scikit-Learn package provides separate Ridge, Lasso and ElasticNet classes for regularized regression. The objective function for each model is given below.
http
Step5: Advantages of regularized models
Regularized models keep the coefficients from changing drastically when the data used for the regression changes.
Step6: Difference between Ridge and Lasso
Ridge shrinks all weight coefficients together, whereas Lasso drives some weight coefficients to zero first.
<img src="https
Step7: The path method
The Lasso and ElasticNet classes provide a path method that automatically computes how the coefficients change as the hyperparameter alpha varies.
The lasso_path() and enet_path() functions perform the same task as the path method. | Python Code:
np.random.seed(0)
n_samples = 30
X = np.sort(np.random.rand(n_samples))
y = np.cos(1.5 * np.pi * X) + np.random.randn(n_samples) * 0.1
dfX = pd.DataFrame(X, columns=["x"])
dfX = sm.add_constant(dfX)
dfy = pd.DataFrame(y, columns=["y"])
df = pd.concat([dfX, dfy], axis=1)
model = sm.OLS.from_formula("y ~ x + I(x**2) + I(x**3) + I(x**4) + I(x**5) + I(x**6) + I(x**7) + I(x**8) + I(x**9)", data=df)
result1 = model.fit()
result1.params
def plot_statsmodels(result):
plt.scatter(X, y)
xx = np.linspace(0, 1, 1000)
dfxx = pd.DataFrame(xx, columns=["x"])
dfxx = sm.add_constant(dfxx)
plt.plot(xx, result.predict(dfxx).values)
plt.show()
plot_statsmodels(result1)
Explanation: Regularized linear regression
Regularized linear regression adds constraints on the regression coefficients (weights) to keep the model from being over-optimized, i.e. to prevent overfitting. It is also known as the regularized method, the penalized method, or constrained least squares.
When a model is over-optimized, the magnitude of its coefficients also tends to grow excessively, so the constraint added by regularization usually limits the size of the coefficients. The following three methods are commonly used.
Ridge regression
Lasso regression
Elastic Net regression
Ridge regression
Ridge regression adds the squared sum of the weights as an additional constraint to be minimized.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum w_i^2
\end{eqnarray}
$$
$\lambda$ is a hyperparameter that balances the residual sum of squares against the additional constraint. A large $\lambda$ means stronger regularization and smaller weights; a small $\lambda$ means weaker regularization, and $\lambda = 0$ recovers ordinary linear regression.
Lasso regression
Lasso (Least Absolute Shrinkage and Selection Operator) regression adds the sum of the absolute values of the weights as an additional constraint to be minimized.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda \sum | w_i |
\end{eqnarray}
$$
Elastic Net regression
Elastic Net regression constrains both the sum of absolute values and the sum of squares of the weights at the same time.
$$
\begin{eqnarray}
\text{cost}
&=& \sum e_i^2 + \lambda_1 \sum | w_i | + \lambda_2 \sum w_i^2
\end{eqnarray}
$$
It has two hyperparameters, $\lambda_1$ and $\lambda_2$.
Regularized regression in statsmodels
The statsmodels package can estimate Elastic Net coefficients through the fit_regularized method of the OLS linear regression class.
http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.fit_regularized.html
The hyperparameters are expressed through the parameters $\text{alpha}$ and $\text{L1_wt}$ as follows.
$$
0.5 \times \text{RSS}/N + \text{alpha} \times \big( 0.5 \times (1-\text{L1_wt})\sum w_i^2 + \text{L1_wt} \sum |w_i| \big)
$$
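To make the penalty term concrete, here is a minimal closed-form ridge sketch (an added illustration, not part of the original text): the minimizer of $\sum e_i^2 + \lambda \sum w_i^2$ is $w = (X^TX + \lambda I)^{-1}X^Ty$.
import numpy as np
def ridge_closed_form(X_mat, y_vec, lam):
    # closed-form ridge solution; lam is the regularization strength
    n_features = X_mat.shape[1]
    return np.linalg.solve(X_mat.T @ X_mat + lam * np.eye(n_features), X_mat.T @ y_vec)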
End of explanation
result2 = model.fit_regularized(alpha=0.01, L1_wt=0)
print(result2.params)
plot_statsmodels(result2)
Explanation: If the parameter L1_wt is 0, the model is pure Ridge.
End of explanation
result3 = model.fit_regularized(alpha=0.01, L1_wt=1)
print(result3.params)
plot_statsmodels(result3)
Explanation: Conversely, if the parameter L1_wt is 1, the model is pure Lasso.
End of explanation
result4 = model.fit_regularized(alpha=0.01, L1_wt=0.5)
print(result4.params)
plot_statsmodels(result4)
Explanation: If L1_wt is between 0 and 1, the model is an Elastic Net.
End of explanation
def plot_sklearn(model):
plt.scatter(X, y)
xx = np.linspace(0, 1, 1000)
plt.plot(xx, model.predict(xx[:, np.newaxis]))
plt.show()
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
poly = PolynomialFeatures(9)
model = make_pipeline(poly, LinearRegression()).fit(X[:, np.newaxis], y)
print(model.steps[1][1].coef_)
plot_sklearn(model)
model = make_pipeline(poly, Ridge(alpha=0.01)).fit(X[:, np.newaxis], y)
print(model.steps[1][1].coef_)
plot_sklearn(model)
model = make_pipeline(poly, Lasso(alpha=0.01)).fit(X[:, np.newaxis], y)
print(model.steps[1][1].coef_)
plot_sklearn(model)
model = make_pipeline(poly, ElasticNet(alpha=0.01, l1_ratio=0.5)).fit(X[:, np.newaxis], y)
print(model.steps[1][1].coef_)
plot_sklearn(model)
Explanation: Regularized regression in Scikit-Learn
The Scikit-Learn package provides separate Ridge, Lasso and ElasticNet classes for regularized regression. The objective function for each model is as follows.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
$$
\text{RSS} + \text{alpha} \sum w_i^2
$$
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
$$
0.5 \times \text{RSS}/N + \text{alpha} \sum |w_i|
$$
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
$$
0.5 \times \text{RSS}/N + 0.5 \times \text{alpha} \times \big(0.5 \times (1-\text{l1_ratio})\sum w_i^2 + \text{l1_ratio} \sum |w_i| \big)
$$
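A small sketch of the shrinkage effect (an added illustration, not from the original text), reusing the polynomial data and the sklearn classes already imported above: as alpha grows, the total magnitude of the fitted coefficients shrinks.
for alpha_chk in [1e-4, 1e-2, 1.0, 100.0]:
    m = make_pipeline(PolynomialFeatures(9), Ridge(alpha=alpha_chk)).fit(X[:, np.newaxis], y)
    print(alpha_chk, np.abs(m.steps[1][1].coef_).sum())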
End of explanation
X_train = np.c_[.5, 1].T
y_train = [.5, 1]
X_test = np.c_[-1, 3].T
np.random.seed(0)
models = {"LinearRegression": LinearRegression(),
"Ridge": Ridge(alpha=0.1)}
for i, (name, model) in enumerate(models.items()):
ax = plt.subplot(1, 2, i+1)
for _ in range(10):
this_X = .1 * np.random.normal(size=(2, 1)) + X_train
model.fit(this_X, y_train)
ax.plot(X_test, model.predict(X_test), color='.5')
ax.scatter(this_X, y_train, s=100, c='.5', marker='o', zorder=10)
model.fit(X_train, y_train)
ax.plot(X_test, model.predict(X_test), linewidth=3, color='blue', alpha=0.5)
ax.scatter(X_train, y_train, s=100, c='r', marker='D', zorder=10)
plt.title(name)
ax.set_xlim(-0.5, 2)
ax.set_ylim(0, 1.6)
Explanation: Advantages of regularized models
Regularized models keep the coefficients from changing drastically when the data used for the regression changes.
End of explanation
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target
ridge0 = Ridge(alpha=0).fit(X, y)
p0 = pd.Series(np.hstack([ridge0.intercept_, ridge0.coef_]))
ridge1 = Ridge(alpha=1).fit(X, y)
p1 = pd.Series(np.hstack([ridge1.intercept_, ridge1.coef_]))
ridge2 = Ridge(alpha=2).fit(X, y)
p2 = pd.Series(np.hstack([ridge2.intercept_, ridge2.coef_]))
pd.DataFrame([p0, p1, p2]).T
lasso0 = Lasso(alpha=0.0001).fit(X, y)
p0 = pd.Series(np.hstack([lasso0.intercept_, lasso0.coef_]))
lasso1 = Lasso(alpha=0.1).fit(X, y)
p1 = pd.Series(np.hstack([lasso1.intercept_, lasso1.coef_]))
lasso2 = Lasso(alpha=10).fit(X, y)
p2 = pd.Series(np.hstack([lasso2.intercept_, lasso2.coef_]))
pd.DataFrame([p0, p1, p2]).T
Explanation: Difference between Ridge and Lasso
Ridge shrinks all weight coefficients together, whereas Lasso drives some weight coefficients to zero first.
<img src="https://datascienceschool.net/upfiles/10a19727037b4898984a4330c1285486.png">
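A tiny check of that sparsity difference (an added snippet, not from the original text), using the diabetes fits above: Lasso sets some coefficients exactly to zero, Ridge almost never does.
print('ridge (alpha=2) zero coefficients:', (np.abs(ridge2.coef_) < 1e-8).sum())
print('lasso (alpha=10) zero coefficients:', (np.abs(lasso2.coef_) < 1e-8).sum())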
End of explanation
lasso = Lasso()
alphas, coefs, _ = lasso.path(X, y, alphas=np.logspace(-6, 1, 8))
df = pd.DataFrame(coefs, columns=alphas)
df
df.T.plot(logx=True)
plt.show()
Explanation: The path method
The Lasso and ElasticNet classes provide a path method that automatically computes how the coefficients change as the hyperparameter alpha varies.
The lasso_path() and enet_path() functions perform the same task as the path method.
End of explanation |
10,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons optimizer: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Building the model
Step3: Preparing the data
Step5: Defining a custom callback
Step6: Training and evaluation
Step7: Training and evaluation
Step8: Frobenius norm of the weights
Step9: Training and validation accuracy: CG vs. SGD | Python Code:
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
Explanation: TensorFlowアドオンオプティマイザ:ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_conditionalgradient"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This notebook demonstrates how to use the Conditional Gradient optimizer from the Addons package.
ConditionalGradient
Constraining the parameters of a neural network has been shown to be beneficial in training because of its underlying regularization effect. Parameters are often constrained via a soft penalty (which never guarantees constraint satisfaction) or via a projection operation (which is computationally expensive). The Conditional Gradient (CG) optimizer, on the other hand, enforces the constraints strictly without the need for an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. In this guide, you apply a Frobenius-norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a tensorflow API. More details of the optimizer can be found at https://arxiv.org/pdf/1803.06453.pdf.
Setup
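As a rough conceptual sketch of that update (an added illustration, not part of the tutorial and not the tfa implementation): with a Frobenius-norm ball of radius lambda as the constraint set, one conditional-gradient (Frank-Wolfe) step moves the weights toward the vertex of the ball that minimizes the linearized loss, so the iterate never leaves the constraint set.
import numpy as np
def cg_step_sketch(w, grad, lam, gamma=0.1):
    # vertex of the Frobenius ball that minimizes <grad, s>
    s = -lam * grad / (np.linalg.norm(grad) + 1e-12)
    # convex combination keeps the new weights inside the constraint set
    return (1.0 - gamma) * w + gamma * s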
End of explanation
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
Explanation: Building the model
End of explanation
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
Explanation: Preparing the data
End of explanation
def frobenius_norm(m):
This function is to calculate the frobenius norm of the matrix of all
layer's weight.
Args:
m: is a list of weights param for each layers.
total_reduce_sum = 0
for i in range(len(m)):
total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
norm = total_reduce_sum**0.5
return norm
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
frobenius_norm(model_1.trainable_weights).numpy()))
Explanation: Defining a custom callback
End of explanation
# Compile the model
model_1.compile(
optimizer=tfa.optimizers.ConditionalGradient(
learning_rate=0.99949, lambda_=203), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_cg = model_1.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[CG_get_weight_norm])
Explanation: Training and evaluation: using CG as the optimizer
Simply replace a typical keras optimizer with the new tfa optimizer.
End of explanation
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
optimizer=tf.keras.optimizers.SGD(0.01), # Utilize SGD optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_sgd = model_2.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[SGD_get_weight_norm])
Explanation: Training and evaluation: using SGD as the optimizer
End of explanation
plt.plot(
CG_frobenius_norm_of_weight,
color='r',
label='CG_frobenius_norm_of_weights')
plt.plot(
SGD_frobenius_norm_of_weight,
color='b',
label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
Explanation: Frobenius norm of the weights: CG vs. SGD
The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer of the target function. Here we compare the regularizing effect of CG with the SGD optimizer, which has no Frobenius-norm regularization.
End of explanation
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
Explanation: Training and validation accuracy: CG vs. SGD
End of explanation |
10,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Balance Scale Classification - UCI</h1>
Analysis of the <a href="http
Step1: Check for Class Imbalance
Step2: Feature Importances
Now we check for feature importances. However, this requires all feature values to be positive. | Python Code:
import pandas as pd
import numpy as np
%pylab inline
pylab.style.use('ggplot')
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/balance-scale/balance-scale.data'
balance_df = pd.read_csv(url, header=None)
balance_df.columns = ['class_name', 'left_weight', 'left_distance', 'right_weight', 'right_distance']
balance_df.head()
Explanation: <h1 align="center">Balance Scale Classification - UCI</h1>
Analysis of the <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/balance-scale">UCI Balance Scale Dataset. </a>
Get the Data
End of explanation
counts = balance_df['class_name'].value_counts()
counts.plot(kind='bar')
Explanation: Check for Class Imbalance
End of explanation
from sklearn.feature_selection import f_classif
features = balance_df.drop('class_name', axis=1)
names = balance_df['class_name']
# check for negative feature values
features[features < 0].sum(axis=0)
t_stats, p_vals = f_classif(features, names)
feature_importances = pd.DataFrame(np.column_stack([t_stats, p_vals]),
index=features.columns.copy(),
columns=['t_stats', 'p_vals'])
feature_importances.plot(subplots=True, kind='bar')
plt.xticks(rotation=30)
import seaborn as sns
for colname in balance_df.columns.drop('class_name'):
fg = sns.FacetGrid(col='class_name', data=balance_df)
fg = fg.map(pylab.hist, colname)
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.naive_bayes import GaussianNB
estimator = GaussianNB()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=12345)
f1 = cross_val_score(estimator, features, names, cv=cv, scoring='f1_micro')
pd.Series(f1).plot(title='F1 Score (Micro)', kind='bar')
estimator = GaussianNB()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=12345)
f1 = cross_val_score(estimator, features, names, cv=cv, scoring='accuracy')
pd.Series(f1).plot(title='Accuracy', kind='bar')
Explanation: Feature Importances
Now we check for feature importances. However, this requires all feature values to be positive.
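As a cross-check (an added sketch, not in the original analysis), a different importance measure such as mutual information can be computed from the same features/names objects defined above:
from sklearn.feature_selection import mutual_info_classif
mi = pd.Series(mutual_info_classif(features, names, random_state=0), index=features.columns)
print(mi.sort_values(ascending=False))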
End of explanation |
10,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Tutorial
Step1: Hello World!
Step2: # starts a comment line.
f(x,y) is a function call. f is the function name, x and y are the arguments.
Values, Types, Variables
Step3: RHS of = (10.0, "Hello World") are values.
LHS of = (s, x, ...) are variables.
Values have a type.
We can assign more than one variable in a single assignment.
type(x) can be used to determine the type of x.
variables can change their type.
Strings
Step4: Python has strings.
Various functions to modify strings.
These functions return the new value.
The strings itself are immutable!
Ints and Floats
Step5: "Standard" math operators exist
Step6: Boolean values are True and False.
and, or and not are logical operators.
==, !=, <, >, <=, >= are comparison operators.
Formatted Output
Step7: str.format() to format output (was "%")
{} is replaced by function arguments in order
{1} explicitely denotes the second argument
{
Step8: Python provides lists.
[] is an emtpy list.
list.append(value) can be used to append anything to a list.
Nested lists are possible.
range(start, end) creates a list of numbers
len(list) returns the length of the list.
list[i] to access the ith element of the list (start counting with 0).
Negative indices denote elements from the end.
Slices can be used to take sublists.
A list can be unpacked in an assignment.
_ can be used to ignore an element.
in to find an element in a list.
Control Structures
Step9: while, if, else, for control structures
Indentation is significant! All lines with the same indent belong to one block
Step10: def to define a function
return defines the return values of the function
* to unpack lists/take argument lists
Variable Scoping
Step11: Local variables override global variables.
Global variables are not modified when changing local variables.
Still, in a function definition, global variables can be accessed (i.e., they are copied).
When the global variable should be modified inside a function, explicitely declare it with global.
Variables contain References
Step12: Variables are references to values.
When an value is modified, all variables that reference it get changed.
Basic types are immutable, i.e. they cannot be modiified.
When they are assigned to another variable, a new copy is created.
Functions modifying them return a new, modified version.
Not so with other types!
Classes and Objects
Classical Python programs contain variables that have a value of a certain data type, and functions.
Observation
Step13: file is a class.
When using a class like a function, this will create an object (a.k.a. an instance of a class)
Step14: All types in Python are classes, all values are objects!
Defining a Class
Step15: Create a class using the keyword class.
Methods are nested functions.
A method gets the object itself as first argument (usually called self).
Methods can access and modify fields (a.k.a. instance variables).
Special Methods
Step16: __init__ is the constructor.
Other special functions can be used to allow operators etc.
Step17: Ultimately, the language intrinsics of Python are
assignments
function definitions
class definitions
method or functions calls
The rest (e.g. all operators) are "syntactical sugar" and use special methods (e.g. __add__).
Step18: Inheritance
Step19: A class can inherit all methods from another class.
The inherited methods can be overridden.
Allows to extend functionality of a class.
Modules | Python Code:
from __future__ import print_function
Explanation: Python Tutorial
End of explanation
# Output "Hello World!"
print("Hello, World!")
print("Hello World!", 10.0)
Explanation: Hello World!
End of explanation
# define a variable
s = "Hello World!"
x = 10.0
i = 42
# define 2 variables at once
a,b = 1,1
# output the types
print(type(10.0), type(42), type("Hello World!"))
print(type(s), type(x))
# now change the type of s!
s = 3
print(type(s))
Explanation: # starts a comment line.
f(x,y) is a function call. f is the function name, x and y are the arguments.
Values, Types, Variables
End of explanation
name = "olaf"
print(len(name))
print(name.capitalize())
print(name.upper())
print(name[2])
Explanation: RHS of = (10.0, "Hello World") are values.
LHS of = (s, x, ...) are variables.
Values have a type.
We can assign more than one variable in a single assignment.
type(x) can be used to determine the type of x.
variables can change their type.
Strings
End of explanation
x = 2.0
i = 42
print(type(x), type(i))
# math expressions
y = (-2*x**3 + x**.5) / (x-1.0)
n = (-2*i**3 + 23)
print(y)
print(n)
# mixed expressions
y = (i*x + x**2)
print(y)
# division!
print("17 / 2 =", 17/2)
print("17. / 2 =", 17./2)
print("17 % 2 =", 17%2)
print(float(i)/2)
print((i+0.)/2)
# unary operators
i += 10
i -= 7
x /= 3.0
print(i, x)
print(3 * 10)
print("3" * 10)
Explanation: Python has strings.
Various functions to modify strings.
These functions return the new value.
The strings themselves are immutable!
Ints and Floats
End of explanation
truth = True
lie = False
print(type(truth))
print(truth)
print(not lie and (truth or lie))
truth = (i == 42)
truth = lie or (i == n) and (x != y) or (x < y)
Explanation: "Standard" math operators exist: +, -, *, /, **
Be careful with /: integer division when both numbers are integers! (only Python 2!)
Unary operators +=, -=, ...
Operators may behave differently depending on the type.
Boolean Expressions
End of explanation
print("x={}, y={}".format(x, y))
print("y={1}, x={0}".format(x, y))
x = 3.14159
print("x={:.4}, x={:5.3}".format(x, y))
Explanation: Boolean values are True and False.
and, or and not are logical operators.
==, !=, <, >, <=, >= are comparison operators.
Formatted Output
End of explanation
# create an empty list
l = []
# append different elements
l.append("Hallo")
l.append("Welt!")
l.append(42)
l.append(23)
l.append([1.0, 2.0])
# create a number range
ns = range(20)
# output the list
print(l)
print(ns)
print(range(7,15))
print(len(ns))
# access elements
print(l[0])
print(l[2])
print(l[-1])
# take slices
print(ns)
print(ns[2:7])
print(ns[17:])
print(ns[:7])
# with stride
print(ns[7:16:2])
# unpack the list
print(l)
s1, s2, n1, n2, _ = l
print(s1, s2)
print(17 in ns)
print("Hallo" in l)
Explanation: str.format() to format output (was "%")
{} is replaced by function arguments in order
{1} explicitely denotes the second argument
{:5.3} can be used to format the output in more detail (here: precision)
Lists
End of explanation
fib = []
a, b = 0, 1
while b < 100:
a, b = b, (a+b)
fib.append(a)
print(fib)
for n in fib:
if n % 3 == 0:
print("{} is modulo 3!".format(n))
elif n % 2 == 0:
print("{} is even".format(n))
else:
print("{} is odd".format(n))
Explanation: Python provides lists.
[] is an emtpy list.
list.append(value) can be used to append anything to a list.
Nested lists are possible.
range(start, end) creates a list of numbers
len(list) returns the length of the list.
list[i] to access the ith element of the list (start counting with 0).
Negative indices denote elements from the end.
Slices can be used to take sublists.
A list can be unpacked in an assignment.
_ can be used to ignore an element.
in to find an element in a list.
Control Structures
End of explanation
# define a function
def f(x, c):
return x**2-c, x**2+c
print(f(3.0, 1.0))
print(f(5.0, 2.0))
# with default argument
def f(x, c=1.0):
return x**2-c, x**2+c
print(f(3.0))
print(f(5.0, 2.0))
# with docstring
def f(x):
"Computes the square of x."
return x**2
help(f)
Explanation: while, if, else, for control structures
Indentation is significant! All lines with the same indent belong to one block
: at end of previous line starts a new block
while is a loop with an end condition
for is a loop over elements of a list (or other generators)
Function Definitions
End of explanation
def f(x):
print("i =", i)
print("x =", x, "(in function, before add)")
x += 2
print("x =", x, "(in function, after add)")
return x
x = 3
i = 42
print("x =", x, "(in script, before call)")
y = f(x)
print("x =", x, "(in script, after function call)")
print("y =", y)
x = 3
def f():
global x
x += 2
print("x =", x, "(before call)")
f()
print("x =", x, "(after function call)")
Explanation: def to define a function
return defines the return values of the function
* to unpack lists/take argument lists
Variable Scoping
End of explanation
l = ["Hello", "World"]
ll = [l, l]
print(ll)
l[1] = "Olaf"
print(ll)
ll[0][1] = "Axel"
print(ll)
Explanation: Local variables override global variables.
Global variables are not modified when changing local variables.
Still, global variables can be read inside a function definition (assigning to the name, however, creates a new local variable instead of changing the global one).
When the global variable should be modified inside a function, explicitly declare it with global.
Variables contain References
End of explanation
# Create an object of class "file"
f = file('test.txt', 'w')
# Call the method "write" on the object
f.write('Hello')
# Close he file
f.close()
Explanation: Variables are references to values.
When a value is modified, all variables that reference it see the change.
Basic types are immutable, i.e. they cannot be modified.
When they are assigned to another variable, a new copy is created.
Functions modifying them return a new, modified version.
Not so with other types!
Classes and Objects
Classical Python programs contain variables that have a value of a certain data type, and functions.
Observation: functions are tied to a given data type.
For example, print should behave differently when used with a string, and integer, or a file. The same basically holds for all functions.
Idea: Tie functions to the data type also syntactically
The world is made of objects (a.k.a. value)
An object is an instance of a class (a.k.a. data type)
A class provides methods (a.k.a. as functions) that can be used to do anything with these objects.
End of explanation
s = "Hello {}"
print(s.format("World!"))
print("Hello {}".format("World"))
x = 42.0
print(x.is_integer())
print((42.0).is_integer())
# same as l=[]
l = list()
l.append(42)
l.append(23)
Explanation: file is a class.
When using a class like a function, this will create an object (a.k.a. an instance of a class): f = file('test.txt', 'w')
Several instances/objects of a class can be created.
An object has methods (a.k.a. class functions) that can be used to do something with the object.
Methods are called like object.method(): f.write('Hello')
Everything is an object
End of explanation
class Circle:
"This class represents a circle."
def create(self, r):
"Generate a circle."
self.radius = r
def area(self):
"Compute the area of the circle."
return 3.14159 * self.radius**2
help(Circle)
# create two circles
c1 = Circle()
c1.create(2.0)
c2 = Circle()
c2.create(3.0)
print(c1.area())
print(c2.area())
print(c2.radius)
Explanation: All types in Python are classes, all values are objects!
Defining a Class
End of explanation
class Circle:
pi = 3.14159
# __init__ is the constructor
def __init__(self, r):
self.radius = r
def area(self):
return Circle.pi * self.radius**2
# define operator "+"
def __add__(self, other):
new = Circle(((self.area() + other.area())/3.14159)**0.5)
return new
# define how to convert it to a string (e.g. to print it)
def __str__(self):
return "I am a circle with radius {}.".format(self.radius)
c1 = Circle(2.0)
c2 = Circle(3.0)
print(c1.area())
print(c2.radius)
# We have defined "__add__", so we can add two circles
c3 = c1 + c2
print(c3.radius)
print(c3.area())
# We have defined "__str__", so we can print a circle
print(c1)
Explanation: Create a class using the keyword class.
Methods are nested functions.
A method gets the object itself as first argument (usually called self).
Methods can access and modify fields (a.k.a. instance variables).
Special Methods
End of explanation
# same a + 23
a = 19
print(a.__add__(23))
# same as "Hello Olaf!"[6:10]
print("Hello Olaf!".__getslice__(6, 10))
Explanation: __init__ is the constructor.
Other special functions can be used to allow operators etc.
End of explanation
class Polynomial:
"Represents a polynomial p(x)=a*x**2 + b*x + c."
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
# allows the object to be used as a function
def __call__(self, x):
return self.a*x**2 + self.b*x + self.c
p = Polynomial(3.0, 2.0, 1.0)
print(p(1.0))
Explanation: Ultimately, the language intrinsics of Python are
assignments
function definitions
class definitions
method or functions calls
The rest (e.g. all operators) are "syntactical sugar" and use special methods (e.g. __add__).
End of explanation
class MyCircle(Circle):
def __init__(self, r = 1.0, color = "red"):
Circle.__init__(self, r)
self.color = color
def __str__(self):
return "I am a {} circle with radius {} and area {}.".format(self.color, self.radius, self.area())
c1 = MyCircle()
c2 = MyCircle(2.0, "green")
print(c1)
print(c2)
print(c1 + c2)
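A note on style (an added aside, not from the tutorial): in Python 3 the parent constructor is usually invoked through super() rather than by naming the base class explicitly, with the same effect as the Circle.__init__ call above.
class MyCircle2(Circle):
    def __init__(self, r=1.0, color="red"):
        super().__init__(r)   # equivalent to Circle.__init__(self, r)
        self.color = color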
Explanation: Inheritance
End of explanation
import math
print(math.pi)
print(math.sin(math.pi))
import sys
print("Hello World!", file=sys.stderr)
from math import sin, pi
print(sin(pi))
from math import *
print(log(pi))
Explanation: A class can inherit all methods from another class.
The inherited methods can be overridden.
Allows to extend functionality of a class.
Modules
End of explanation |
10,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index file visualization
This notebook shows an easy way to represent the In Situ data positions using the index files.<br>
For this visualization of a sample <i>index_latest.txt</i> dataset of the Copernicus Marine Environment Monitoring Service.
Step1: Import packages
We use the two packages
Step2: Selection criteria
Here we show how the data positions can be selected prior to the plot.<br>
The example explains the selection by data providers, but that can be generalised to other properties.
Provider
The list dataprovider contains the name of the providers we want to keep for the plot.
Step3: Here we could also add something for the time or space domain.
Load and prepare data
Since the <i>index_latest.txt</i> is a formatted file, we use the numpy function <a href="http
Step4: For each data file, the positions are defined as a bounding box. <br>
To define the position shown on the map, we use the mean of the stored <i>geospatial_lat/lon_min/max</i> for each dataset.
Step5: Select by data provider
We create a list of indices corresponding to the entries with a provider belonging to the list specified at the beginning.
Step6: Could do intersection of the list, but for that we need to specify the provider name as specified in the index file.
Visualization
Finally, we create the map object.<br>
The map is centered on the Mediterranean Sea, this has to be changed according to the region of interest and the index file we consider.
Step7: Finally we can create the map where we will see the locations of the platforms as well as the corresponding file names. | Python Code:
indexfile = "datafiles/index_latest.txt"
Explanation: Index file visualization
This notebook shows an easy way to represent the In Situ data positions using the index files.<br>
For this visualization of a sample <i>index_latest.txt</i> dataset of the Copernicus Marine Environment Monitoring Service.
End of explanation
import numpy as np
import folium
Explanation: Import packages
We use the two packages:
* <a href="https://github.com/python-visualization/folium">folium</a> for the visualization and
* <a href="http://www.numpy.org/">numpy</a> for the data reading / processing.
End of explanation
dataproviderlist = ['IEO', 'INSTITUTO ESPANOL DE OCEANOGRAFIA', 'SOCIB']
Explanation: Selection criteria
Here we show how the data positions can be selected prior to the plot.<br>
The example explains the selection by data providers, but that can be generalised to other properties.
Provider
The list dataprovider contains the name of the providers we want to keep for the plot.
End of explanation
dataindex = np.genfromtxt(indexfile, skip_header=6, unpack=True, delimiter=',', dtype=None, \
names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters'])
Explanation: Here we could also add something for the time or space domain.
Load and prepare data
Since the <i>index_latest.txt</i> is a formatted file, we use the numpy function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html">genfromtxt</a> to extract the data from the document.
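A roughly equivalent load with pandas (an alternative sketch, not part of the original notebook) would be:
import pandas as pd
cols = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
        'geospatial_lon_min', 'geospatial_lon_max',
        'time_coverage_start', 'time_coverage_end',
        'provider', 'date_update', 'data_mode', 'parameters']
df_index = pd.read_csv(indexfile, skiprows=6, names=cols)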
End of explanation
lon_min = dataindex['geospatial_lon_min']
lon_max = dataindex['geospatial_lon_max']
lat_min = dataindex['geospatial_lat_min']
lat_max = dataindex['geospatial_lat_max']
lonmean, latmean = 0.5*(lon_min + lon_max), 0.5*(lat_min + lat_max)
Explanation: For each data file, the positions are defined as a bounding box. <br>
To define the position shown on the map, we use the mean of the stored <i>geospatial_lat/lon_min/max</i> for each dataset.
End of explanation
indexlist = []
# use 'idx' as the loop variable so we do not shadow the numpy alias 'np'
for idx, provider in enumerate(dataindex['provider']):
    matching = [s for s in dataproviderlist if s in provider]
    if matching:
        indexlist.append(idx)
Explanation: Select by data provider
We create a list of indices corresponding to the entries with a provider belonging to the list specified at the beginning.
End of explanation
map = folium.Map(location=[39.5, 2], zoom_start=8)
cntr = 0
for i in indexlist:
curr_data = dataindex[i]
link = curr_data[1]
last_idx_slash = link.rfind('/')
ncdf_file_name = link[last_idx_slash+1::]
folium.Marker( location = [latmean[i], lonmean[i]], popup=ncdf_file_name).add_to(map)
Explanation: Could do intersection of the list, but for that we need to specify the provider name as specified in the index file.
Visualization
Finally, we create the map object.<br>
The map is centered on the Mediterranean Sea, this has to be changed according to the region of interest and the index file we consider.
End of explanation
map
Explanation: Finally we can create the map where we will see the locations of the platforms as well as the corresponding file names.
End of explanation |
10,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load data
Step1: Semantic transformations
http
Step2: TF-IDF + LSI
Step3: Save corpora
Step4: Transform "unseen" documents
What is the best way to get LDA scores on a corpus used for training a model?
Step5: Save models
Step6: Evaluation
Read insightful intro at http
Step7: Split doc
We'll split each document into two parts, and check that 1) topics of the first half are similar to topics of the second 2) halves of different documents are mostly dissimilar
Step8: How to use on corpus
Just convert the list of tokens to a bag-of-words, then send it to the model
Step9: Visualization
Following | Python Code:
# Load pre-saved BoW
# Save BoW
user_dir = os.path.expanduser('~/cltk_data/user_data/')
try:
os.makedirs(user_dir)
except FileExistsError:
pass
bow_path = os.path.join(user_dir, 'bow_lda_gensim.mm')
mm_corpus = gensim.corpora.MmCorpus(bow_path)
print(mm_corpus)
print(next(iter(mm_corpus)))
Explanation: Load data
End of explanation
# Load the id2word dictionary that was saved earlier for reuse
with open(os.path.expanduser('~/cltk_data/user_data/tlg_bow_id2word.dict'), 'rb') as file_open:
id2word_tlg = pickle.load(file_open)
# Quick testing using just a part of the corpus
# use fewer documents during training, LDA is slow
clipped_corpus = gensim.utils.ClippedCorpus(mm_corpus, 100)
%time lda_model = gensim.models.LdaModel(clipped_corpus,
num_topics=10,
id2word=id2word_tlg,
passes=4)
lda_model.print_topics(-1) # print a few most important words for each LDA topic
Explanation: Semantic transformations
http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html#Semantic-transformations
LDA
End of explanation
# first train tfidf model
# this modifies the feature weights of each word
%time tfidf_model = gensim.models.TfidfModel(mm_corpus, id2word=id2word_tlg)
# then run lsi, which reduces dimensionality
%time lsi_model = gensim.models.LsiModel(tfidf_model[mm_corpus], id2word=id2word_tlg, num_topics=200)
# for the first doc of the TLG corpus, here are the LSI scores for each of the 200 topics
print(next(iter(lsi_model[tfidf_model[mm_corpus]])))
Explanation: TI-IDF + LSI
End of explanation
# cache the transformed corpora to disk, for use in later notebooks
path_lda = os.path.join(user_dir, 'gensim_tlg_lda.mm')
path_tfidf = os.path.join(user_dir, 'gensim_tlg_tfidf.mm')
path_lsi= os.path.join(user_dir, 'gensim_tlg_lsa.mm')
%time gensim.corpora.MmCorpus.serialize(path_lda, lda_model[mm_corpus])
%time gensim.corpora.MmCorpus.serialize(path_tfidf, tfidf_model[mm_corpus])
%time gensim.corpora.MmCorpus.serialize(path_lsi, lsi_model[tfidf_model[mm_corpus]])
Explanation: Save corpora
End of explanation
# LDA
def tokenize(text):
# https://radimrehurek.com/gensim/utils.html#gensim.utils.simple_preprocess
tokens = [token for token in simple_preprocess(text, deacc=True)]
return [token for token in tokens if token not in STOPS_LIST]
doc = "ἐπειδὴ πᾶσαν πόλιν ὁρῶμεν κοινωνίαν τινὰ οὖσαν καὶ πᾶσαν κοινωνίαν ἀγαθοῦ τινος ἕνεκεν συνεστηκυῖαν (τοῦ γὰρ εἶναι δοκοῦντος ἀγαθοῦ χάριν πάντα πράττουσι πάντες), δῆλον ὡς πᾶσαι μὲν ἀγαθοῦ τινος στοχάζονται, μάλιστα δὲ [5] καὶ τοῦ κυριωτάτου πάντων ἡ πασῶν κυριωτάτη καὶ πάσας περιέχουσα τὰς ἄλλας. αὕτη δ᾽ ἐστὶν ἡ καλουμένη πόλις καὶ ἡ κοινωνία ἡ πολιτική. ὅσοι μὲν οὖν οἴονται πολιτικὸν καὶ βασιλικὸν καὶ οἰκονομικὸν καὶ δεσποτικὸν εἶναι τὸν αὐτὸν οὐ καλῶς λέγουσιν (πλήθει γὰρ καὶ ὀλιγότητι νομίζουσι [10] διαφέρειν ἀλλ᾽ οὐκ εἴδει τούτων ἕκαστον, οἷον ἂν μὲν ὀλίγων, δεσπότην, ἂν δὲ πλειόνων, οἰκονόμον, ἂν δ᾽ ἔτι πλειόνων, πολιτικὸν ἢ βασιλικόν, ὡς οὐδὲν διαφέρουσαν μεγάλην οἰκίαν ἢ μικρὰν πόλιν: καὶ πολιτικὸν δὲ καὶ βασιλικόν, ὅταν μὲν αὐτὸς ἐφεστήκῃ, βασιλικόν, ὅταν [15] δὲ κατὰ τοὺς λόγους τῆς ἐπιστήμης τῆς τοιαύτης κατὰ μέρος ἄρχων καὶ ἀρχόμενος, πολιτικόν: ταῦτα δ᾽ οὐκ ἔστιν ἀληθῆ): δῆλον δ᾽ ἔσται τὸ λεγόμενον ἐπισκοποῦσι κατὰ τὴν ὑφηγημένην μέθοδον. ὥσπερ γὰρ ἐν τοῖς ἄλλοις τὸ σύνθετον μέχρι τῶν ἀσυνθέτων ἀνάγκη διαιρεῖν (ταῦτα γὰρ ἐλάχιστα [20] μόρια τοῦ παντός), οὕτω καὶ πόλιν ἐξ ὧν σύγκειται σκοποῦντες ὀψόμεθα καὶ περὶ τούτων μᾶλλον, τί τε διαφέρουσιν ἀλλήλων καὶ εἴ τι τεχνικὸν ἐνδέχεται λαβεῖν περὶ ἕκαστον τῶν ῥηθέντων."
doc = ' '.join(simple_preprocess(doc))
# transform text into the bag-of-words space
bow_vector = id2word_tlg.doc2bow(tokenize(doc))
print([(id2word_tlg[id], count) for id, count in bow_vector])
# transform into LDA space
lda_vector = lda_model[bow_vector]
print(lda_vector)
# print the document's single most prominent LDA topic
print(lda_model.print_topic(max(lda_vector, key=lambda item: item[1])[0]))
# transform into LSI space
lsi_vector = lsi_model[tfidf_model[bow_vector]]
print(lsi_vector)
# print the document's single most prominent LSI topic (not interpretable like LDA!)
print(lsi_model.print_topic(max(lsi_vector, key=lambda item: abs(item[1]))[0]))
Explanation: Transform "unseen" documents
What is the best way to get LDA scores on a corpus used for training a model?
End of explanation
path_lda = os.path.join(user_dir, 'gensim_tlg_lda.model')
path_tfidf = os.path.join(user_dir, 'gensim_tlg_tfidf.model')
path_lsi= os.path.join(user_dir, 'gensim_tlg_lsa.model')
# store all trained models to disk
lda_model.save(path_lda)
lsi_model.save(path_lsi)
tfidf_model.save(path_tfidf)
Explanation: Save models
End of explanation
# select top 50 words for each of the 20 LDA topics
top_words = [[word for word, _ in lda_model.show_topic(topicno, topn=50)] for topicno in range(lda_model.num_topics)]
print(top_words)
# get all top 50 words in all 20 topics, as one large set
all_words = set(itertools.chain.from_iterable(top_words))
print("Can you spot the misplaced word in each topic?")
# for each topic, replace a word at a different index, to make it more interesting
replace_index = np.random.randint(0, 10, lda_model.num_topics)
replacements = []
for topicno, words in enumerate(top_words):
other_words = all_words.difference(words)
replacement = np.random.choice(list(other_words))
replacements.append((words[replace_index[topicno]], replacement))
words[replace_index[topicno]] = replacement
print("%i: %s" % (topicno, ' '.join(words[:10])))
print("Actual replacements were:")
print(list(enumerate(replacements)))
Explanation: Evaluation
Read the insightful intro at http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html#Evaluation
Word intrusion
For each trained topic, they take its first ten words, then substitute one of them with another, randomly chosen word (intruder!) and see whether a human can reliably tell which one it was. If so, the trained topic is topically coherent (good); if not, the topic has no discernible theme (bad)
End of explanation
# this function first defined in pt. 1
def iter_tlg(tlg_dir):
file_names = os.listdir(tlg_dir)
for file_name in file_names:
file_path = os.path.join(tlg_dir, file_name)
with open(file_path) as file_open:
file_read = file_open.read()
tokens = tokenize(file_read)
# ignore short docs
if len(tokens) < 50:
continue
yield file_name, tokens
# evaluate on 1k documents **not** used in LDA training
tlg_preprocessed = os.path.expanduser('~/cltk_data/greek/text/tlg/plaintext/')
doc_stream = (tokens for _, tokens in iter_tlg(tlg_preprocessed)) # generator
test_docs = list(itertools.islice(doc_stream, 100, 200)) # ['πανυ', 'καλως', ...], [...], ...]
def intra_inter(model, test_docs, num_pairs=10000):
# split each test document into two halves and compute topics for each half
part1 = [model[id2word_tlg.doc2bow(tokens[: len(tokens) // 2])] for tokens in test_docs]
part2 = [model[id2word_tlg.doc2bow(tokens[len(tokens) // 2 :])] for tokens in test_docs]
# print computed similarities (uses cossim)
print("average cosine similarity between corresponding parts (higher is better):")
print(np.mean([gensim.matutils.cossim(p1, p2) for p1, p2 in zip(part1, part2)]))
random_pairs = np.random.randint(0, len(test_docs), size=(num_pairs, 2))
print("average cosine similarity between {} random parts (lower is better):".format(num_pairs))
print(np.mean([gensim.matutils.cossim(part1[i[0]], part2[i[1]]) for i in random_pairs]))
print("LDA results:")
intra_inter(lda_model, test_docs)
print("LSI results:")
intra_inter(lsi_model, test_docs)
Explanation: Split doc
We'll split each document into two parts, and check that 1) topics of the first half are similar to topics of the second 2) halves of different documents are mostly dissimilar
End of explanation
for title, tokens in iter_tlg(tlg_preprocessed):
#print(title, tokens[:10]) # print the article title and its first ten tokens
print(title)
print(lda_model[id2word_tlg.doc2bow(tokens)])
print('')
Explanation: How to use on corpus
Just convert the list of tokens to a bag-of-words vector, then send it to the model: print(lda_model[id2word_tlg.doc2bow(tokens)])
End of explanation
lda_model.show_topics()
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(lda_model, mm_corpus, id2word_tlg)
Explanation: Visualization
Following: http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb
End of explanation |
10,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.1 The Weiner Process
A Weiner process, $W(t)\,$, is a continuos-time stocastic process. By definition, the value of the process at time $t$ is
Step1: Or expressing $D$ in $\textrm{nm}^2 /\mu s$
Step2: We can also estimate $D$ experimentally from the knowledge of the PSF and the diffusion time $\tau_{spot}$
$$S_{spot} = \sqrt{2DN\tau_{spot}} \quad \rightarrow \quad D = \frac{S_{spot}^2}{2N\tau_{spot}}$$
Putting some reasonable number we obtain
Step3: not very different from what we obtained before from the viscosity model.
Examples
How far we travel in X seconds (hint standard deviation of displacement)?
Step4: How long we need to diffuse an X distance? | Python Code:
import numpy as np
d = 5e-9 # particle radius in meters
eta = 1.0e-3 # viscosity of water in SI units (Pascal-seconds) at 293 K
kB = 1.38e-23 # Boltzmann constant
T = 293 # Temperature in degrees Kelvin
D = kB*T/(3*np.pi*eta*d) # [m^2 / s]
D
Explanation: 1.1 The Weiner Process
A Weiner process, $W(t)\,$, is a continuos-time stocastic process. By definition, the value of the process at time $t$ is:
$$W(t) \sim \mathcal{N}(0,t)$$
where $\mathcal{N}(0,t)$ is a Normally-distributed random variable (RV) with $\mu$=0 and $\sigma^2=t$.
From the definition follows that $W(0)=0$.
Also, for any time instant $t$ and time delay $\tau$ >0 the following is true:
$$W(t+\tau)-W(t) \sim \mathcal{N}(0,\tau)$$
1.2 Brownian Motion
In Brownian motion of a freely diffusing particle, the mean squared displacement of a particle $\langle|\vec{r}(t)-\vec{r}(t+\tau)|^2\rangle$ is proportional to the time interval $\tau$ according to
$$\langle|\vec{r}(t)-\vec{r}(t+\tau)|^2\rangle = 2 D N \tau$$
$\vec{r}(t)$ position at time $t$
$N$ number of dimensions ($N$=3 for 3D simulations)
$D$ diffusion coefficient
$\tau$ time interval.
1.3 Brownian Motion as a Weiner Process
When using a Weiner process to describe a Brownian motion we must set a physical link between
the variance of the Weineer process and the diffusion coefficient.
Remembering that
$$k \mathcal{N}(\mu,\sigma^2) = \mathcal{N}(\mu,k\sigma^2)$$
if we build a process in which "dispacements" are normally distributed with variance equal to $2DN\tau$:
$$W(t+\tau)-W(t) \sim \mathcal{N}(0,2DN\tau)$$
than we are describing the Brownian motion of a particle with diffusion coefficient $D$. To simulate this process we must choose the times at which to evauate the position. For example we can sample the time at uniform intervals with step = $\Delta t$.
How to choose the simulation step $\Delta t$
The choice of the step depends on which properties we want to simulate. For example, let assume we want to simulate a colloidal particle diffusing through a confocal excitation volume of lateral dimension $S_{spot}$. In order to gather significant information we want to sample the particle position may times during the average diffusion time. The average diffusion time can be estimated setting the standard deviation of the displacement ($W(t+\tau)-W(t)\quad$) equal to $S_{spot}$ and solving for $\tau$
$$S_{spot} = \sqrt{2DN\tau_{spot}} \quad \rightarrow \quad \tau_{spot} = \frac{S_{spot}^2}{2ND}$$
so we want our simulation step to be $<< \tau_{spot}$.
Although $\tau_{spot}$ can be derived theorically from $D$ and from the knowledge of the PSF, we know that for typical biomolecules of few nanometers, diffusing through a diffraction limited exciation spot (of visible light), the diffusion
time is of the order of 1ms. Therefore we can safely set the simulation step to 0.5-1μs.
The diffusion coefficient $D$
The diffusion coefficient $D$ is given by:
$$ D = \frac{k_B T}{3 \pi \eta d} $$
$k_B$ Boltzman constant
$T$ temperature in Kelvin
$\eta$ viscosity (in SI units: Pa/s)
$d$ radius of the particle in meters
See also Theory - On Browniam motion and Diffusion coefficient
Note that the units of $D$ are $\mathrm{m}^2/\mathrm{s}\;$. Using some reasonable number we obtain:
End of explanation
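A minimal sketch of the diffusion-time estimate described above, reusing the value of D computed in the previous cell and assuming S_spot = 0.8 μm and N = 3:
S_spot = 0.8e-6 # lateral size of the excitation spot [m]
N = 3 # number of dimensions
tau_spot = S_spot**2 / (2*N*D) # average diffusion time through the spot [s]
print('tau_spot = %.2g s' % tau_spot) # a 0.5-1 us simulation step is much smaller than this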
Du = D*(1e6)**2/(1e3) # [um^2 / ms]
Du
Du = D*(1e9)**2/(1e6) # [nm^2 / us]
Du
Explanation: Or expressing $D$ in $\textrm{nm}^2 /\mu s$
End of explanation
S_spot = 0.8e-6
N = 3
tau_spot = 1e-3
D = S_spot**2 / (2*N*tau_spot) # [m^2 / s]
D
Du = D*(1e6)**2/(1e3) # [um^2 / ms]
Du
Explanation: We can also estimate $D$ experimentally from the knowledge of the PSF and the diffusion time $\tau_{spot}$
$$S_{spot} = \sqrt{2DN\tau_{spot}} \quad \rightarrow \quad D = \frac{S_{spot}^2}{2N\tau_{spot}}$$
Putting some reasonable number we obtain:
End of explanation
time = 10. # seconds
sigma = np.sqrt(2*D*3*time)
print('Displacement (std_dev): %.2f um' % (sigma*1e6))
Explanation: not very different from what we obtained before from the viscosity model.
Examples
How far we travel in X seconds (hint standard deviation of displacement)?
End of explanation
space = 1e-6 # m
time = 1.*space**2/(2*D*3)
print('Time for %.1f um displacement: %.1g s' % (space*1e6, time))
Explanation: How long we need to diffuse an X distance?
End of explanation |
10,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner
(MNE/dSPM/sLORETA/eLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
See inverse_orientation_constraints for related information.
Loading data
Load everything we need to perform source localization on the sample dataset.
Step1: The source space
Let's start by examining the source space as constructed by the
Step2: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flows mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
Step3: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data
Step6: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from a orientation that is
perpendicular to the cortex. The loose parameter of the
Step7: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the | Python Code:
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
Explanation: The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner
(MNE/dSPM/sLORETA/eLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
See inverse_orientation_constraints for related information.
Loading data
Load everything we need to perform source localization on the sample dataset.
End of explanation
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0) # RGB values for a white color
actual_amp = np.ones(dip_len) # misc amp to create Dipole instance
actual_gof = np.ones(dip_len) # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)
fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, surfaces='white',
coord_frame=coord_frame, fig=fig)
# Mark the position of the dipoles with small red dots
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='sphere', subject=subject,
subjects_dir=subjects_dir, coord_frame=coord_frame, scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
Explanation: The source space
Let's start by examining the source space as constructed by the
:func:mne.setup_source_space function. Dipoles are placed along fixed
intervals on the cortex, determined by the spacing parameter. The source
space does not define the orientation for these dipoles.
End of explanation
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the dipoles as arrows pointing along the surface normal
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='arrow', subject=subject,
subjects_dir=subjects_dir, coord_frame='head', scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
Explanation: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flows mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are
fixed to be orthogonal to the surface of the cortex, pointing outwards. Let's
visualize this:
End of explanation
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data:
End of explanation
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the three dipoles defined at each location in the source space
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, fwd=fwd,
surfaces='white', coord_frame='head', fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
Explanation: The direction of the estimated current is now restricted to two directions:
inward and outward. In the plot, blue areas indicate current flowing inwards
and red areas indicate current flowing outwards. Given the curvature of the
cortex, groups of dipoles tend to point in the same direction: the direction
of the electromagnetic field picked up by the sensors.
Loose dipole orientations
Forcing the source dipoles to be strictly orthogonal to the cortex makes the
source estimate sensitive to the spacing of the dipoles along the cortex,
since the curvature of the cortex changes within each ~10 square mm patch.
Furthermore, misalignment of the MEG/EEG and MRI coordinate frames is more
critical when the source dipole orientations are strictly constrained [2]_.
To lift the restriction on the orientation of the dipoles, the inverse
operator has the ability to place not one, but three dipoles at each
location defined by the source space. These three dipoles are placed
orthogonally to form a Cartesian coordinate system. Let's visualize this:
End of explanation
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data:
End of explanation
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
Explanation: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from a orientation that is
perpendicular to the cortex. The loose parameter of the
:func:mne.minimum_norm.make_inverse_operator allows you to specify a value
between 0 (fixed) and 1 (unrestricted or "free") to indicate the amount the
orientation is allowed to deviate from the surface normal.
End of explanation
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
Explanation: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the :func:mne.minimum_norm.apply_inverse function allows you
to specify whether to return the full vector solution ('vector') or
rather the magnitude of the vectors (None, the default) or only the
activity in the direction perpendicular to the cortex ('normal').
End of explanation |
10,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Flower Data
Step2: Standardize Features
Step3: Train Support Vector Classifier
Step4: Create Previously Unseen Observation
Step5: View Predicted Probabilities | Python Code:
# Load libraries
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
Explanation: Title: Calibrate Predicted Probabilities In SVC
Slug: calibrate_predicted_probabilities_in_svc
Summary: How to calibrate predicted probabilities in support vector classifier in Scikit-Learn
Date: 2017-09-22 12:00
Category: Machine Learning
Tags: Support Vector Machines
Authors: Chris Albon
SVC's use of a hyperplane to create decision regions do not naturally output a probability estimate that an observation is a member of a certain class. However, we can in fact output calibrated class probabilities with a few caveats. In an SVC, Platt scaling can be used, wherein first the SVC is trained, then a separate cross-validated logistic regression is trained to map the SVC outputs into probabilities:
$$P(y=1 \mid x)={\frac {1}{1+e^{(A*f(x)+B)}}}$$
where $A$ and $B$ are parameter vectors and $f$ is the $i$th observation's signed distance from the hyperplane. When we have more than two classes, an extension of Platt scaling is used.
In scikit-learn, the predicted probabilities must be generated when the model is being trained. This can be done by setting SVC's probability to True. After the model is trained, we can output the estimated probabilities for each class using predict_proba.
Preliminaries
End of explanation
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
Explanation: Load Iris Flower Data
End of explanation
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create support vector classifier object
svc = SVC(kernel='linear', probability=True, random_state=0)
# Train classifier
model = svc.fit(X_std, y)
Explanation: Train Support Vector Classifier
End of explanation
# Create new observation
new_observation = [[.4, .4, .4, .4]]
Explanation: Create Previously Unseen Observation
End of explanation
# View predicted probabilities
model.predict_proba(new_observation)
Explanation: View Predicted Probabilities
End of explanation |
10,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Yellowbrick Text Examples
This notebook is a sample of the text visualizations that yellowbrick provides
Step2: Load Text Corpus for Example Code
Yellowbrick has provided a text corpus wrangled from the Baleen RSS Corpus to present the following examples. If you haven't downloaded the data, you can do so by running
Step3: Frequency Distribution Visualization
A method for visualizing the frequency of tokens within and across corpora is frequency distribution. A frequency distribution tells us the frequency of each vocabulary item in the text. In general, it could count any kind of observable event. It is a distribution because it tells us how the total number of word tokens in the text are distributed across the vocabulary items.
Step4: Note that the FreqDistVisualizer does not perform any normalization or vectorization, and it expects text that has already be count vectorized.
We first instantiate a FreqDistVisualizer object, and then call fit() on that object with the count vectorized documents and the features (i.e. the words from the corpus), which computes the frequency distribution. The visualizer then plots a bar chart of the top 50 most frequent terms in the corpus, with the terms listed along the x-axis and frequency counts depicted at y-axis values. As with other Yellowbrick visualizers, when the user invokes show(), the finalized visualization is shown.
Step5: Visualizing Stopwords Removal
For example, it is interesting to compare the results of the FreqDistVisualizer before and after stopwords have been removed from the corpus
Step6: Visualizing tokens across corpora
It is also interesting to explore the differences in tokens across a corpus. The hobbies corpus that comes with Yellowbrick has already been categorized (try corpus['categories']), so let's visually compare the differences in the frequency distributions for two of the categories | Python Code:
import os
import sys
# Modify the path
sys.path.append("..")
import yellowbrick as yb
import matplotlib.pyplot as plt
Explanation: Yellowbrick Text Examples
This notebook is a sample of the text visualizations that yellowbrick provides
End of explanation
from download import download_all
from sklearn.datasets.base import Bunch
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"hobbies": os.path.join(FIXTURES, "hobbies")
}
def load_data(name, download=True):
Loads and wrangles the passed in text corpus by name.
If download is specified, this method will download any missing files.
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Read the directories in the directory as the categories.
categories = [
cat for cat in os.listdir(path)
if os.path.isdir(os.path.join(path, cat))
]
files = [] # holds the file names relative to the root
data = [] # holds the text read from the file
target = [] # holds the string of the category
# Load the data from the files in the corpus
for cat in categories:
for name in os.listdir(os.path.join(path, cat)):
files.append(os.path.join(path, cat, name))
target.append(cat)
with open(os.path.join(path, cat, name), 'r') as f:
data.append(f.read())
# Return the data bunch for use similar to the newsgroups example
return Bunch(
categories=categories,
files=files,
data=data,
target=target,
)
corpus = load_data('hobbies')
Explanation: Load Text Corpus for Example Code
Yellowbrick has provided a text corpus wrangled from the Baleen RSS Corpus to present the following examples. If you haven't downloaded the data, you can do so by running:
$ python download.py
In the same directory as the text notebook. Note that this will create a directory called data that contains subdirectories with the provided datasets.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text.freqdist import FreqDistVisualizer
Explanation: Frequency Distribution Visualization
A method for visualizing the frequency of tokens within and across corpora is frequency distribution. A frequency distribution tells us the frequency of each vocabulary item in the text. In general, it could count any kind of observable event. It is a distribution because it tells us how the total number of word tokens in the text are distributed across the vocabulary items.
End of explanation
vectorizer = CountVectorizer()
docs = vectorizer.fit_transform(corpus.data)
features = vectorizer.get_feature_names()
visualizer = FreqDistVisualizer()
visualizer.fit(docs, features)
visualizer.show()
Explanation: Note that the FreqDistVisualizer does not perform any normalization or vectorization, and it expects text that has already be count vectorized.
We first instantiate a FreqDistVisualizer object, and then call fit() on that object with the count vectorized documents and the features (i.e. the words from the corpus), which computes the frequency distribution. The visualizer then plots a bar chart of the top 50 most frequent terms in the corpus, with the terms listed along the x-axis and frequency counts depicted at y-axis values. As with other Yellowbrick visualizers, when the user invokes show(), the finalized visualization is shown.
End of explanation
vectorizer = CountVectorizer(stop_words='english') # the scikit-learn parameter is stop_words
docs = vectorizer.fit_transform(corpus.data)
features = vectorizer.get_feature_names()
visualizer = FreqDistVisualizer()
visualizer.fit(docs, features)
visualizer.show()
Explanation: Visualizing Stopwords Removal
For example, it is interesting to compare the results of the FreqDistVisualizer before and after stopwords have been removed from the corpus:
End of explanation
hobby_types = {}
for category in corpus['categories']:
texts = []
for idx in range(len(corpus['data'])):
if corpus['target'][idx] == category:
texts.append(corpus['data'][idx])
hobby_types[category] = texts
vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(text for text in hobby_types['cooking'])
features = vectorizer.get_feature_names()
visualizer = FreqDistVisualizer()
visualizer.fit(docs, features)
visualizer.show()
vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(text for text in hobby_types['gaming'])
features = vectorizer.get_feature_names()
visualizer = FreqDistVisualizer()
visualizer.fit(docs, features)
visualizer.show()
Explanation: Visualizing tokens across corpora
It is also interesting to explore the differences in tokens across a corpus. The hobbies corpus that comes with Yellowbrick has already been categorized (try corpus['categories']), so let's visually compare the differences in the frequency distributions for two of the categories: "cooking" and "gaming"
End of explanation |
10,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top
Step1: Importing data
Step2: Data preparing and cleaning | Python Code:
from skdata.data import (
SkDataFrame as DataFrame,
SkDataSeries as Series
)
import pandas as pd
Explanation: (table of contents omitted)
SkData - Data Specification
SkData provides a data class to structure and organize the preprocessing data.
End of explanation
df_train = DataFrame(
pd.read_csv('../data/train.csv', index_col='PassengerId')
)
df_train.head()
df_train.summary()
Explanation: Importing data
End of explanation
df_train['Sex'].replace({
'male': 'Male', 'female': 'Female'
}, inplace=True)
df_train['Embarked'].replace({
'C': 'Cherbourg', 'Q': 'Queenstown', 'S': 'Southampton'
}, inplace=True)
df_train.summary()
df_train['Sex'] = df_train['Sex'].astype('category')
df_train['Embarked'] = df_train['Embarked'].astype('category')
df_train.summary()
survived_dict = {0: 'Died', 1: 'Survived'}
pclass_dict = {1: 'Upper Class', 2: 'Middle Class', 3: 'Lower Class'}
# df_train['Pclass'].categorize(categories=pclass_dict)
# df_train['Survived'].categorize(categories=survived_dict)
print('STEPS:')
df_train.steps
Explanation: Data preparing and cleaning
End of explanation |
10,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In data science, it's common to have lots of nearly duplicate data. For instance, you'll find lots of nearly duplicate web pages in crawling the internet; you'll find lots of pictures of the same cat or dog overlaid with slightly different text posted on Twitter as memes.
Near duplicate data just means data that is almost the same. The sentences "Congress returned from recess" and "Congress returned from recess last week" are near duplicates. They are not exactly the same, but they're similar.
Sometimes its helpful to find all such duplicates
Step1: Based on the way we are choosing words, we say that 410 pairs out of 1000 documents have a high enough jaccard to call them similar. This seems realistic enough. We can fiddle with this if we want, by changing the base
Step2: We can also see that the jaccards look normally distributed
Step3: Now let's time this, to see how it increases with the number of documents $n$. We expect it to be $\Theta(n^2)$, because each document is compared to every other document. | Python Code:
import itertools
import string
import functools
letters = string.ascii_lowercase
vocab = list(map(''.join, itertools.product(letters, repeat=2)))
from random import choices
def zipf_pdf(k):
return 1/k**1.07
def exponential_pdf(k, base):
return base**k
def new_document(n_words, pdf):
return set(
choices(
vocab,
weights=map(pdf, range(1, 1+len(vocab))),
k=n_words
)
)
def new_documents(n_documents, n_words, pdf):
return [new_document(n_words, pdf) for _ in range(n_documents)]
def jaccard(a, b):
return len(a & b) / len(a | b)
def all_pairs(documents):
return list(itertools.combinations(documents, 2))
def filter_similar(pairs, cutoff=0.9):
return list(filter(
lambda docs: jaccard(docs[0], docs[1]) > cutoff,
pairs
))
documents = new_documents(1000, 1000, functools.partial(exponential_pdf, base=1.1))
pairs = all_pairs(documents)
Explanation: In data science, it's common to have lots of nearly duplicate data. For instance, you'll find lots of nearly duplicate web pages in crawling the internet; you'll find lots of pictures of the same cat or dog overlaid with slightly different text posted on Twitter as memes.
Near duplicate data just means data that is almost the same. The sentences "Congress returned from recess" and "Congress returned from recess last week" are near duplicates. They are not exactly the same, but they're similar.
Sometimes its helpful to find all such duplicates: either to remove them from the dataset or analyze the dupes themselves. For instance, you might want to find all the groups of memes on Twitter, or delete online comments from bots.
In order to find duplicates, you need a formal way to represent similarity so that a computer can understand it. One commonly used metric is the Jaccard similarity, which is the size of the intersection of two sets divided by the union. We can express the Jaccard in symbols as follows. If $A$ and $B$ are sets and $|A|$ and $|B|$ show the sizes of those sets (sometimes this is called the "cardinality") then:
$$Jaccard(A,B) = \frac{|A \cap B|}{|A \cup B|}$$
If you try calculating a few similarities with pen and paper, you'll quickly get a good intuition for the Jaccard. For instance, say $A = {1,2,3}$ and $B={2,3,4}$. $A \cap B$ just is just the elements which are in $A$ and $B$ = ${2,3}$, which has a size of 2. Similarly ${A \cup B}$ is the elements which are in $A$ or $B$ which is equal to ${1,2,3,4}$, which has a size of 4. Thus, the Jaccard is 2/4 = .5.
Exercise: Try calculating the Jaccard when $A = {1,2,3}$ and $B={1,2,3}$. What happens? How about when $A ={1,2,3}$ and $B={4,5,6}$? Is it possible to have a Jaccard that is lower? How about a Jaccard that is higher?
Now let's say you are trying to find documents that are almost duplicates in your set. You can represent each document as a set of words, assigning each word a unique number. Then you find the Jaccard similarity between all pairs of documents, and find those pairs that have a value greater than, say, $0.9$.
End of explanation
len(filter_similar(pairs))
Explanation: Based on the way we are choosing words, we say that 410 pairs out of 1000 documents have a high enough jaccard to call them similar. This seems realistic enough. We can fiddle with this if we want, by changing the base
End of explanation
jacards = list(map(lambda docs: jaccard(docs[0], docs[1]), pairs))
%matplotlib inline
import seaborn as sns
sns.distplot(jacards)
Explanation: We can also see that the jaccards look normally distributed
End of explanation
def create_and_filter(n_documents):
documents = new_documents(n_documents, 500, functools.partial(exponential_pdf, base=1.1))
pairs = all_pairs(documents)
return filter_similar(pairs)
import timeit
def time_create_and_filter(n_documents):
return timeit.timeit(
'create_and_filter(n)',
globals={
"n": n_documents,
"create_and_filter": create_and_filter
},
number=1
)
import pandas as pd
from tqdm import tnrange, tqdm_notebook
def create_timing_df(ns):
return pd.DataFrame({
'n': ns,
'time': list(map(time_create_and_filter, tqdm_notebook(ns)))
})
df = create_timing_df([2 ** e for e in range(1, 13)])
sns.lmplot(x="n", y="time", data=df, order=2, )
Explanation: Now let's time this, to see how it increases with the number of documents $n$. We expect it to be $\Theta(n^2)$, because each document is compared to every other document.
End of explanation |
10,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resolving Conflicts Using Precedence Declarations
This file shows how shift/reduce and reduce/reduce conflicts can be resolved using operator precedence declarations.
The following grammar is ambiguous because it does not specify the precedence of the arithmetical operators
Step1: Specification of the Parser
Step2: The start variable of our grammar is expr, but we don't have to specify that. The default
start variable is the first variable that is defined.
Step3: The following operator precedence declarations declare that the operators '+'and '-' have a lower precedence than the operators '*'and '/'. The operator '^' has the highest precedence. Furthermore, the declarations specify that the operators '+', '-', '*', and '/' are left associative, while the operator '^' is declared as right associative using the keyword right.
Operators can also be defined as being non-associative using the keyword nonassoc.
Step4: Setting the optional argument write_tables to False <B style="color
Step5: As there are no warnings all conflicts have been resolved using the precedence declarations.
Let's look at the action table that is generated.
Step6: The function test(s) takes a string s as its argument an tries to parse this string. If all goes well, an abstract syntax tree is returned.
If the string can't be parsed, an error message is printed by the parser. | Python Code:
import ply.lex as lex
tokens = [ 'NUMBER' ]
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
literals = ['+', '-', '*', '/', '^', '(', ')']
t_ignore = ' \t'
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
def t_error(t):
print(f"Illegal character '{t.value[0]}'")
t.lexer.skip(1)
__file__ = 'main'
lexer = lex.lex()
Explanation: Resolving Conflicts Using Precedence Declarations
This file shows how shift/reduce and reduce/reduce conflicts can be resolved using operator precedence declarations.
The following grammar is ambiguous because it does not specify the precedence of the arithmetical operators:
expr : expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| expr '^' expr
| '(' expr ')'
| NUMBER
;
We will see how the use of precedence declarations can be used to resolve shift/reduce-conflicts.
Specification of the Scanner
We implement a minimal scanner for arithmetic expressions.
End of explanation
import ply.yacc as yacc
Explanation: Specification of the Parser
End of explanation
start = 'expr'
Explanation: The start variable of our grammar is expr, but we don't have to specify that. The default
start variable is the first variable that is defined.
End of explanation
precedence = (
('left', '+', '-') , # precedence 1
('left', '*', '/'), # precedence 2
('right', '^') # precedence 3
)
def p_expr_plus(p):
"expr : expr '+' expr"
p[0] = ('+', p[1], p[3])
def p_expr_minus(p):
"expr : expr '-' expr"
p[0] = ('-', p[1], p[3])
def p_expr_mult(p):
"expr : expr '*' expr"
p[0] = ('*', p[1], p[3])
def p_expr_div(p):
"expr : expr '/' expr"
p[0] = ('/', p[1], p[3])
def p_expr_power(p):
"expr : expr '^' expr"
p[0] = ('^', p[1], p[3])
def p_expr_paren(p):
"expr : '(' expr ')'"
p[0] = p[2]
def p_expr_NUMBER(p):
"expr : NUMBER"
p[0] = p[1]
def p_error(p):
if p:
print(f"Syntax error at character number {p.lexer.lexpos} at token '{p.value}' in line {p.lexer.lineno}.")
else:
print('Syntax error at end of input.')
Explanation: The following operator precedence declarations declare that the operators '+'and '-' have a lower precedence than the operators '*'and '/'. The operator '^' has the highest precedence. Furthermore, the declarations specify that the operators '+', '-', '*', and '/' are left associative, while the operator '^' is declared as right associative using the keyword right.
Operators can also be defined as being non-associative using the keyword nonassoc.
End of explanation
parser = yacc.yacc(write_tables=False, debug=True)
Explanation: Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
!type parser.out
!cat parser.out
%run ../ANTLR4-Python/AST-2-Dot.ipynb
Explanation: As there are no warnings all conflicts have been resolved using the precedence declarations.
Let's look at the action table that is generated.
End of explanation
def test(s):
t = yacc.parse(s)
d = tuple2dot(t)
display(d)
return t
test('2^3*4+5')
test('1+2*3^4')
test('1 + 2 * -3^4')
Explanation: The function test(s) takes a string s as its argument an tries to parse this string. If all goes well, an abstract syntax tree is returned.
If the string can't be parsed, an error message is printed by the parser.
End of explanation |
10,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interlab from iGEM 2015
Step1: Configuring and reading your data
You first need to map each column in your plate to a colony or a control.
Step2: Then, we an instance of PlateMate
Step3: The variable pm above is what you will be using to read and parse your data, plot it, and analyze it. It will consider the plate mapping you have defined by colonyNames and controlNames. For instance, you can retrieve this information by using two functions below
Step4: <br />
Let's get started with importing our data. During this process, PlateMate will look for all files that follow a pattern (in our case, each file starts with "medida"). Then, it will read each of those files and parse them. This is usually a fast process, even for larger data sets, and should take only a fraction of section to complete.
Step5: Now our instance pm has all information about the plate readings. We can get a summary from one of the well sets by using the function summary(). As an example, let us check the data from LB wells.
Step6: Each row displayed above represents a different measure in time (in our case, each measurement were spaced by 1 hour). Thus, we're looking at the data from the first 3 hours. If you look back in our map, all wells with LB were in column A. Each number with A refers to a different row on the actual plate, i.e., A04 represents column A row 4 in the original plate.
You can always retrieve the whole data from a population by using getFluorescence().
Step7: Similarly, you can check the optical density associated with that particular population.
Step8: Simple plotting
Because the file also contained information about the temperature at the time of the reading, platemate will also sotre it for any possible analysis.
Step9: Using statistical tests, ANOVA and post-hoc Tukey's HSD test
<br />
An important part of comparing expressions of different devices/genes is the use of appropriate statistical testing. For instance, suppose that you want to find out if the response presented by Dev1 is significantly stronger than the response presented by the negative control 1. This can be solved by a simple statistical test such as testing Mann-Whitney's $U$ statistics. Below, we're showing the value evaluated for $U$ and p-value when comparing Dev1, Dev2 and Dev3 with -control1.
Step10: The result above shows that we cannot rule out the hypothesis that Dev3 is not significantly larger than the negative control. This completely agrees with the bar plots comparing all devices and the negative control. In other words, this basically shows that no significant expression was observed in our device 3.
Is the expression of device 3 at least stronger than LB?
Step11: This shows that it is, although the negative control also shows a response significantly larger than the LB medium.
<br />
Using all possible combinations is not the proper way to study multiple populations. Kruskal-Wallis' and (Post-Hoc) Dunn's tests are an extension of the $U$ statistics for comparing multiple populations at once.
<br />
Because ANOVA is a very popular test, platemate can perform ANOVA followed by a post-hoc test using Tukey's HSD. | Python Code:
%matplotlib inline
import pylab as pl
from math import sqrt
import sys
# importing platemate
sys.path.insert(0, '../src')
import platemate
Explanation: Interlab from iGEM 2015
End of explanation
ColumnNames = {
'C' : "Dev1",
'D' : "Dev2",
'E' : "Dev3"
}
controlNames = {
'A' : "LB",
'B' : "LB+Cam",
'F' : "+control",
'G' : "-control1",
'H' : "-control2"
}
Explanation: Configuring and reading your data
You first need to map each column in your plate to a colony or a control.
End of explanation
reload(platemate)
pm = platemate.PlateMate( colonyNames = ColumnNames, controlNames = controlNames )
Explanation: Then, we create an instance of PlateMate:
End of explanation
print pm.getColonyNames()
print pm.getControlNames()
Explanation: The variable pm above is what you will be using to read and parse your data, plot it, and analyze it. It will consider the plate mapping you have defined by colonyNames and controlNames. For instance, you can retrieve this information by using two functions below:
End of explanation
pm.findFiles("medida")
pm.readFluorescence()
pm.readOpticalDensity()
Explanation: <br />
Let's get started with importing our data. During this process, PlateMate will look for all files that follow a pattern (in our case, each file starts with "medida"). Then, it will read each of those files and parse them. This is usually a fast process, even for larger data sets, and should take only a fraction of section to complete.
End of explanation
pm.summary("LB")
Explanation: Now our instance pm has all information about the plate readings. We can get a summary from one of the well sets by using the function summary(). As an example, let us check the data from LB wells.
End of explanation
pm.getFluorescence("LB")
Explanation: Each row displayed above represents a different measure in time (in our case, each measurement were spaced by 1 hour). Thus, we're looking at the data from the first 3 hours. If you look back in our map, all wells with LB were in column A. Each number with A refers to a different row on the actual plate, i.e., A04 represents column A row 4 in the original plate.
You can always retrieve the whole data from a population by using getFluorescence().
End of explanation
pm.getOpticalDensity("LB")
Explanation: Similarly, you can check the optical density associated with that particular population.
End of explanation
# retrieving temperature
Temperature = pm.getTemperature()
# printing mean, min and max.
print "Average temperature: %4.1f'C" % ( Temperature.mean() )
print "Temperature range: %4.1f'C - %4.1f'C" % (Temperature.min(), Temperature.max())
pm.plotTemperature()
pl.show()
print "Plotting all wells for each population"
pl.figure(figsize=(6,4.5))
pm.plotIt(["Dev1","Dev2","Dev3"])
pl.show()
print "Plotting averages for each population"
pl.figure(figsize=(6,4.5))
pm.plotMean(["Dev1","Dev2","Dev3"])
pl.show()
print "Plotting averages and 1-std intervals for each population"
pl.figure(figsize=(6,4.5))
pm.plotFuzzyMean(["Dev1","Dev2","Dev3"])
pl.show()
pm.compareFluorescence("LB","Dev1")
pl.figure(figsize=(7,4))
pm.plotBars(["Dev1","Dev2","Dev3","-control1"], 5)
pl.show()
Explanation: Simple plotting
Because the file also contained information about the temperature at the time of the reading, platemate will also sotre it for any possible analysis.
End of explanation
print "Device 1 vs -control1:"
print pm.compareFluorescence("Dev1","-control1")
print "Device 2 vs -control1:"
print pm.compareFluorescence("Dev2","-control1")
print "Device 3 vs -control1:"
print pm.compareFluorescence("Dev3","-control1")
Explanation: Using statistical tests, ANOVA and post-hoc Tukey's HSD test
<br />
An important part of comparing expressions of different devices/genes is the use of appropriate statistical testing. For instance, suppose that you want to find out if the response presented by Dev1 is significantly stronger than the response presented by the negative control 1. This can be solved by a simple statistical test such as testing Mann-Whitney's $U$ statistics. Below, we're showing the value evaluated for $U$ and p-value when comparing Dev1, Dev2 and Dev3 with -control1.
End of explanation
print "Device 3 vs LB:"
print pm.compareFluorescence("Dev3","LB")
print "-control vs LB:"
print pm.compareFluorescence("-control1","LB")
Explanation: The result above shows that we cannot rule out the hypothesis that Dev3 is not significantly larger than the negative control. This completely agrees with the bar plots comparing all devices and the negative control. In other words, this basically shows that no significant expression was observed in our device 3.
Is the expression of device 3 at least stronger than LB?
End of explanation
pm.ANOVA(["Dev1","Dev2","Dev3","-control1"])
pm.TukeyHSD(["Dev1","Dev2","Dev3","-control1"])
Explanation: This shows that it is, although the negative control also shows a response significantly larger than the LB medium.
<br />
Using all possible combinations is not the proper way to study multiple populations. Kruskal-Wallis' and (Post-Hoc) Dunn's tests are an extension of the $U$ statistics for comparing multiple populations at once.
<br />
Because ANOVA is a very popular test, platemate can perform ANOVA followed by a post-hoc test using Tukey's HSD.
End of explanation |
10,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 6
Step1: Each "letter" of a string again belongs to the string type. A string of length one is called a character.
Step2: Since computers store data in binary, the designers of early computers (1960s) created a code called ASCII (American Standard Code for Information Interchange) to associate to each character a number between 0 and 127. Every number between 0 and 127 is represented in binary by 7 bits (between 0000000 and 1111111), and so each character is stored with 7 bits of memory. Later, ASCII was extended with another 128 characters, so that codes between 0 and 255 were used, requiring 8 bits. 8 bits of memory is called a byte. One byte of memory suffices to store one (extended ASCII) character.
You might notice that there are 256 ASCII codes available, but there are fewer than 256 characters available on your keyboard, even once you include symbols like # and ;. Some of these "extra" codes are for accented letters, and others are relics of old computers. For example, ASCII code 7 (0000111) stands for the "bell", and older readers might remember making the Apple II computer beep by pressing Control-G on the keyboard ("G" is the 7th letter). You can look up a full ASCII table if you're curious.
Nowadays, the global community of computer users requires far more than 256 "letters" -- there are many alphabets around the world! So instead of ASCII, we can access over 100 thousand unicode characters. Scroll through a unicode table to see what is possible. In Python version 3.x, all strings are considered in Unicode, but in Python 2.7 (which we use), it's a bit trickier to work with Unicode.
Here we stay within ASCII codes, since they will suffice for basic English messages. Python has built-in commands chr and ord for converting from code-number (0--255) to character and back again.
Step3: The following code will produce a table of the ASCII characters with codes between 32 and 126. This is a good range which includes all the most common English characters and symbols on a U.S. keyboard. Note that ASCII code 32 corresponds to an empty space (an important character for long messages!)
Step4: Since we only work with the ASCII range between 32 and 126, it will be useful to "cycle" other numbers into this range. For example, we will interpret 127 as 32, 128 as 33, etc., when we convert out-of-range numbers into characters.
The following function forces a number into a given range, using the mod operator. It's a common trick, to make lists loop around cyclically.
Step5: Now we can implement a substitution cipher by converting characters to their ASCII codes, shuffling the codes, and converting back. One of the simplest substitution ciphers is called a Caesar cipher, in which each character is shifted -- by a fixed amount -- down the list. For example, a Caesar cipher of shift 3 would send 'A' to 'D' and 'B' to 'E', etc.. Near the end of the list, characters are shifted back to the beginning -- the list is considered cyclicly, using our inrange function.
Here is an implementation of the Caesar cipher, using the ASCII range between 32 and 126. We begin with a function to shift a single character.
Step6: Let's see the effect of the Caesar cipher on our ASCII table.
Step7: Now we can use the Caesar cipher to encrypt strings.
Step8: As designed, the Caesar cipher turns plaintext into ciphertext by using a shift of the ASCII table. To decipher the ciphertext, one can just use the Caesar cipher again, with the negative shift.
Step9: The Vigenère cipher
The Caesar cipher is pretty easy to break, by a brute force attack (shift by all possible values) or a frequency analysis (compare the frequency of characters in a message to the frequency of characters in typical English messages, to make a guess).
The Vigenère cipher is a variant of the Caesar cipher which uses an encryption key to vary the shift-parameter throughout the encryption process. For example, to encrypt the message "This is very secret" using the key "Key", you line up the characters of the message above repeated copies of the key.
T | h | i | s | | i | s | | v | e | r | y | | s | e | c | r | e | t
--|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--
K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K
Then, you turn everything into ASCII (or your preferred numerical system), and use the bottom row to shift the top row.
ASCII message | 84 | 104 | 105 | 115 | 32 | 105 | 115 | 32 | 118 | 101 | 114 | 121 | 32 | 115 | 101 | 99 | 114 | 101 | 116
---|-----|-----
Shift | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75
ASCII shifted | 159 | 205 | 226 | 190 | 133 | 226 | 190 | 133 | 239 | 176 | 215 | 242 | 107 | 216 | 222 | 174 | 215 | 222 | 191
ASCII shifted in range | 64 | 110 | 36 | 95 | 38 | 36 | 95 | 38 | 49 | 81 | 120 | 52 | 107 | 121 | 32 | 79 | 120 | 32 | 96
Finally, the shifted ASCII codes are converted back into characters for transmission. In this case, the codes 64,110,36,95, etc., are converted to the ciphertext "@n$_&$_&1Qx4ky Ox \`"
The Vigenère cipher is much harder to crack than the Caesar cipher, if you don't have the key. Indeed, the varying shifts make frequency analysis more difficult. The Vigenère cipher is weak by today's standards (see Wikipedia for a description of 19th century attacks), but illustrates the basic actors in a symmetric key cryptosystem
Step10: The Vigenère cipher is called a symmetric cryptosystem, because the same key that is used to encrypt the plaintext can be used to decrypt the ciphertext. All we do is subtract the shift at each stage.
Step11: The Vigenère cipher becomes an effective way for two parties to communicate securely, as long as they share a secret key. In the 19th century, this often meant that the parties would require an initial in-person meeting to agree upon a key, or a well-guarded messenger would carry the key from one party to the other.
Today, as we wish to communicate securely over long distances on a regular basis, the process of agreeing on a key is more difficult. It seems like a chicken-and-egg problem, where we need a shared secret to communicate securely, but we can't share a secret without communicating securely in the first place!
Remarkably, this secret-sharing problem can be solved with some modular arithmetic tricks. This is the subject of the next section.
Exercises
A Caesar cipher was used to encode a message, with the resulting ciphertext
Step12: A theorem of Gauss states that, if $p$ is prime, there exists an integer $b$ whose order is precisely $p-1$ (as big as possible!). Such an integer is called a primitive root modulo $p$. For example, the previous computation found 12 primitive roots modulo $37$
Step13: This would not be very useful if we couldn't find Sophie Germain primes. Fortunately, they are not so rare. The first few are 2, 3, 5, 11, 23, 29, 41, 53, 83, 89. It is expected, but unproven that there are infinitely many Sophie Germain primes. In practice, they occur fairly often. If we consider numbers of magnitude $N$, about $1 / \log(N)$ of them are prime. Among such primes, we expect about $1.3 / \log(N)$ to be Sophie Germain primes. In this way, we can expect to stumble upon Sophie Germain primes if we search for a bit (and if $\log(N)^2$ is not too large).
The code below tests whether a number $p$ is a Sophie Germain prime. We construct it by simply testing whether $p$ and $2p+1$ are both prime. We use the Miller-Rabin test (the code from the previous Python notebook) in order to test whether each is prime.
Step14: Let's test this out by finding the Sophie Germain primes up to 100, and their associated safe primes.
Step15: Next, we find the first 100-digit Sophie Germain prime! This might take a minute!
Step16: In the seconds or minutes your computer was running, it checked the primality of almost 90 thousand numbers, each with 100 digits. Not bad!
The Diffie-Hellman protocol
When we study protocols for secure communication, we must keep track of the communicating parties (often called Alice and Bob), and who has knowledge of what information. We assume at all times that the "wire" between Alice and Bob is tapped -- anything they say to each other is actively monitored, and is therefore public knowledge. We also assume that what happens on Alice's private computer is private to Alice, and what happens on Bob's private computer is private to Bob. Of course, these last two assumptions are big assumptions -- they point towards the danger of computer viruses which infect computers and can violate such privacy!
The goal of the Diffie-Hellman protocol is -- at the end of the process -- for Alice and Bob to share a secret without ever having communicated the secret with each other. The process involves a series of modular arithmetic calculations performed on each of Alice and Bob's computers.
The process begins when Alice or Bob creates and publicizes a large prime number p and a primitive root g modulo p. It is best, for efficiency and security, to choose a safe prime p. Alice and Bob can create their own safe prime, or choose one from a public list online, e.g., from the RFC 3526 memo. Nowadays, it's common to take p with 2048 bits, i.e., a prime which is between $2^{2047}$ and $2^{2048}$ (a number with 617 decimal digits!).
For the purposes of this introduction, we use a smaller safe prime, with about 256 bits. We use the SystemRandom functionality of the random package to create a good random prime. It is not so much of an issue here, but in general one must be very careful in cryptography that one's "random" numbers are really "random"! The SystemRandom function uses chaotic properties of your computer's innards in order to initialize a random number generator, and is considered cryptographically secure.
Step17: The function above searches and searches among random numbers until it finds a Sophie Germain prime. The (possibly endless!) search is performed with a while True
Step18: Next we find a primitive root, modulo the safe prime p.
Step19: The pair of numbers $(g, p)$, the primitive root and the safe prime, chosen by either Alice or Bob, is now made public. They can post their $g$ and $p$ on a public website or shout it in the streets. It doesn't matter. They are just tools for their secret-creation algorithm below.
Alice and Bob's private secrets
Next, Alice and Bob invent private secret numbers $a$ and $b$. They do not tell anyone these numbers. Not each other. Not their family. Nobody. They don't write them on a chalkboard, or leave them on a thumbdrive that they lose. These are really secret.
But they don't use their phone numbers, or social security numbers. It's best for Alice and Bob to use a secure random number generator on their separate private computers to create $a$ and $b$. They are often 256 bit numbers in practice, so that's what we use below.
Step20: Now Alice and Bob use their secrets to generate new numbers. Alice computes the number
$$A = g^a \text{ mod } p,$$
and Bob computes the number
$$B = g^b \text{ mod } p.$$
Step21: Now Alice and Bob do something that seems very strange at first. Alice sends Bob her new number $A$ and Bob sends Alice his new number $B$. Since they are far apart, and the channel is insecure, we can assume everyone in the world now knows $A$ and $B$.
Step22: Now Alice, on her private computer, computes $B^a$ mod $p$. She can do that because everyone knows $B$ and $p$, and she knows $a$ too.
Similarly, Bob, on his private computer, computes $A^b$ mod $p$. He can do that because everyone knows $A$ and $p$, and he knows $b$ too.
Alice and Bob do not share the results of their computations!
Step23: Woah! What happened? In terms of exponents, it's elementary. For
$$B^a = (g^{b})^a = g^{ba} = g^{ab} = (g^a)^b = A^b.$$
So these two computations yield the same result (mod $p$, the whole way through).
In the end, we find that Alice and Bob share a secret. We call this secret number $S$.
$$S = B^a = A^b.$$
Step24: This common secret $S$ can be used as a key for Alice and Bob to communicate hereafter. For example, they might use $S$ (converted to a string, if needed) as the key for a Vigenère cipher, and chat with each other knowing that only they have the secret key to encrypt and decrypt their messages. | Python Code:
W = "Hello"
print W
for j in range(len(W)): # len(W) is the length of the string W.
print W[j] # Access the jth character of the string.
Explanation: Part 6: Ciphers and Key exchange
In this notebook, we introduce cryptography -- how to communicate securely over insecure channels. We begin with a study of two basic ciphers, the Caesar cipher and its fancier variant, the Vigenère cipher. The Vigenère cipher uses a key to turn plaintext (i.e., the message) into ciphertext (the coded message), and uses the same key to turn the ciphertext back into plaintext. Therefore, two parties can communicate securely if they -- and only they -- possess the key.
If the security of communication rests on possession of a common key, then we're left with a new problem: how do the two parties agree on a common key, especially if they are far apart and communicating over an insecure channel?
A clever solution to this problem was published in 1976 by Whitfield Diffie and Martin Hellman, and so it's called Diffie-Hellman key exchange. It takes advantage of modular arithmetic: the existence of a primitive root (modulo a prime) and the difficulty of solving the discrete logarithm problem.
This part complements Chapter 6 of An Illustrated Theory of Numbers.
Table of Contents
Ciphers
Key exchange
<a id='cipher'></a>
Ciphers
A cipher is a way of transforming a message, called the plaintext into a different form, the ciphertext, which conceals the meaning to all but the intended recipient(s). A cipher is a code, and can take many forms. A substitution cipher might simply change every letter to a different letter in the alphabet. This is the idea behind "Cryptoquip" puzzles. These are not too hard for people to solve, and are easy for computers to solve, using frequency analysis (understanding how often different letters or letter-combinations occur).
ASCII code and the Caesar cipher
Even though substitution ciphers are easy to break, they are a good starting point. To implement substitution ciphers in Python, we need to study the string type in a bit more detail. To declare a string variable, just put your string in quotes. You can use any letters, numbers, spaces, and many symbols inside a string. You can enclose your string by single quotes, like 'Hello' or double-quotes, like "Hello". This flexibility is convenient, if you want to use quotes within your string. For example, the string Prince's favorite prime is 1999 should be described in Python with double-quotes "Prince's favorite prime is 1999" so that the apostrophe doesn't confuse things.
Strings are indexed, and their letters can be retrieved as if the string were a list of letters. Python experts will note that strings are immutable while lists are mutable objects, but we aren't going to worry about that here.
End of explanation
print type(W)
print type(W[0]) # W[0] is a character.
Explanation: Each "letter" of a string again belongs to the string type. A string of length one is called a character.
End of explanation
chr(65)
ord('A')
Explanation: Since computers store data in binary, the designers of early computers (1960s) created a code called ASCII (American Standard Code for Information Interchange) to associate to each character a number between 0 and 127. Every number between 0 and 127 is represented in binary by 7 bits (between 0000000 and 1111111), and so each character is stored with 7 bits of memory. Later, ASCII was extended with another 128 characters, so that codes between 0 and 255 were used, requiring 8 bits. 8 bits of memory is called a byte. One byte of memory suffices to store one (extended ASCII) character.
You might notice that there are 256 ASCII codes available, but there are fewer than 256 characters available on your keyboard, even once you include symbols like # and ;. Some of these "extra" codes are for accented letters, and others are relics of old computers. For example, ASCII code 7 (0000111) stands for the "bell", and older readers might remember making the Apple II computer beep by pressing Control-G on the keyboard ("G" is the 7th letter). You can look up a full ASCII table if you're curious.
Nowadays, the global community of computer users requires far more than 256 "letters" -- there are many alphabets around the world! So instead of ASCII, we can access over 100 thousand unicode characters. Scroll through a unicode table to see what is possible. In Python version 3.x, all strings are considered in Unicode, but in Python 2.7 (which we use), it's a bit trickier to work with Unicode.
Here we stay within ASCII codes, since they will suffice for basic English messages. Python has built-in commands chr and ord for converting from code-number (0--255) to character and back again.
End of explanation
for a in range(32,127):
c = chr(a)
print "ASCII %d is %s"%(a, c)
Explanation: The following code will produce a table of the ASCII characters with codes between 32 and 126. This is a good range which includes all the most common English characters and symbols on a U.S. keyboard. Note that ASCII code 32 corresponds to an empty space (an important character for long messages!)
End of explanation
def inrange(n,range_min, range_max):
'''
The input number n can be any integer.
The output number will be between range_min and range_max (inclusive)
If the input number is already within range, it will not change.
'''
    range_len = range_max - range_min + 1
    # Shift so the range starts at 0, reduce modulo the range length, then shift back.
    # This works for any range, not just ranges that start near 0.
    a = (n - range_min) % range_len + range_min
    return a
inrange(13,1,10)
inrange(17,5,50)
Explanation: Since we only work with the ASCII range between 32 and 126, it will be useful to "cycle" other numbers into this range. For example, we will interpret 127 as 32, 128 as 33, etc., when we convert out-of-range numbers into characters.
The following function forces a number into a given range, using the mod operator. It's a common trick, to make lists loop around cyclically.
End of explanation
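For example, a quick sanity check of the wrap-around behaviour described above (using the inrange function just defined): values past 126 cycle back to the start of the range.
print(inrange(126, 32, 126))  # stays 126
print(inrange(127, 32, 126))  # wraps around to 32
print(inrange(128, 32, 126))  # wraps around to 33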
def Caesar_shift(c, shift):
'''
Shifts the character c by shift units
within the ASCII table between 32 and 126.
The shift parameter can be any integer!
'''
ascii = ord(c)
a = ascii + shift # Now we have a number between 32+shift and 126+shift.
a = inrange(a,32,126) # Put the number back in range.
return chr(a)
Explanation: Now we can implement a substitution cipher by converting characters to their ASCII codes, shuffling the codes, and converting back. One of the simplest substitution ciphers is called a Caesar cipher, in which each character is shifted -- by a fixed amount -- down the list. For example, a Caesar cipher of shift 3 would send 'A' to 'D' and 'B' to 'E', etc.. Near the end of the list, characters are shifted back to the beginning -- the list is considered cyclicly, using our inrange function.
Here is an implementation of the Caesar cipher, using the ASCII range between 32 and 126. We begin with a function to shift a single character.
End of explanation
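As a one-line sanity check of the shift described above, a Caesar shift of 3 should send 'A' to 'D' (this simply calls the Caesar_shift function defined above).
print(Caesar_shift('A', 3))  # 'D'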
for a in range(32,127):
c = chr(a)
print "ASCII %d is %s, which shifts to %s"%(a, c, Caesar_shift(c,5)) # Shift by 5.
Explanation: Let's see the effect of the Caesar cipher on our ASCII table.
End of explanation
def Caesar_cipher(plaintext, shift):
ciphertext = ''
for c in plaintext: # Iterate through the characters of a string.
ciphertext = ciphertext + Caesar_shift(c,shift)
return ciphertext
print Caesar_cipher('Hello! Can you read this?', 5) # Shift forward 5 units in ASCII.
Explanation: Now we can use the Caesar cipher to encrypt strings.
End of explanation
print Caesar_cipher('Mjqqt&%%Hfs%~tz%wjfi%ymnxD', -5) # Shift back 5 units in ASCII.
Explanation: As designed, the Caesar cipher turns plaintext into ciphertext by using a shift of the ASCII table. To decipher the ciphertext, one can just use the Caesar cipher again, with the negative shift.
End of explanation
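A quick round-trip check makes this concrete: shifting forward and then backward by the same amount should recover the original message exactly.
message = 'Hello! Can you read this?'
print(Caesar_cipher(Caesar_cipher(message, 5), -5) == message)  # True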
def Vigenere_cipher(plaintext, key):
ciphertext = '' # Start with an empty string
for j in range(len(plaintext)):
c = plaintext[j] # the jth letter of the plaintext
key_index = j % len(key) # Cycle through letters of the key.
shift = ord(key[key_index]) # How much we shift c by.
ciphertext = ciphertext + Caesar_shift(c,shift) # Add new letter to ciphertext
return ciphertext
print Vigenere_cipher('This is very secret', 'Key') # 'Key' is probably a bad key!!
Explanation: The Vigenère cipher
The Caesar cipher is pretty easy to break, by a brute force attack (shift by all possible values) or a frequency analysis (compare the frequency of characters in a message to the frequency of characters in typical English messages, to make a guess).
The Vigenère cipher is a variant of the Caesar cipher which uses an encryption key to vary the shift-parameter throughout the encryption process. For example, to encrypt the message "This is very secret" using the key "Key", you line up the characters of the message above repeated copies of the key.
T | h | i | s | | i | s | | v | e | r | y | | s | e | c | r | e | t
--|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--
K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K
Then, you turn everything into ASCII (or your preferred numerical system), and use the bottom row to shift the top row.
ASCII message | 84 | 104 | 105 | 115 | 32 | 105 | 115 | 32 | 118 | 101 | 114 | 121 | 32 | 115 | 101 | 99 | 114 | 101 | 116
---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----
Shift | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75
ASCII shifted | 159 | 205 | 226 | 190 | 133 | 226 | 190 | 133 | 239 | 176 | 215 | 242 | 107 | 216 | 222 | 174 | 215 | 222 | 191
ASCII shifted in range | 64 | 110 | 36 | 95 | 38 | 36 | 95 | 38 | 49 | 81 | 120 | 52 | 107 | 121 | 32 | 79 | 120 | 32 | 96
Finally, the shifted ASCII codes are converted back into characters for transmission. In this case, the codes 64,110,36,95, etc., are converted to the ciphertext "@n$_&$_&1Qx4ky Ox \`"
The Vigenère cipher is much harder to crack than the Caesar cipher, if you don't have the key. Indeed, the varying shifts make frequency analysis more difficult. The Vigenère cipher is weak by today's standards (see Wikipedia for a description of 19th century attacks), but illustrates the basic actors in a symmetric key cryptosystem: the plaintext, ciphertext, and a single key. Today, symmetric key cryptosystems like AES and 3DES are used all the time for secure communication.
Below, we implement the Vigenère cipher.
End of explanation
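The first column of the worked table above can be reproduced directly with the functions we already have: 'T' (ASCII 84) shifted by 'K' (ASCII 75) gives 159, which wraps back into range as 64, i.e. '@'.
print(Caesar_shift('T', ord('K')))       # '@'
print(ord(Caesar_shift('T', ord('K'))))  # 64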
def Vigenere_decipher(ciphertext, key):
plaintext = '' # Start with an empty string
for j in range(len(ciphertext)):
c = ciphertext[j] # the jth letter of the ciphertext
key_index = j % len(key) # Cycle through letters of the key.
shift = - ord(key[key_index]) # Note the negative sign to decipher!
plaintext = plaintext + Caesar_shift(c,shift) # Add new letter to plaintext
return plaintext
Vigenere_decipher('@n$_&$_&1Qx4ky Ox `', 'Key')
# Try a few cipher/deciphers yourself to get used to the Vigenere system.
Explanation: The Vigenère cipher is called a symmetric cryptosystem, because the same key that is used to encrypt the plaintext can be used to decrypt the ciphertext. All we do is subtract the shift at each stage.
End of explanation
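A short check of the symmetry claim: enciphering and then deciphering with the same key should return the plaintext unchanged.
secret = 'This is very secret'
print(Vigenere_decipher(Vigenere_cipher(secret, 'Key'), 'Key') == secret)  # True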
def mult_order(a,p):
'''
Determines the (multiplicative) order of an integer
a, modulo p. Here p is prime, and GCD(a,p) = 1.
If bad inputs are used, this might lead to a
never-ending loop!
'''
current_number = a % p
current_exponent = 1
while current_number != 1:
current_number = (current_number * a)%p
current_exponent = current_exponent + 1
return current_exponent
for j in range(1,37):
print "The multiplicative order of %d modulo 37 is %d"%(j,mult_order(j,37))
# These orders should all be divisors of 36.
Explanation: The Vigenère cipher becomes an effective way for two parties to communicate securely, as long as they share a secret key. In the 19th century, this often meant that the parties would require an initial in-person meeting to agree upon a key, or a well-guarded messenger would carry the key from one party to the other.
Today, as we wish to communicate securely over long distances on a regular basis, the process of agreeing on a key is more difficult. It seems like a chicken-and-egg problem, where we need a shared secret to communicate securely, but we can't share a secret without communicating securely in the first place!
Remarkably, this secret-sharing problem can be solved with some modular arithmetic tricks. This is the subject of the next section.
Exercises
A Caesar cipher was used to encode a message, with the resulting ciphertext: 'j!\'1r$v1"$v&&+1t}v(v$2'. Use a loop (brute force attack) to figure out the original message.
Imagine that you encrypt a long message (e.g., 1000 words of standard English) with a Vigenère cipher. How might you detect the length of the key, if it is short (e.g. 3 or 4 characters)?
Consider running a plaintext message through a Vigenère cipher with a 3-character key, and then running the ciphertext through a Vigenère cipher with a 4-character key. Explain how this is equivalent to running the original message through a single cipher with a 12-character key.
<a id='keyexchange'></a>
Key exchange
Now we study Diffie-Hellman key exchange, a remarkable way for two parties to share a secret without ever needing to directly communicate the secret with each other. Their method is based on properties of modular exponentiation and the existence of a primitive root modulo prime numbers.
Primitive roots and Sophie Germain primes
If $p$ is a prime number, and $GCD(a,p) = 1$, then recall Fermat's Little Theorem: $$a^{p-1} \equiv 1 \text{ mod } p.$$
It may be the case that $a^\ell \equiv 1$ mod $p$ for some smaller (positive) value of $\ell$ however. The smallest such positive value of $\ell$ is called the order (multiplicative order, to be precise) of $a$ modulo $p$, and it is always a divisor of $p-1$.
The following code determines the order of a number, mod $p$, with a brute force approach.
End of explanation
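Before moving on, here is a quick numerical check of Fermat's Little Theorem for p = 37, together with the fact that the multiplicative order always divides p - 1 = 36 (this uses the mult_order function defined above).
for a in [2, 5, 10]:
    assert pow(a, 36, 37) == 1          # Fermat's Little Theorem
    assert 36 % mult_order(a, 37) == 0  # the order divides p - 1
print('Fermat and order checks passed for 2, 5, 10 mod 37')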
def is_primroot_safe(b,p):
'''
Checks whether b is a primitive root modulo p,
when p is a safe prime. If p is not safe,
the results will not be good!
'''
q = (p-1) / 2 # q is the Sophie Germain prime
if b%p == 1: # Is the multiplicative order 1?
return False
if (b*b)%p == 1: # Is the multiplicative order 2?
return False
if pow(b,q,p) == 1: # Is the multiplicative order q?
return False
return True # If not, then b is a primitive root mod p.
Explanation: A theorem of Gauss states that, if $p$ is prime, there exists an integer $b$ whose order is precisely $p-1$ (as big as possible!). Such an integer is called a primitive root modulo $p$. For example, the previous computation found 12 primitive roots modulo $37$: they are 2,5,13,15,17,18,19,20,22,24,32,35. To see these illustrated (mod 37), check out this poster (yes, that is blatant self-promotion!)
For everything that follows, suppose that $p$ is a prime number. Not only do primitive roots exist mod $p$, but they are pretty common. In fact, the number of primitive roots mod $p$ equals $\phi(p-1)$, where $\phi$ denotes Euler's totient. On average, $\phi(n)$ is about $6 / \pi^2$ times $n$ (for positive integers $n$). While numbers of the form $p-1$ are not "average", one still expects that $\phi(p-1)$ is a not-very-small fraction of $p-1$. You should not have to look very far if you want to find a primitive root.
The more difficult part, in practice, is determining whether a number $b$ is or is not a primitive root modulo $p$. When $p$ is very large (like hundreds or thousands of digits), $p-1$ is also very large. It is certainly not practical to cycle all the powers (from $1$ to $p-1$) of $b$ to determine whether $b$ is a primitive root!
The better approach, sometimes, is to use the fact that the multiplicative order of $b$ must be a divisor of $p-1$. If one can find all the divisors of $p-1$, then one can just check whether $b^d \equiv 1$ mod $p$ for each divisor $d$. This makes the problem of determining whether $b$ is a primitive root just about as hard as the problem of factoring $p-1$. This is a hard problem, in general!
But, for the application we're interested in, we will want to have a large prime number $p$ and a primitive root mod $p$. The easiest way to do this is to use a Sophie Germain prime $q$. A Sophie Germain prime is a prime number $q$ such that $2q + 1$ is also prime. When $q$ is a Sophie Germain prime, the resulting prime $p = 2q + 1$ is called a safe prime.
Observe that when $p$ is a safe prime, the prime decomposition of $p-1$ is
$$p-1 = 2 \cdot q.$$
That's it. So the possible multiplicative orders of an element $b$, mod $p$, are the divisors of $2q$, which are
$$1, 2, q, \text{ or } 2q.$$
In order to check whether $b$ is a primitive root, modulo a safe prime $p = 2q + 1$, we must check just three things: is $b \equiv 1$, is $b^2 \equiv 1$, or is $b^q \equiv 1$, mod $p$? If the answer to these three questions is NO, then $b$ is a primitive root mod $p$.
End of explanation
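A small example with the safe prime p = 23 (so q = 11) illustrates the three checks: 2 fails because 2^11 = 2048 is congruent to 1 mod 23, while 5 passes all three checks and is therefore a primitive root.
print(is_primroot_safe(2, 23))  # False
print(is_primroot_safe(5, 23))  # True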
from random import randint # randint chooses random integers.
def Miller_Rabin(p, base):
'''
Tests whether p is prime, using the given base.
The result False implies that p is definitely not prime.
The result True implies that p **might** be prime.
It is not a perfect test!
'''
result = 1
exponent = p-1
modulus = p
bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent
for bit in bitstring: # Iterates through the "letters" of the string. Here the letters are '0' or '1'.
sq_result = result*result % modulus # We need to compute this in any case.
if sq_result == 1:
if (result != 1) and (result != exponent): # Note that exponent is congruent to -1, mod p.
return False # a ROO violation occurred, so p is not prime
if bit == '0':
result = sq_result
if bit == '1':
result = (sq_result * base) % modulus
if result != 1:
return False # a FLT violation occurred, so p is not prime.
return True # If we made it this far, no violation occurred and p might be prime.
def is_prime(p, witnesses=50): # witnesses is a parameter with a default value.
'''
Tests whether a positive integer p is prime.
For p < 2^64, the test is deterministic, using known good witnesses.
Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test,
based on research by Pomerance, Selfridge and Wagstaff, Jaeschke, Jiang and Deng.
For larger p, a number (by default, 50) of witnesses are chosen at random.
'''
if (p%2 == 0): # Might as well take care of even numbers at the outset!
if p == 2:
return True
else:
return False
if p > 2**64: # We use the probabilistic test for large p.
trial = 0
while trial < witnesses:
trial = trial + 1
witness = randint(2,p-2) # A good range for possible witnesses
if Miller_Rabin(p,witness) == False:
return False
return True
    else: # We use a deterministic test for p <= 2**64.
verdict = Miller_Rabin(p,2)
if p < 2047:
return verdict # The witness 2 suffices.
verdict = verdict and Miller_Rabin(p,3)
if p < 1373653:
return verdict # The witnesses 2 and 3 suffice.
verdict = verdict and Miller_Rabin(p,5)
if p < 25326001:
return verdict # The witnesses 2,3,5 suffice.
verdict = verdict and Miller_Rabin(p,7)
if p < 3215031751:
return verdict # The witnesses 2,3,5,7 suffice.
verdict = verdict and Miller_Rabin(p,11)
if p < 2152302898747:
return verdict # The witnesses 2,3,5,7,11 suffice.
verdict = verdict and Miller_Rabin(p,13)
if p < 3474749660383:
return verdict # The witnesses 2,3,5,7,11,13 suffice.
verdict = verdict and Miller_Rabin(p,17)
if p < 341550071728321:
            return verdict # The witnesses 2,3,5,7,11,13,17 suffice.
verdict = verdict and Miller_Rabin(p,19) and Miller_Rabin(p,23)
if p < 3825123056546413051:
            return verdict # The witnesses 2,3,5,7,11,13,17,19,23 suffice.
verdict = verdict and Miller_Rabin(p,29) and Miller_Rabin(p,31) and Miller_Rabin(p,37)
        return verdict # The witnesses 2,3,5,7,11,13,17,19,23,29,31,37 suffice for testing up to 2^64.
def is_SGprime(p):
'''
Tests whether p is a Sophie Germain prime
'''
    if is_prime(p): # A bit faster to check whether p is prime first.
        if is_prime(2*p + 1): # and *then* check whether 2p+1 is prime.
            return True
    return False # Otherwise p is not a Sophie Germain prime.
Explanation: This would not be very useful if we couldn't find Sophie Germain primes. Fortunately, they are not so rare. The first few are 2, 3, 5, 11, 23, 29, 41, 53, 83, 89. It is expected, but unproven that there are infinitely many Sophie Germain primes. In practice, they occur fairly often. If we consider numbers of magnitude $N$, about $1 / \log(N)$ of them are prime. Among such primes, we expect about $1.3 / \log(N)$ to be Sophie Germain primes. In this way, we can expect to stumble upon Sophie Germain primes if we search for a bit (and if $\log(N)^2$ is not too large).
The code below tests whether a number $p$ is a Sophie Germain prime. We construct it by simply testing whether $p$ and $2p+1$ are both prime. We use the Miller-Rabin test (the code from the previous Python notebook) in order to test whether each is prime.
End of explanation
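To get a rough feel for the heuristic above: near N = 10^100 the expected density of Sophie Germain primes is about 1.3 / log(N)^2, so on average we expect to test a few tens of thousands of candidates before finding one.
import math
logN = math.log(10**100)
print(logN**2 / 1.3)  # roughly 41,000 candidates on average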
for j in range(1,100):
if is_SGprime(j):
print j, 2*j+1
Explanation: Let's test this out by finding the Sophie Germain primes up to 100, and their associated safe primes.
End of explanation
test_number = 10**99 # Start looking at the first 100-digit number, which is 10^99.
while not is_SGprime(test_number):
test_number = test_number + 1
print test_number
Explanation: Next, we find the first 100-digit Sophie Germain prime! This might take a minute!
End of explanation
from random import SystemRandom # Import the necessary package.
r = SystemRandom().getrandbits(256)
print "The random integer is ",r
print "with binary expansion",bin(r) # r is an integer constructed from 256 random bits.
print "with bit-length ",len(bin(r)) - 2 # In case you want to check. Remember '0b' is at the beginning.
def getrandSGprime(bitlength):
'''
Creates a random Sophie Germain prime p with about
bitlength bits.
'''
while True:
p = SystemRandom().getrandbits(bitlength) # Choose a really random number.
if is_SGprime(p):
return p
Explanation: In the seconds or minutes your computer was running, it checked the primality of almost 90 thousand numbers, each with 100 digits. Not bad!
The Diffie-Hellman protocol
When we study protocols for secure communication, we must keep track of the communicating parties (often called Alice and Bob), and who has knowledge of what information. We assume at all times that the "wire" between Alice and Bob is tapped -- anything they say to each other is actively monitored, and is therefore public knowledge. We also assume that what happens on Alice's private computer is private to Alice, and what happens on Bob's private computer is private to Bob. Of course, these last two assumptions are big assumptions -- they point towards the danger of computer viruses which infect computers and can violate such privacy!
The goal of the Diffie-Hellman protocol is -- at the end of the process -- for Alice and Bob to share a secret without ever having communicated the secret with each other. The process involves a series of modular arithmetic calculations performed on each of Alice and Bob's computers.
The process begins when Alice or Bob creates and publicizes a large prime number p and a primitive root g modulo p. It is best, for efficiency and security, to choose a safe prime p. Alice and Bob can create their own safe prime, or choose one from a public list online, e.g., from the RFC 3526 memo. Nowadays, it's common to take p with 2048 bits, i.e., a prime which is between $2^{2047}$ and $2^{2048}$ (a number with 617 decimal digits!).
For the purposes of this introduction, we use a smaller safe prime, with about 256 bits. We use the SystemRandom functionality of the random package to create a good random prime. It is not so much of an issue here, but in general one must be very careful in cryptography that one's "random" numbers are really "random"! The SystemRandom function uses chaotic properties of your computer's innards in order to initialize a random number generator, and is considered cryptographically secure.
End of explanation
q = getrandSGprime(256) # A random ~256 bit Sophie Germain prime
p = 2*q + 1 # And its associated safe prime
print "p is ",p # Just to see what we're working with.
print "q is ",q
Explanation: The function above searches and searches among random numbers until it finds a Sophie Germain prime. The (possibly endless!) search is performed with a while True: loop that may look strange. The idea is to stay in the loop until such a prime is found. Then the return p command returns the found prime as output and halts the loop. One must be careful with while True loops, since they are structured to run forever -- if there's not a loop-breaking command like return or break inside the loop, your computer will be spinning for a long time.
End of explanation
def findprimroot_safe(p):
'''
Finds a primitive root,
modulo a safe prime p.
'''
b = 2 # Start trying with 2.
while True: # We just keep on looking.
if is_primroot_safe(b,p):
return b
b = b + 1 # Try the next base. Shouldn't take too long to find one!
g = findprimroot_safe(p)
print g
Explanation: Next we find a primitive root, modulo the safe prime p.
End of explanation
a = SystemRandom().getrandbits(256) # Alice's secret number
b = SystemRandom().getrandbits(256) # Bob's secret number
print "Only Alice should know that a = %d"%(a)
print "Only Bob should know that b = %d"%(b)
print "But everyone can know p = %d and g = %d"%(p,g)
Explanation: The pair of numbers $(g, p)$, the primitive root and the safe prime, chosen by either Alice or Bob, is now made public. They can post their $g$ and $p$ on a public website or shout it in the streets. It doesn't matter. They are just tools for their secret-creation algorithm below.
Alice and Bob's private secrets
Next, Alice and Bob invent private secret numbers $a$ and $b$. They do not tell anyone these numbers. Not each other. Not their family. Nobody. They don't write them on a chalkboard, or leave them on a thumbdrive that they lose. These are really secret.
But they don't use their phone numbers, or social security numbers. It's best for Alice and Bob to use a secure random number generator on their separate private computers to create $a$ and $b$. They are often 256 bit numbers in practice, so that's what we use below.
End of explanation
A = pow(g,a,p) # This would be computed on Alice's computer.
B = pow(g,b,p) # This would be computed on Bob's computer.
Explanation: Now Alice and Bob use their secrets to generate new numbers. Alice computes the number
$$A = g^a \text{ mod } p,$$
and Bob computes the number
$$B = g^b \text{ mod } p.$$
End of explanation
print "Everyone knows A = %d and B = %d."%(A,B)
Explanation: Now Alice and Bob do something that seems very strange at first. Alice sends Bob her new number $A$ and Bob sends Alice his new number $B$. Since they are far apart, and the channel is insecure, we can assume everyone in the world now knows $A$ and $B$.
End of explanation
print pow(B,a,p) # This is what Alice computes.
print pow(A,b,p) # This is what Bob computes.
Explanation: Now Alice, on her private computer, computes $B^a$ mod $p$. She can do that because everyone knows $B$ and $p$, and she knows $a$ too.
Similarly, Bob, on his private computer, computes $A^b$ mod $p$. He can do that because everyone knows $A$ and $p$, and he knows $b$ too.
Alice and Bob do not share the results of their computations!
End of explanation
S = pow(B,a,p) # Or we could have used pow(A,b,p)
print S
Explanation: Woah! What happened? In terms of exponents, it's elementary:
$$B^a = (g^{b})^a = g^{ba} = g^{ab} = (g^a)^b = A^b.$$
So these two computations yield the same result (mod $p$, the whole way through).
In the end, we find that Alice and Bob share a secret. We call this secret number $S$.
$$S = B^a = A^b.$$
End of explanation
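The same identity can be followed by hand with tiny (completely insecure!) numbers, for instance the safe prime p = 23 with primitive root g = 5 and made-up secrets a = 6 and b = 15.
p_toy, g_toy = 23, 5
a_toy, b_toy = 6, 15
A_toy = pow(g_toy, a_toy, p_toy)  # 8, made public
B_toy = pow(g_toy, b_toy, p_toy)  # 19, made public
print(pow(B_toy, a_toy, p_toy))   # 2 -- Alice's computation
print(pow(A_toy, b_toy, p_toy))   # 2 -- Bob's computation, the same shared secret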
# We use the single-quotes for a long string, that occupies multiple lines.
# The backslash at the end of the line tells Python to ignore the newline character.
# Imagine that Alice has a secret message she wants to send to Bob.
# She writes the plaintext on her computer.
plaintext = '''Did you hear that the American Mathematical Society has an annual textbook sale? \
It's 40 percent off for members and 25 percent off for everyone else.'''
# Now Alice uses the secret S (as a string) to encrypt.
ciphertext = Vigenere_cipher(plaintext, str(S))
print ciphertext
# Alice sends the following ciphertext to Bob, over an insecure channel.
# When Bob receives the ciphertext, he decodes it with the secret S again.
print Vigenere_decipher(ciphertext, str(S))
Explanation: This common secret $S$ can be used as a key for Alice and Bob to communicate hereafter. For example, they might use $S$ (converted to a string, if needed) as the key for a Vigenère cipher, and chat with each other knowing that only they have the secret key to encrypt and decrypt their messages.
End of explanation |
10,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 1
Imports
Step1: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='graphie.JPG')
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation |
10,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Testing and CI
The notebook contains problems about code testing and continuous integration with Travis CI.
Original by E Tollerud 2017 for LSSTC DSFP Session3 and AstroHackWeek, modified by B Sipocz
Problem 1
Step1: 1b
Step2: 1d
Step3: 1e
Step4: 1f
Step5: 1g
Step6: This should yield a report, which you can use to decide if you need to add more tests to achieve complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Problem 2
Step7: 2b
This code has an intentional bug... but depending on how you write the test you might not catch it... Use unit tests to find it! (and then fix it...)
Step9: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function
Step11: 2d
Hint
Step12: Problem 3
Step13: 3b
Step14: Be sure to commit and push this to github before proceeding | Python Code:
!conda install pytest pytest-cov
Explanation: Code Testing and CI
The notebook contains problems about code testing and continuous integration with Travis CI.
Original by E Tollerud 2017 for LSSTC DSFP Session3 and AstroHackWeek, modified by B Sipocz
Problem 1: Set up py.test in your repo
In this problem we'll aim to get the py.test testing framework up and running in the code repository you set up in the last set of problems. We can then use it to collect and run tests of the code.
1a: Ensure py.test is installed
Of course py.test must actually be installed before you can use it. The commands below should work for the Anaconda Python Distribution, but if you have some other Python installation you'll want to install pytest (and its coverage plugin) as directed in the install instructions for py.test.
End of explanation
!mkdir #complete
!touch #complete
%%file <yourpackage>/tests/test_something.py
def test_something_func():
assert #complete
Explanation: 1b: Ensure your repo has code suitable for unit tests
Depending on what your code actually does, you might need to modify it to actually perform something testable. For example, if all it does is print something, you might find it difficult to write an effective unit test. Try adding a function that actually performs some operation and returns something different depending on various inputs. That tends to be the easiest function to unit-test: one with a clear "right" answer in certain situations.
Also be sure you have cded to the root of the repo for pytest to operate correctly.
1c: Add a test file with a test function
The test must be part of the package and follow the convention that the file and the function begin with test to get picked up by the test collection machinery. Inside the test function, you'll need some code that fails if the test condition fails. The easiest way to do this is with an assert statement, which raises an error if its first argument is False.
Hint: remember that to be a valid python package, a directory must have an __init__.py
End of explanation
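To make this concrete, here is a toy version of what a completed test might look like; the function name add_one and its behaviour are made up purely for illustration -- your package's own functions go here instead.
def add_one(x):
    return x + 1

def test_add_one():
    assert add_one(1) == 2   # passes silently
    assert add_one(-1) == 0  # raises AssertionError if the function is wrong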
from <yourpackage>.tests import test_something
test_something.test_something_func()
Explanation: 1d: Run the test directly
While this is not how you'd ordinarily run the tests, it's instructive to first try to execute the test directly, without using any fancy test framework. If your test function just runs, all is good. If you get an exception, the test failed (which in this case might be good).
Hint: you may need to use reload or just re-start your notebook kernel to get the cell below to recognize the changes.
End of explanation
!py.test
Explanation: 1e: Run the tests with py.test
Once you have an example test, you can try invoking py.test, which is how you should run the tests in the future. This should yield a report that shows a dot for each test. If all you see are dots, the tests ran sucessfully. But if there's a failure, you'll see the error, and the traceback showing where the error happened.
End of explanation
!py.test
Explanation: 1f: Make the test fail (or succeed...)
If your test failed when you ran it, you should now try to fix the test (or the code...) to make it work. Try running
(Modify your test to fail if it succeeded before, or vice versa)
End of explanation
!py.test --cov=<yourproject> tests/ #complete
Explanation: 1g: Check coverage
The coverage plugin we installed will let you check which lines of your code are actually run by the testing suite.
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
# `math` here is for *scalar* math... normally you'd use numpy but this makes it a bit simpler to debug
import math
inf = float('inf') # this is a quick-and-easy way to get the "infinity" value
def function_a(angle=180):
anglerad = math.radians(angle)
return math.sin(anglerad/2)/math.sin(anglerad)
Explanation: This should yield a report, which you can use to decide if you need to add more tests to achieve complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Problem 2: Implement some unit tests
The sub-problems below each contain different unit testing complications. Place the code from the snippets in your repository (either using an editor or the %%file trick), and write tests to ensure the correctness of the functions. Try to achieve 100% coverage for all of them (especially to catch some hidden bugs!).
Also, note that some of these examples are not really practical - that is, you wouldn't want to do this in real code because there's better ways to do it. But because of that, they are good examples of where something can go subtly wrong... and therefore where you want to make tests!
2a
When you have a function with a default, it's wise to test both the with-default call (function_a()) and a call where you give a value (function_a(1.2)).
Hint: Beware of numbers that come close to 0... write your tests to accommodate floating-point errors!
End of explanation
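As a hedged sketch (not the official solution), here is one way such tests could look for function_a as defined above; pytest.approx takes care of the floating-point comparison the hint warns about.
import math
import pytest

def test_function_a_with_value():
    # sin(45 degrees) / sin(90 degrees) should be sqrt(2)/2
    assert function_a(90) == pytest.approx(math.sqrt(2) / 2)

def test_function_a_default():
    # The default angle of 180 degrees divides by sin(pi), which is only *close* to zero
    # in floating point -- run this and decide what behaviour you want to require.
    function_a()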
#%%file <yourproject>/<filename>.py #complete, or just use your editor
def function_b(value):
if value < 0:
return value - 1
else:
value2 = subfunction_b(value + 1)
return value + value2
def subfunction_b(inp):
vals_to_accum = []
for i in range(10):
vals_to_accum.append(inp ** (i/10))
if vals_to_accum[-1] > 2:
vals.append(100)
# really you would use numpy to do this kind of number-crunching... but we're doing this for the sake of example right now
return sum(vals_to_accum)
Explanation: 2b
This code has an intentional bug... but depending on how you write the test you might not catch it... Use unit tests to find it! (and then fix it...)
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import math
# know that to not have to worry about this, you should just use `astropy.coordinates`.
def angle_to_sexigesimal(angle_in_degrees, decimals=3):
    """
    Convert the given angle to a sexigesimal string of hours of RA.

    Parameters
    ----------
    angle_in_degrees : float
        A scalar angle, expressed in degrees

    Returns
    -------
    hms_str : str
        The sexigesimal string giving the hours, minutes, and seconds of RA for the given `angle_in_degrees`
    """
if math.floor(decimals) != decimals:
raise ValueError('decimals should be an integer!')
hours_num = angle_in_degrees*24/180
hours = math.floor(hours_num)
min_num = (hours_num - hours)*60
minutes = math.floor(min_num)
seconds = (min_num - minutes)*60
format_string = '{}:{}:{:.' + str(decimals) + 'f}'
return format_string.format(hours, minutes, seconds)
Explanation: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function: in real code you're probably better off just using the Angle object from astropy.coordinates. But this example demonstrates one of the reasons why that was created, as it's very easy to write a buggy version of this code.
Hint: you might find it useful to use astropy.coordinates.Angle to create test cases...
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import numpy as np
def function_d(array1=np.arange(10)*2, array2=np.arange(10), operation='-'):
    """Makes a matrix where the [i,j]th element is array1[i] <operation> array2[j]"""
if operation == '+':
return array1[:, np.newaxis] + array2
elif operation == '-':
return array1[:, np.newaxis] - array2
elif operation == '*':
return array1[:, np.newaxis] * array2
elif operation == '/':
return array1[:, np.newaxis] / array2
else:
raise ValueError('Unrecognized operation "{}"'.format(operation))
Explanation: 2d
Hint: numpy has some useful functions in numpy.testing for comparing arrays.
End of explanation
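One possible test for function_d (as defined above), using numpy.testing as the hint suggests; the small input arrays and expected matrix here are just an illustrative choice.
import numpy as np
from numpy.testing import assert_array_equal

def test_function_d_subtraction():
    a1 = np.array([0, 2, 4])
    a2 = np.array([0, 1, 2])
    expected = np.array([[0, -1, -2],
                         [2,  1,  0],
                         [4,  3,  2]])
    assert_array_equal(function_d(a1, a2, operation='-'), expected)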
!py.test
Explanation: Problem 3: Set up travis to run your tests whenever a change is made
Now that you have a testing suite set up, you can try to turn on a continuous integration service to constantly check that any update you might send doesn't create a bug. We will the Travis-CI service for this purpose, as it has one of the lowest barriers to entry from Github.
3a: Ensure the test suite is passing locally
Seems obvious, but it's easy to forget to check this and only later realize that all the trouble you thought you had setting up the CI service was because the tests were actually broken...
End of explanation
%%file .travis.yml
language: python
python:
- "3.6"
# command to install dependencies
#install: "pip install numpy" #uncomment this if your code depends on numpy or similar
# command to run tests
script: pytest
Explanation: 3b: Set up an account on travis
This turns out to be quite convenient. If you go to the Travis web site, you'll see a "Sign in with GitHub" button. You'll need to authorize Travis, but once you've done so it will automatically log you in and know which repositories are yours.
3c: Create a minimal .travis.yml file.
Before we can activate travis on our repo, we need to tell travis a variety of metadata about what's in the repository and how to run it. The template below should be sufficient for the simplest needs.
End of explanation
!git #complete
Explanation: Be sure to commit and push this to github before proceeding:
End of explanation |
10,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make me an Image Analyst already!
In the last lesson you learnt the basics of Python. You learnt what variables are, how loops function and how to get a list of our filenames. In this lesson we will learn how to read and display images, followed by how to identify objects of interest in the image.
Since the information we generated in the other notebook can't be shared with this notebook, let's regenerate our file list of images. We didn't mention in the last lesson that it is customary to put all the import statements at the top of a program, although this isn't a requirement it helps to keep the code organized.
Step1: Reading Images
In the last cell we printed the name of the first image file in the Type1 directory. We will use this as our test file for now.
To read images, Anaconda comes with the scikit-image package, which has many useful functions for image analysis. Among them are the functions imread() and imshow(); these read and display the image, respectively. As with os and glob, the scikit-image package has to be imported, but in this case we won't be using many functions from the package, so we will only import the imread() and imshow() functions.
you will notice a strange looking line
Step2: If you open the corresponding image in Fiji you will see that its displayed in grayscale, while here it looks blue-green. This is because the imshow() function applies something called as a lookup table (LUT) to help our eyes see the high and low intensities better. If you would like to still view it as a grey image we can change the display parameters as below
Step3: Above, matplotlib is the standard plotting library in python. Matplotlib has a function called as colormap which corresponds to the cm that we import. We tell imshow to use the cm.gray colormap to display our image in grayscale. You can also use other colormaps, go ahead and experiment if you wish.
Now that I have the Image, what do i do?
Congrats! You have successfully read and displayed an image! If you are from a bio-background you will identify the little gray blobs as nuclei of cells. For those of you not from a bio-background nuclei is where cells store their DNA. We acquired these images on a microscope by applying a fluorescent dye specific to the DNA and capturing the image with a microscope.
Surely this isn't enough. We would like to measure things about the nucleus. Many things can be measured, but for the purposes of this exercise we will limit ourselves to measuring the area and the length of the major axis.
But... we are jumping ahead. Although we can see the nuclei, the computer doesn't know what you are talking about; all it sees is numbers. Don't believe me?
Try printing the variable that holds our image
Step4: And, that's what an image really is. Its just a series of numbers which are interpreted by the computer and displayed as intensities, which our eyes then interpret as images. Nothing particularly fancy.
However, images being just numbers gives us a big advantage. We can do mathematical operations on them. In our images for example the nuclei are bright compared to the background. Maybe this can be used to tell the computer where the nuclei are?
GIVE ME ALLLLL THE BRIGHT THINGS
How will you figure out what you count as bright? One way is to open the image in ImageJ and see what the approximate intensities of images are. We will try a little dumber method here, but learn a few things about arrays along the way.
Like lists arrays are a kind of object in python. They are a lot more powerful than lists when it comes to manipulation but come with some drawbacks. Usually we end up using a list, an array, or something else as the situation demands.
Arrays aren't a default part of Python, but are introduced by the Numpy package. The next line is one that you will have at the top of nearly every program you write
Step5: We have just imported numpy but decided to refer to it as np. You can replace 'np' with anything you like but np is the standard used everywhere, so I suggest you stick to it. This helps in things like copy-pasting code from the internet ;)
let's declare a small numpy array and play with it a little bit.
Step6: You might notice that the object passed to the np.array() function is a list. The np.array() function takes the list and converts it to an array. You know it's an array because it says 'array' when its printed. This is the same thing that was printed when we called our image variable. So, we can safely say that our image is also an array. Anything that we do with this small array here can also be done to our bigger image array. In fact we can even treat our little array as an image and see how it looks
Step7: With the image we would like to find all the pixels that are bright. In the case of our small array, we can decide that everything bigger than 1 is a bright pixel. Using this as our criterion, we can ask the computer to identify the pixels for us.
How you ask? Remember the booleans from the last lesson?
Step8: We get a boolean array which is full of true and false values corresponding to our question
Step9: Otsu's method identifies '560' as being a good threshold for our image to identify the brighter objects and it seems to do a good job!
The next step is to get the area of the first nucleus.... but which is the first nucleus??
Tag 'em and bag 'em
How do we decide which the first nucleus to be analysed should be? There is no inherent numbering and if you look at the bw_img its just true/false values. One option is to number all the objects based on if they are connected to other pixels of the same value. This is called labeling. scikit image has a function for this. The label() function takes in a black and white image and returns an image where each object has been given a number, which is interpreted as an intensity by the imshow function.
Step10: Now we have a way to determine the order in which objects are measured. The function to measure objects is called regionprops(). This function takes in the labelled image and the intensity image as an input and returns an object which has measurements of each of the objects.
Step11: Unlike the other data types which we have seen so far regionprops gives us a list of named properties these can be accessed as below
Step12: Note
Step13: We can also use list comprehension to collect all the measured areas into a single list. | Python Code:
import os
import glob
root_root = '/home/aneesh/Images/Source/'
dir_of_root = os.listdir(root_root)
file_paths = [glob.glob(os.path.join(root_root,dor, '*.tif')) for dor in dir_of_root]
print(file_paths[0][0])
Explanation: Make me an Image Analyst already!
In the last lesson you learnt the basics of Python. You learnt what variables are, how loops function and how to get a list of our filenames. In this lesson we will learn how to read and display images, followed by how to identify objects of interest in the image.
Since the information we generated in the other notebook can't be shared with this notebook, let's regenerate our file list of images. We didn't mention in the last lesson that it is customary to put all the import statements at the top of a program, although this isn't a requirement it helps to keep the code organized.
End of explanation
from skimage.io import imread, imshow
%matplotlib inline
in_img = imread(file_paths[0][0])
imshow(in_img)
Explanation: Reading Images
In the last cell we printed the name of the first image file in the Type1 directory. We will use this as our test file for now.
To read images anaconda comes with the sci-kit image package, which has many useful functions for image analysis. Among them are the functions imread() and imshow() these readn and display the image respectively. As with os and glob the scikit image package has to be imported, but in this case we won't be using many functions from the package so we will only import the imread() and imshow() functions.
You will notice a strange-looking line: %matplotlib inline. This is actually meant for the Jupyter notebook; it tells the notebook to display the images within the page.
End of explanation
import matplotlib.cm as cm
imshow(in_img, cmap=cm.gray)
Explanation: If you open the corresponding image in Fiji you will see that it's displayed in grayscale, while here it looks blue-green. This is because the imshow() function applies something called a lookup table (LUT) to help our eyes see the high and low intensities better. If you would still like to view it as a grey image, we can change the display parameters as below:
End of explanation
in_img
Explanation: Above, matplotlib is the standard plotting library in Python. Matplotlib has a collection of colormaps, which corresponds to the cm module that we import. We tell imshow to use the cm.gray colormap to display our image in grayscale. You can also use other colormaps, so go ahead and experiment if you wish.
Now that I have the Image, what do i do?
Congrats! You have successfully read and displayed an image! If you are from a bio-background you will identify the little gray blobs as nuclei of cells. For those of you not from a bio-background, the nucleus is where a cell stores its DNA. We acquired these images by applying a fluorescent dye specific to the DNA and capturing the result with a microscope.
Surely this isn't enough. We would like to measure things about the nucleus. Many things can be measured, but for the purposes of this exercise we will limit ourselves to measuring the area and the length of the major axis.
But... we are jumping ahead. Although we can see the nuclei, the computer doesn't know what you are talking about; all it sees is numbers. Don't believe me?
Try printing the variable that holds our image:
End of explanation
import numpy as np
Explanation: And, that's what an image really is. It's just a series of numbers which are interpreted by the computer and displayed as intensities, which our eyes then interpret as images. Nothing particularly fancy.
However, images being just numbers gives us a big advantage. We can do mathematical operations on them. In our images for example the nuclei are bright compared to the background. Maybe this can be used to tell the computer where the nuclei are?
GIVE ME ALLLLL THE BRIGHT THINGS
How will you figure out what you count as bright? One way is to open the image in ImageJ and see what the approximate intensities of the images are. We will try a little dumber method here, but learn a few things about arrays along the way.
Like lists, arrays are a kind of object in Python. They are a lot more powerful than lists when it comes to manipulation but come with some drawbacks. Usually we end up using a list, an array, or something else as the situation demands.
Arrays aren't a default part of Python, but are introduced by the Numpy package. The next line is one that you will have at the top of nearly every program you write:
End of explanation
myarray = np.array([[1,2,3],
[1,2,3],
[1,2,3]])
myarray
Explanation: We have just imported numpy but decided to refer to it as np. You can replace 'np' with anything you like but np is the standard used everywhere, so I suggest you stick to it. This helps in things like copy-pasting code from the internet ;)
let's declare a small numpy array and play with it a little bit.
End of explanation
imshow(myarray, cmap=cm.gray)
Explanation: You might notice that the object passed to the np.array() function is a list. The np.array() function takes the list and converts it to an array. You know it's an array because it says 'array' when it's printed. This is the same thing that was printed when we called our image variable. So, we can safely say that our image is also an array. Anything that we do with this small array here can also be done to our bigger image array. In fact we can even treat our little array as an image and see how it looks:
End of explanation
myarray>1
Explanation: With the image we would like to find all the pixels that are bright. In the case of our small array, we can decide that everything bigger than 1 is a bright pixel. Using this as our criterion, we can ask the computer to identify the pixels for us.
How you ask? Remember the booleans from the last lesson?
End of explanation
from skimage.filters import threshold_otsu
my_array_thresh = threshold_otsu(myarray)
print(my_array_thresh)
print(myarray>my_array_thresh)
imshow(myarray>my_array_thresh)
img_thresh = threshold_otsu(in_img)
print(img_thresh)
bw_img = in_img>img_thresh
imshow(bw_img)
Explanation: We get a boolean array which is full of true and false values corresponding to our question: is this number bigger than 1?
Boolean arrays have some interesting properties, one of which is that you can multiply them with regular arrays. The True values are treated as 1 and the False as 0. We will not go into that in detail right now, but a short illustration follows after this cell.
Right now we'd like to know of a good method to identify our nuclei.
Otsu: the man on the threshold.
In the last section, when we asked the computer which values are greater than 1, what we did was set the threshold as '1'. But this isn't always straightforward, especially with images where the intensities can vary a lot. This means that if you painfully find the threshold for one image there is no guarantee that it will work for the next image. What we need to do is to use the characteristics of the image to determine what the appropriate threshold is.
Otsu's method is a very effective way of doing this. I won't go into the details of the method but I highly recommend reading about it: https://en.wikipedia.org/wiki/Otsu%27s_method
The nice people behind the scikit-image package have already made a function to determine the Otsu threshold, as shown below. This can also be applied to our image.
End of explanation
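A quick aside (a small sketch added here, not part of the original lesson): the boolean-array multiplication mentioned above can be seen directly with the arrays we already have. True acts as 1 and False as 0, so multiplying the mask by the array keeps only the bright values.
# Sketch: multiply the boolean mask with the array it came from
masked = myarray * (myarray > 1)
print(masked)
# The same idea applied to the full image keeps only the pixels above the Otsu threshold
masked_img = in_img * bw_img
imshow(masked_img)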
from skimage.measure import label
label_im = label(bw_img)
imshow(label_im)
Explanation: Otsu's method identifies '560' as being a good threshold for our image to identify the brighter objects and it seems to do a good job!
The next step is to get the area of the first nucleus.... but which is the first nucleus??
Tag 'em and bag 'em
How do we decide which the first nucleus to be analysed should be? There is no inherent numbering and if you look at the bw_img it's just true/false values. One option is to number all the objects based on whether they are connected to other pixels of the same value. This is called labeling. scikit-image has a function for this. The label() function takes in a black and white image and returns an image where each object has been given a number, which is interpreted as an intensity by the imshow function.
End of explanation
from skimage.measure import regionprops
r_props = regionprops(label_im, in_img)
# print the length of r_props to determine the number of objects measured
len(r_props)
Explanation: Now we have a way to determine the order in which objects are measured. The function to measure objects is called regionprops(). This function takes in the labelled image and the intensity image as an input and returns an object which has measurements of each of the objects.
End of explanation
# Area for the first object
print(r_props[0].area)
# Area for the second object
print(r_props[1].area)
# Area for the third object
print(r_props[2].area)
Explanation: Unlike the other data types which we have seen so far, regionprops gives us a list of named properties; these can be accessed as below:
End of explanation
help(regionprops)
Explanation: Note: regionprops returns many properties; a full list of the properties is available in the help file. The help for any imported function can be accessed by running the help(regionprops) command.
End of explanation
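For instance (a small illustrative sketch, not from the original notebook), a few of the other commonly used properties can be read the same way as area; the attribute names below are the standard ones documented for skimage.measure.regionprops.
# A couple of other named properties, accessed the same way as .area
print(r_props[0].centroid)
print(r_props[0].major_axis_length)
print(r_props[0].mean_intensity)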
all_areas = [rp.area for rp in r_props]
print(all_areas)
Explanation: We can also use list comprehension to collect all the measured areas into a single list.
End of explanation |
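Since the stated goal earlier was to measure both the area and the length of the major axis, the same list-comprehension pattern can be reused for the second measurement. This is a small sketch added for illustration, assuming the standard major_axis_length property of regionprops.
# Sketch: collect the major axis length of every object, mirroring the area example
all_major_axes = [rp.major_axis_length for rp in r_props]
print(all_major_axes)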
10,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$f_{2}(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}-\gamma cp-\dfrac{1}{2}ec^{2}p^{2}-Ep$
Step1: Rescaling | Python Code:
f2 = ((1/2)*r_c*c**2+(1/4)*u_c*c**4+(1/6)*v_c*c**6+(1/2)*r_p*p**2+(1/4)*u_p*p**4-E*p-gamma*c*p-e*c**2*p**2/2)
nsimplify(f2)
Explanation: $f_{2}(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}-\gamma cp-\dfrac{1}{2}ec^{2}p^{2}-Ep$
End of explanation
P, C, w, rho, beta, tau, Epr = symbols('P C w rho beta tau E^{\prime}')
fP = nsimplify(f2.subs(p,w*Pprm))
fP
fP = nsimplify(fP.subs(w,sqrt(r_p/u_p)))
fP
fP = expand(fP/(r_p**2/u_p),r_p)
fP
fP = fP.subs([(E,Epr/sqrt(u_p)*r_p**(3/2)),(c,cprm*sqrt(r_p/e))])
fP
fP = fP.subs([(cprm,thetaprm),(r_c,rtheta*e*r_p/u_p),(u_c,gammaprm*e**2/u_p),(v_c,rhoprm*e**3/(u_p*r_p)),(gamma,betaprm*r_p*sqrt(e/u_p))])
fP
fhelp = nsimplify((1/2)*rtheta*thetaprm**2+(1/4)*gammaprm*thetaprm**4+(1/6)*rhoprm*thetaprm**6+(1/2)*Pprm**2+(1/4)*Pprm**4
+(1/6)*sigmaprm*Pprm**6-betaprm*thetaprm*Pprm-(1/2)*Pprm**2*thetaprm**2-Eprm*Pprm)
fhelp
Ep = solve(fhelp.diff(Pprm),Eprm)[0]
Ep
fhelp.diff(thetaprm)
P_min = solve(fhelp.diff(thetaprm),Pprm)[0]
P_min
Ep = Ep.subs(Pprm,P_min)
Ep
series(Ep,thetaprm,n=7)
# Series expansion of $E$ out to $\mathcal{O(\theta^{\prime 7})}$:
# $\beta E = \theta^{\prime}\left(-\beta^{\prime 2} + r_{\theta}^{\prime}\right) + \theta^{\prime 3}\left(\gamma^{\prime} - r_{\theta}^{\prime} + \dfrac{r_{\theta}^{\prime 3}}{\beta^{\prime 2}} - \dfrac{r_{\theta}^{\prime 2}}{\beta^{\prime 2}}\right) + \theta^{\prime 5}\left(-\gamma^{\prime} + \rho^{\prime} + \dfrac{3\gamma^{\prime}r_{\theta}^{\prime 2}}{\beta^{\prime 2}} - \dfrac{2\gamma^{\prime}r_{\theta}}{\beta^{\prime 2}} + \dfrac{r_{\theta}^{\prime 2}}{\beta^{\prime 2}} - \dfrac{3r_{\theta}^{\prime 4}}{\beta^{\prime 4}} + \dfrac{2r_{\theta}^{\prime 3}}{\beta^{\prime 4}}\right)$
# The coefficients of $E$ i.t.o. $a$ where $r_{\theta}^{\prime} = \beta^{\prime 2} + a$:
# $B(a) = \dfrac{a^{3}}{\beta^{\prime 2}} + a^{2}\left(3 - \dfrac{1}{\beta^{\prime 2}}\right) + a(3\beta^{\prime 2} - 3) + \beta^{\prime 4} - 2\beta^{\prime 2} + \gamma^{\prime}$
# $C(a) = -\dfrac{3a^{4}}{\beta^{\prime 4}} + a^{3}\left(\dfrac{2}{\beta^{\prime 4}} - \dfrac{12}{\beta^{\prime 2}}\right) + a^{2}\left(\dfrac{3\gamma^{\prime}}{\beta^{\prime 2}} - 18 + \dfrac{7}{\beta^{\prime 2}}\right) + a\left(6\gamma^{\prime} - 12\beta^{\prime 2} - \dfrac{2\gamma^{\prime}}{\beta^{\prime 2}} + 8\right) - 3\beta^{\prime 4} + 3\beta^{\prime 2}\gamma^{\prime} + 3\beta^{\prime 2} - 3\gamma^{\prime} + \rho^{\prime}$
# $R(a) = B(a)^{2} - \dfrac{20aC(a)}{9}$ (not going to write this one out, it's long)
rth = betaprm**2+a*betaprm
B = (gammaprm-rtheta+(rtheta**3-rtheta**2)/betaprm**2)/betaprm
C = (-gammaprm+rhoprm+3*gammaprm*(rtheta/betaprm)**2-2*gammaprm*rtheta/betaprm**2
+(rtheta/betaprm)**2+rtheta**5*sigmaprm/betaprm**4-3*(rtheta/betaprm)**4+2*rtheta**3/betaprm**4)/betaprm
B = collect(expand(B.subs(rtheta,rth)),a)
B
C = collect(expand(C.subs(rtheta,rth)),a)
C
b0 = betaprm**3-2*betaprm+gammaprm/betaprm
b1 = 3*betaprm**2-3
b2 = 3*betaprm-1/betaprm
b3 = 1
c0 = betaprm**5*sigmaprm-3*betaprm**3+3*betaprm*gammaprm+3*betaprm-3*gammaprm/betaprm+rhoprm/betaprm
c1 = 5*betaprm**4*sigmaprm+6*gammaprm-12*betaprm**2+8-2*gammaprm/betaprm**2
c2 = 10*betaprm**3*sigmaprm+(3*gammaprm+7)/betaprm-18*betaprm
c3 = 10*betaprm**2*sigmaprm-12+2/betaprm**2
c4 = 5*betaprm*sigmaprm-3/betaprm
c5 = sigmaprm
gammap = expand(solve(b0-b_0,gammaprm)[0])
sigmap = expand(solve(c0-c_0,sigmaprm)[0])
replacements = [(gammaprm,gammap),(sigmaprm,sigmap)]
sigmap = simplify(sigmap.subs(gammaprm,gammap))
sigmap
gammap
b0
b1
b2
b3
B_a = b3*a**3+b2*a**2+b1*a+b_0
B_a
# c0 = c0.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
c0
c1 = c1.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
expand(c1)
c2 = c2.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
expand(c2)
c3 = c3.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
expand(c3)
c4 = c4.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
expand(c4)
c5 = c5.subs(sigmaprm,sigmap).subs(gammaprm,gammap)
c5
C_a = c_5*a**5+c4*a**4+c3*a**3+c2*a**2+c1*a+c_0
collect(expand(C_a),a)
series(B_a**2-20*a*C_a/9,a,n=7)
series(B**2-20*a*C/9,a,n=7)
Etrun = a*thetaprm+b*thetaprm**3+c*thetaprm**5
Etrun
collect(Etrun.subs([(b,B),(c,C)]),thetaprm)
theta_L = solve(Etrun.diff(thetaprm),thetaprm)[1]
theta_U = solve(Etrun.diff(thetaprm),thetaprm)[3]
theta_L,theta_U
E_L = simplify(Etrun.subs(thetaprm,theta_U))
E_U = simplify(Etrun.subs(thetaprm,theta_L))
E_L,E_U
Explanation: Rescaling
End of explanation |
10,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The NEST noise_generator
Hans Ekkehard Plesser, 2015-06-25
This notebook describes how the NEST noise_generator model works and what effect it has on model neurons.
NEST needs to be in your PYTHONPATH to run this notebook.
Basics
The noise_generator emits
a piecewise constant current
that changes at fixed intervals $\delta$.
For each interval, a new amplitude is chosen from the normal distribution.
Each target neuron receives a different realization of the current.
To be precise, the output current of the generator is given by
$$I(t) = \mu + \sigma N_j \qquad\text{with $j$ such that}\quad j\delta < t \leq (j+1)\delta$$
where $N_j$ is the value drawn from the zero-mean unit-variance normal distribution for interval $j$ containing $t$.
When using the generator with modulated variance, the noise current is given by
$$I(t) = \mu + \sqrt{\sigma^2 + \sigma_m^2\sin(2\pi f j\delta + \frac{2\pi}{360}\phi_d)} N_j \;.$$
Mathematical symbols match model parameters as follows
|Symbol|Parameter|Unit|Default|Description|
|------|
Step1: We thus have for $\delta \ll \tau$ and $t\gg\tau$
$$\langle (\Delta V)^2 \rangle
\approx \frac{\delta\tau \sigma^2 }{2 C^2} \;.$$
How to obtain a specific mean and variance of the potential
In order to obtain a specific mean membrane potential $\bar{V}$ with standard deviation $\Sigma$ for given neuron parameters $\tau$ and $C$ and fixed current-update interval $\delta$, we invert the expressions obtained above.
For the mean, we have for $t\to\infty$
$$\langle V\rangle = \frac{\mu\tau}{C} \qquad\Rightarrow\qquad \mu = \frac{C}{\tau} \bar{V}$$
and for the standard deviation
$$\langle (\Delta V)^2 \rangle \approx \frac{\delta\tau \sigma^2 }{2 C^2}
\qquad\Rightarrow\qquad \sigma = \sqrt{\frac{2}{\delta\tau}}C\Sigma \;.$$
Tests and examples
We will now test the expressions derived above against NEST. We first define some helper functions.
Step2: A first test simulation
Step3: Theory and simulation are in excellent agreement. The regular "drops" in the standard deviation are a consequence of the piecewise constant current and the synchronous switch in current for all neurons. It is discussed in more detail below.
A case with non-zero mean
We repeat the previous simulation, but now with non-zero mean current.
Step4: We again observe excellent agreement between theory and simulation.
Shorter and longer switching intervals
We now repeat the previous simulation for zero mean with shorter ($\delta=0.1$ ms) and longer ($\delta=10$ ms) switching intervals.
Step5: Again, agreement is fine and the slight drooping artefacts are invisible, since the noise is now updated on every time step. Note also that the noise standard deviation $\sigma$ is larger (by $\sqrt{10}$) than for $\delta=1$ ms.
Step6: For $\delta=10$, i.e., a noise switching time equal to $\tau_m$, the drooping artefact becomes clearly visible. Note that our theory developed above only applies to the points at which the input current switches, i.e., at multiples of $\delta$, beginning with the arrival of the first current at the neuron (at delay plus one time step). At those points, agreement with theory is good.
Why does the standard deviation dip between current updates?
In the last case, where $\delta = \tau_m$, the dips in the membrane potential between changes in the noise current become quite large. They can be explained as follows. For large $\delta$, we have at the end of a $\delta$-interval for neuron $n$ membrane potential $V_n(t_{j})\approx I_{n,j-1}\tau/C$ and these values will be distributed across neurons with standard deviation $\sqrt{\langle (\Delta V_m)^2 \rangle}$. Then, input currents of all neurons switch to new values $I_{n,j}$ and the membrane potential of each neuron now evolves towards $V_n(t_{j+1})\approx I_{n,j}\tau/C$. Since current values are independent of each other, this means that membrane-potential trajectories criss-cross each other, constricting the variance of the membrane potential before they approach their new steady-state values, as illustrated below.
You should therefore use short switching times $\delta$.
Step7: Autocorrelation
We briefly look at the autocorrelation of the membrane potential for three values of $\delta$.
Step8: We see that the autocorrelation is clearly dominated by the membrane time constant of $\tau_m=10$ ms. The switching time $\delta$ has a lesser effect, although it is noticeable for $\delta=5$ ms.
Different membrane time constants
To document the influence of the membrane time constant, we compute the autocorrelation function for three different $\tau_m$. | Python Code:
import sympy
sympy.init_printing()
x = sympy.Symbol('x')
sympy.series((1-sympy.exp(-x))/(1+sympy.exp(-x)), x)
Explanation: The NEST noise_generator
Hans Ekkehard Plesser, 2015-06-25
This notebook describes how the NEST noise_generator model works and what effect it has on model neurons.
NEST needs to be in your PYTHONPATH to run this notebook.
Basics
The noise_generator emits
a piecewise constant current
that changes at fixed intervals $\delta$.
For each interval, a new amplitude is chosen from the normal distribution.
Each target neuron receives a different realization of the current.
To be precise, the output current of the generator is given by
$$I(t) = \mu + \sigma N_j \qquad\text{with $j$ such that}\quad j\delta < t \leq (j+1)\delta$$
where $N_j$ is the value drawn from the zero-mean unit-variance normal distribution for interval $j$ containing $t$.
When using the generator with modulated variance, the noise current is given by
$$I(t) = \mu + \sqrt{\sigma^2 + \sigma_m^2\sin(2\pi f j\delta + \frac{2\pi}{360}\phi_d)} N_j \;.$$
Mathematical symbols match model parameters as follows
|Symbol|Parameter|Unit|Default|Description|
|------|:--------|:---|------:|:----------|
|$\mu$|mean|pA|0 pA|mean of the noise current amplitude|
|$\sigma$|std|pA|0 pA|standard deviation of the noise current amplitude|
|$\sigma_m$|std_mod|pA|0 pA|modulation depth of the std. deviation of the noise current amplitude|
|$\delta$|dt|ms|1 ms|interval between current amplitude changes|
|$f$|frequency|Hz|0 Hz| frequency of variance modulation|
|$\phi_d$|phase|[deg]|0$^{\circ}$| phase of variance modulation|
For the remainder of this document, we will only consider the current at time points $t_j=j\delta$ and define
$$I_j = I(t_j+) = \mu + \sigma N_j $$
and correspondingly for the case of modulated noise. Note that $I_j$ is thus the current emitted during $(t_j, t_{j+1}]$, following NEST's use of left-open, right-closed intervals. We also set $\omega=2\pi f$ and $\phi=\frac{2\pi}{360}\phi_d$ for brevity.
Properties of the noise current
The noise current is a piecewise constant current. Thus, it is only an approximation to white noise and the properties of the noise will depend on the update interval $\delta$. The default update interval is $\delta = 1$ms. We chose this value so that the default would be independent from the time step $h$ of the simulation, assuming that time steps larger than 1 ms are rarely used. It also is plausible to assume that most time steps chosen will divide 1 ms evenly, so that changes in current amplitude will coincide with time steps. If this is not the case, the subsequent analysis does not apply exactly.
The currents to all targets of a noise generator have different amplitudes, but always change simultaneously at times $j\delta$.
Across an ensemble of targets or realizations, we have
\begin{align}
\langle I_j\rangle &= \mu \
\langle \Delta I_j^2\rangle &= \sigma^2 \qquad \text{without modulation} \
\langle \Delta I_j^2\rangle &= \sigma^2 + \sigma_m^2\sin( \omega j\delta + \phi) \qquad \text{with modulation.}
\end{align}
Without modulation, the autocorrelation of the noise is given by
$$\langle (I_j-\mu) (I_k-\mu)\rangle = \sigma^2\delta_{jk}$$
where $\delta_{jk}$ is Kronecker's delta.
With modulation, the autocorrelation is
$$\langle (I_j-\mu) (I_k-\mu)\rangle = \sigma_j^2\delta_{jk}\qquad\text{where}\; \sigma_j = \sqrt{\sigma^2 + \sigma_m^2\sin( j\delta\omega + \phi_d)}\;.$$
Note that it is currently not possible to record this noise current directly in NEST, since a multimeter cannot record from a noise_generator.
Noise generator's effect on a neuron
Precisely how a current injected into a neuron will affect that neuron, will obviously depend on the neuron itself. We consider here the subthreshold dynamics most widely used in NEST, namely the leaky integrator. The analysis that follows is applicable directly to all iaf_psc_* models. It applies to conductance based neurons such as the iaf_cond_* models only as long as no synaptic input is present, which changes the membrane conductances.
Membrane potential dynamics
We focus here only on subthreshold dynamics, i.e., we assume that the firing threshold of the neuron is $V_{\text{th}}=\infty$. We also ignore all synaptic input, which is valid for linear models, and set the resting potential $E_L=0$ mV for convenience. The membrane potential $V$ is then governed by
$$\dot{V} = - \frac{V}{\tau} + \frac{I}{C}$$
where $\tau$ is the membrane time constant and $C$ the capacitance. We further assume $V(0)=0$ mV. We now focus on the membrane potential at times $t_j=j\delta$. Let $V_j=V(j\delta)$ be the membrane potential at time $t_j$. Then, a constant current $I_j$ will be applied to the neuron until $t_{j+1}=t_j+\delta$, at which time the membrane potential will be
$$V_{j+1} = V_j e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \;.$$
We can apply this backward in time towards $V_0=0$
\begin{align}
V_{j+1} &= V_j e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \
&= \left[V_{j-1} e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_{j-1}\tau}{C}\right]
e^{-\delta/\tau} + \left(1-e^{-\delta/\tau}\right)\frac{I_j\tau}{C} \
&= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} I_k e^{-(j-k)\delta/\tau} \
&= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} I_{k} e^{-k\delta/\tau} \;.
\end{align}
In the last step, we exploited the mutual independence of the random current amplitudes $I_k$, which allows us to renumber them arbitrarily.
Mean and variance of the membrane potential
The mean of the membrane potential at $t_{j+1}$ is thus
\begin{align}
\langle V_{j+1}\rangle &= \left(1-e^{-\delta/\tau}\right)\frac{\tau}{C}\sum_{k=0}^{j} \langle I_{k} \rangle e^{-k\delta/\tau}\
&= \frac{\mu\tau}{C}\left(1-e^{-\delta/\tau}\right)\sum_{k=0}^{j} e^{-k\delta/\tau}\
&= \frac{\mu\tau}{C}\left(1-e^{-(j+1)\delta/\tau}\right)\
&= \frac{\mu\tau}{C}\left(1-e^{-t_{j+1}/\tau}\right)
\end{align}
as expected; note that we used the geometric sum formula in the second step.
To obtain the variance of the membrane potential at $t_{j+1}$, we first compute the second moment
$$\langle V_{j+1}^2 \rangle = \frac{\tau^2}{C^2}\left(1-e^{-\delta/\tau}\right)^2 \left\langle\left(\sum_{k=0}^{j} I_{k} e^{-k\delta/\tau}\right)^2\right\rangle$$
Substituting $q = e^{-\delta/\tau}$ and $\alpha = \frac{\tau^2}{C^2}\left(1-e^{-\delta/\tau}\right)^2= \frac{\tau^2}{C^2}\left(1-q\right)^2$, we have
\begin{align}
\langle V_{j+1}^2 \rangle &= \alpha \left\langle\left(\sum_{k=0}^{j} I_{k} q^k\right)^2\right\rangle \
&= \alpha \sum_{k=0}^{j} \sum_{m=0}^{j} \langle I_k I_m \rangle q^{k+m} \
&= \alpha \sum_{k=0}^{j} \sum_{m=0}^{j} (\mu^2 + \sigma_k^2 \delta_{km}) q^{k+m} \
&= \alpha \mu^2 \left(\sum_{k=0}^j q^k\right)^2 + \alpha \sum_{k=0}^{j} \sigma_k^2 q^{2k} \
&= \langle V_{j+1}\rangle^2 + \alpha \sum_{k=0}^{j} \sigma_k^2 q^{2k} \;.
\end{align}
Evaluating the remaining sum for the modulated case will be tedious, so we focus for now on the unmodulated case, i.e., $\sigma\equiv\sigma_k$, so that we again are left with a geometric sum, this time over $q^2$. We can now subtract the square of the mean to obtain the variance
\begin{align}
\langle (\Delta V_{j+1})^2 \rangle &= \langle V_{j+1}^2 \rangle - \langle V_{j+1}\rangle^2 \
&= \alpha \sigma^2 \frac{q^{2(j+1)}-1}{q^2-1} \
&= \frac{\sigma^2\tau^2}{C^2} (1-q)^2 \frac{q^{2(j+1)}-1}{q^2-1} \
&= \frac{\sigma^2\tau^2}{C^2} \frac{1-q}{1+q}\left(1-q^{2(j+1)}\right) \
&= \frac{\sigma^2\tau^2}{C^2} \frac{1-e^{-\delta/\tau}}{1+e^{-\delta/\tau}}\left(1-e^{-2t_{j+1}/\tau}\right) \;.
\end{align}
In the last step, we used that $1-q^2=(1-q)(1+q)$.
The last term in this expression describes the approach of the variance of the membrane potential to its steady-state value. The fraction in front of it describes the effect of switching current amplitudes at intervals $\delta$ instead of instantaneously as in real white noise.
We now have in the long-term limit
$$\langle (\Delta V)^2 \rangle = \lim_{j\to\infty} \langle (\Delta V_{j+1})^2 \rangle
= \frac{\sigma^2\tau^2}{C^2} \frac{1-e^{-\delta/\tau}}{1+e^{-\delta/\tau}} \;. $$
We expand the fraction:
End of explanation
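Since the noise current itself cannot be recorded in NEST (see above), here is a small numpy sketch, not part of the original notebook, that draws the piecewise constant current $I_j = \mu + \sigma N_j$ directly and checks its ensemble statistics against the formulas derived above; the numbers used are arbitrary.
# Illustrative sketch only: one row per interval j, one column per target neuron
import numpy as np
rng = np.random.RandomState(42)
mu_demo, sigma_demo, n_intervals, n_targets = 100.0, 50.0, 1000, 200
I = mu_demo + sigma_demo * rng.randn(n_intervals, n_targets)
print(I.mean(), I.std())               # close to mu_demo and sigma_demo
print(np.corrcoef(I[0], I[1])[0, 1])   # different intervals are uncorrelated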
import math
import numpy as np
import scipy
import matplotlib.pyplot as plt
%matplotlib inline
def noise_params(V_mean, V_std, dt=1.0, tau_m=10., C_m=250.):
'Returns mean and std for noise generator for parameters provided; defaults for iaf_psc_alpha.'
return C_m / tau_m * V_mean, math.sqrt(2/(tau_m*dt))*C_m*V_std
def V_asymptotic(mu, sigma, dt=1.0, tau_m=10., C_m=250.):
'Returns asymptotic mean and std of V_m'
V_mean = mu * tau_m / C_m
V_std = (sigma * tau_m / C_m) * np.sqrt(( 1 - math.exp(-dt/tau_m) ) / ( 1 + math.exp(-dt/tau_m) ))
return V_mean, V_std
def V_mean(t, mu, tau_m=10., C_m=250.):
    'Returns predicted mean of V_m for given times and parameters.'
    vm, _ = V_asymptotic(mu, 0., tau_m=tau_m, C_m=C_m)  # the std argument does not affect the mean
    return vm * ( 1 - np.exp( - t / tau_m ) )
def V_std(t, sigma, dt=1.0, tau_m=10., C_m=250.):
    'Returns predicted std of V_m for given times and parameters.'
    _, vms = V_asymptotic(0., sigma, dt=dt, tau_m=tau_m, C_m=C_m)  # the mean argument does not affect the std
    return vms * np.sqrt(1 - np.exp(-2*t/tau_m))
import nest
def simulate(mu, sigma, dt=1.0, tau_m=10., C_m=250., N=1000, t_max=50.):
'''
Simulate an ensemble of N iaf_psc_alpha neurons driven by noise_generator.
Returns
- voltage matrix, one column per neuron
- time axis indexing matrix rows
- time shift due to delay, time at which first current arrives
'''
resolution = 0.1
delay = 1.0
nest.ResetKernel()
nest.SetKernelStatus({'resolution': resolution})
ng = nest.Create('noise_generator', params={'mean': mu, 'std': sigma, 'dt': dt})
vm = nest.Create('voltmeter', params={'interval': resolution})
nrns = nest.Create('iaf_psc_alpha', N, params={'E_L': 0., 'V_m': 0., 'V_th': 1e6,
'tau_m': tau_m, 'C_m': C_m})
nest.Connect(ng, nrns, syn_spec={'delay': delay})
nest.Connect(vm, nrns)
nest.Simulate(t_max)
# convert data into time axis vector and matrix with one column per neuron
t, s, v = vm.events['times'], vm.events['senders'], vm.events['V_m']
tix = np.array(np.round(( t - t.min() ) / resolution), dtype=int)
sx = np.unique(s)
assert len(sx) == N
six = s - s.min()
V = np.zeros((tix.max()+1, N))
for ix, vm in enumerate(v):
V[tix[ix], six[ix]] = vm
# time shift due to delay and onset after first step
t_shift = delay + resolution
return V, np.unique(t), t_shift
Explanation: We thus have for $\delta \ll \tau$ and $t\gg\tau$
$$\langle (\Delta V)^2 \rangle
\approx \frac{\delta\tau \sigma^2 }{2 C^2} \;.$$
How to obtain a specific mean and variance of the potential
In order to obtain a specific mean membrane potential $\bar{V}$ with standard deviation $\Sigma$ for given neuron parameters $\tau$ and $C$ and fixed current-update interval $\delta$, we invert the expressions obtained above.
For the mean, we have for $t\to\infty$
$$\langle V\rangle = \frac{\mu\tau}{C} \qquad\Rightarrow\qquad \mu = \frac{C}{\tau} \bar{V}$$
and for the standard deviation
$$\langle (\Delta V)^2 \rangle \approx \frac{\delta\tau \sigma^2 }{2 C^2}
\qquad\Rightarrow\qquad \sigma = \sqrt{\frac{2}{\delta\tau}}C\Sigma \;.$$
Tests and examples
We will now test the expressions derived above against NEST. We first define some helper functions.
End of explanation
dt = 1.0
mu, sigma = noise_params(0., 1., dt=dt)
print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma))
V, t, ts = simulate(mu, sigma, dt=dt)
V_mean_th = V_mean(t, mu)
V_std_th = V_std(t, sigma, dt=dt)
plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$')
plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$')
plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$')
plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$')
plt.legend()
plt.xlabel('Time $t$ [ms]')
plt.ylabel('Membrane potential $V_m$ [mV]')
plt.xlim(0, 50);
Explanation: A first test simulation
End of explanation
dt = 1.0
mu, sigma = noise_params(2., 1., dt=dt)
print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma))
V, t, ts = simulate(mu, sigma, dt=dt)
V_mean_th = V_mean(t, mu)
V_std_th = V_std(t, sigma, dt=dt)
plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$')
plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$')
plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$')
plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$')
plt.legend()
plt.xlabel('Time $t$ [ms]')
plt.ylabel('Membrane potential $V_m$ [mV]')
plt.xlim(0, 50);
Explanation: Theory and simulation are in excellent agreement. The regular "drops" in the standard deviation are a consequence of the piecewise constant current and the synchronous switch in current for all neurons. It is discussed in more detail below.
A case with non-zero mean
We repeat the previous simulation, but now with non-zero mean current.
End of explanation
dt = 0.1
mu, sigma = noise_params(0., 1., dt=dt)
print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma))
V, t, ts = simulate(mu, sigma, dt=dt)
V_mean_th = V_mean(t, mu)
V_std_th = V_std(t, sigma, dt=dt)
plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$')
plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$')
plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$')
plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$')
plt.legend()
plt.xlabel('Time $t$ [ms]')
plt.ylabel('Membrane potential $V_m$ [mV]')
plt.xlim(0, 50);
Explanation: We again observe excellent agreement between theory and simulation.
Shorter and longer switching intervals
We now repeat the previous simulation for zero mean with shorter ($\delta=0.1$ ms) and longer ($\delta=10$ ms) switching intervals.
End of explanation
dt = 10.0
mu, sigma = noise_params(0., 1., dt=dt)
print("mu = {:.2f}, sigma = {:.2f}".format(mu, sigma))
V, t, ts = simulate(mu, sigma, dt=dt)
V_mean_th = V_mean(t, mu)
V_std_th = V_std(t, sigma, dt=dt)
plt.plot(t, V.mean(axis=1), 'b-', label=r'$\bar{V_m}$')
plt.plot(t + ts, V_mean_th, 'b--', label=r'$\langle V_m \rangle$')
plt.plot(t, V.std(axis=1), 'r-', label=r'$\sqrt{\bar{\Delta V_m^2}}$')
plt.plot(t + ts, V_std_th, 'r--', label=r'$\sqrt{\langle (\Delta V_m)^2 \rangle}$')
plt.legend()
plt.xlabel('Time $t$ [ms]')
plt.ylabel('Membrane potential $V_m$ [mV]')
plt.xlim(0, 50);
Explanation: Again, agreement is fine and the slight drooping artefacts are invisible, since the noise is now updated on every time step. Note also that the noise standard deviation $\sigma$ is larger (by $\sqrt{10}$) than for $\delta=1$ ms.
End of explanation
plt.plot(t, V[:, :25], lw=3, alpha=0.5);
plt.plot([31.1, 31.1], [-3, 3], 'k--', lw=2)
plt.plot([41.1, 41.1], [-3, 3], 'k--', lw=2)
plt.xlabel('Time $t$ [ms]')
plt.ylabel('Membrane potential $V_m$ [mV]')
plt.xlim(30, 42);
plt.ylim(-2.1, 2.1);
Explanation: For $\delta=10$, i.e., a noise switching time equal to $\tau_m$, the drooping artefact becomes clearly visible. Note that our theory developed above only applies to the points at which the input current switches, i.e., at multiples of $\delta$, beginning with the arrival of the first current at the neuron (at delay plus one time step). At those points, agreement with theory is good.
Why does the standard deviation dip between current updates?
In the last case, where $\delta = \tau_m$, the dips in the membrane potential between changes in the noise current become quite large. They can be explained as follows. For large $\delta$, we have at the end of a $\delta$-interval for neuron $n$ membrane potential $V_n(t_{j})\approx I_{n,j-1}\tau/C$ and these values will be distributed across neurons with standard deviation $\sqrt{\langle (\Delta V_m)^2 \rangle}$. Then, input currents of all neurons switch to new values $I_{n,j}$ and the membrane potential of each neuron now evolves towards $V_n(t_{j+1})\approx I_{n,j}\tau/C$. Since current values are independent of each other, this means that membrane-potential trajectories criss-cross each other, constricting the variance of the membrane potential before they approach their new steady-state values, as illustrated below.
You should therefore use short switching times $\delta$.
End of explanation
from scipy.signal import fftconvolve
from statsmodels.tsa.stattools import acf
def V_autocorr(V_mean, V_std, dt=1., tau_m=10.):
'Returns autocorrelation of membrane potential and pertaining time axis.'
mu, sigma = noise_params(V_mean, V_std, dt=dt, tau_m=tau_m)
V, t, ts = simulate(mu, sigma, dt=dt, tau_m=tau_m, t_max=5000., N=20)
# drop the first second
V = V[t>1000., :]
# compute autocorrelation columnwise, then average over neurons
nlags = 1000
nt, nn = V.shape
acV = np.zeros((nlags+1, nn))
for c in range(V.shape[1]):
acV[:, c] = acf(V[:, c], unbiased=True, nlags=1000, fft=True)
#fftconvolve(V[:, c], V[::-1, c], mode='full') / V[:, c].std()**2
acV = acV.mean(axis=1)
# time axis
dt = t[1] - t[0]
acT = np.arange(0, nlags+1) * dt
return acV, acT
acV_01, acT_01 = V_autocorr(0., 1., 0.1)
acV_10, acT_10 = V_autocorr(0., 1., 1.0)
acV_50, acT_50 = V_autocorr(0., 1., 5.0)
plt.plot(acT_01, acV_01, label=r'$\delta = 0.1$ms');
plt.plot(acT_10, acV_10, label=r'$\delta = 1.0$ms');
plt.plot(acT_50, acV_50, label=r'$\delta = 5.0$ms');
plt.xlim(0, 50);
plt.ylim(-0.1, 1.05);
plt.legend();
plt.xlabel(r'Delay $\tau$ [ms]')
plt.ylabel(r'$\langle V(t)V(t+\tau)\rangle$');
Explanation: Autocorrelation
We briefly look at the autocorrelation of the membrane potential for three values of $\delta$.
End of explanation
acV_t01, acT_t01 = V_autocorr(0., 1., 0.1, 1.)
acV_t05, acT_t05 = V_autocorr(0., 1., 0.1, 5.)
acV_t10, acT_t10 = V_autocorr(0., 1., 0.1, 10.)
plt.plot(acT_t01, acV_t01, label=r'$\tau_m = 1$ms');
plt.plot(acT_t05, acV_t05, label=r'$\tau_m = 5$ms');
plt.plot(acT_t10, acV_t10, label=r'$\tau_m = 10$ms');
plt.xlim(0, 50);
plt.ylim(-0.1, 1.05);
plt.legend();
plt.xlabel(r'Delay $\tau$ [ms]')
plt.ylabel(r'$\langle V(t)V(t+\tau)\rangle$');
Explanation: We see that the autocorrelation is clearly dominated by the membrane time constant of $\tau_m=10$ ms. The switching time $\delta$ has a lesser effect, although it is noticeable for $\delta=5$ ms.
Different membrane time constants
To document the influence of the membrane time constant, we compute the autocorrelation function for three different $\tau_m$.
End of explanation |
10,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterative vs fragment-based mapping
Iterative mapping, first proposed by <a name="ref-1"/>(Imakaev et al., 2012), usually allows a high number of reads to be mapped. However, other, less "brute-force" methodologies can be used to take into account the chimeric nature of the Hi-C reads.
A simple alternative is to allow split mapping.
Another way consists in pre-truncating <a name="ref-1"/>(Ay and Noble, 2015) reads that contain a ligation site and map only the longest part of the read <a name="ref-2"/>(Wingett et al., 2015).
Finally, an intermediate approach, fragment-based, consists in mapping full length reads first, and then splitting unmapped reads at the ligation sites <a name="ref-1"/>(Serra et al. 2017).
Advantages of iterative mapping
It's the only solution when no restriction enzyme has been used (i.e. micro-C)
Can be faster when few windows (2 or 3) are used
Advantages of fragment-based mapping
Generally faster
Safer
Step1: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.
It is important to note that although the default mapping parameters used by TADbit are relatively strict, a non negligible proportion of the reads will be mis-mapped, and this applies at each iteration of the mapping.
Iterative mapping
Here is an example of use as iterative mapping
Step2: Note
Step3: And for the second side of the read-end
Step4: Fragment-based mapping
The fragment-based mapping strategy works in 2 steps | Python Code:
from pytadbit.mapping.full_mapper import full_mapping
Explanation: Iterative vs fragment-based mapping
Iterative mapping, first proposed by <a name="ref-1"/>(Imakaev et al., 2012), usually allows a high number of reads to be mapped. However, other, less "brute-force" methodologies can be used to take into account the chimeric nature of the Hi-C reads.
A simple alternative is to allow split mapping.
Another way consists in pre-truncating <a name="ref-1"/>(Ay and Noble, 2015) reads that contain a ligation site and map only the longest part of the read <a name="ref-2"/>(Wingett et al., 2015).
Finally, an intermediate approach, fragment-based, consists in mapping full length reads first, and then splitting unmapped reads at the ligation sites <a name="ref-1"/>(Serra et al. 2017).
Advantages of iterative mapping
It's the only solution when no restriction enzyme has been used (i.e. micro-C)
Can be faster when few windows (2 or 3) are used
Advantages of fragment-based mapping
Generally faster
Safer: mapped reads are generally larger than 25-30 nt (the largest window used in iterative mapping). Fewer reads are mapped, but the difference is usually canceled or reversed when looking for "valid-pairs".
Note: We use GEM2 <a name="ref-1"/>(Marco-Sola et al. 2012); performance is very similar to Bowtie2, and in some cases slightly better.
For now TADbit is only compatible with GEM2.
Mapping
End of explanation
cell = 'mouse_B' # or mouse_PSC
rep = 'rep1' # or rep2
Explanation: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.
It is important to note that although the default mapping parameters used by TADbit are relatively strict, a non negligible proportion of the reads will be mis-mapped, and this applies at each iteration of the mapping.
Iterative mapping
Here is an example of use as iterative mapping:
(Estimated time 15h with 8 cores)
End of explanation
! mkdir -p results/iterativ/$cell\_$rep
! mkdir -p results/iterativ/$cell\_$rep/01_mapping
# for the first side of the reads
full_mapping(mapper_index_path='genome/Mus_musculus-GRCm38.p6/Mus_musculus-GRCm38.p6_contigs.gem',
out_map_dir='results/iterativ/{0}_{1}/01_mapping/mapped_{0}_{1}_r1/'.format(cell, rep),
fastq_path='FASTQs/%s_%s_1.fastq.dsrc' % (cell,rep),
frag_map=False, clean=True, nthreads=8,
windows=((1,25),(1,35),(1,45),(1,55),(1,65),(1,75)),
temp_dir='results/iterativ/{0}_{1}/01_mapping/mapped_{0}_{1}_r1_tmp/'.format(cell, rep))
Explanation: Note: the execution of this notebook should be repeated for each of the 4 replicates
End of explanation
# for the second side of the reads
full_mapping(mapper_index_path='genome/Mus_musculus-GRCm38.p6/Mus_musculus-GRCm38.p6_contigs.gem',
out_map_dir='results/iterativ/{0}_{1}/01_mapping/mapped_{0}_{1}_r2/'.format(cell, rep),
fastq_path='FASTQs/%s_%s_2.fastq.dsrc' % (cell,rep),
frag_map=False, clean=True, nthreads=8,
windows=((1,25),(1,35),(1,45),(1,55),(1,65),(1,75)),
temp_dir='results/iterativ/{0}_{1}/01_mapping/mapped_{0}_{1}_r2_tmp/'.format(cell, rep))
Explanation: And for the second side of the read-end:
End of explanation
! mkdir -p results/fragment/$cell\_$rep
! mkdir -p results/fragment/$cell\_$rep/01_mapping
# for the first side of the reads
full_mapping(mapper_index_path='genome/Mus_musculus-GRCm38.p6/Mus_musculus-GRCm38.p6_contigs.gem',
out_map_dir='results/fragment/{0}_{1}/01_mapping/mapped_{0}_{1}_r1/'.format(cell, rep),
fastq_path='FASTQs/%s_%s_1.fastq.dsrc' % (cell, rep),
r_enz='MboI', frag_map=True, clean=True, nthreads=8,
temp_dir='results/fragment/{0}_{1}/01_mapping/mapped_{0}_{1}_r1_tmp/'.format(cell, rep))
# for the second side of the reads
full_mapping(mapper_index_path='genome/Mus_musculus-GRCm38.p6/Mus_musculus-GRCm38.p6_contigs.gem',
out_map_dir='results/fragment/{0}_{1}/01_mapping/mapped_{0}_{1}_r2/'.format(cell, rep),
fastq_path='FASTQs/%s_%s_2.fastq.dsrc' % (cell, rep),
r_enz='MboI', frag_map=True, clean=True, nthreads=8,
temp_dir='results/fragment/{0}_{1}/01_mapping/mapped_{0}_{1}_r2_tmp/'.format(cell, rep))
Explanation: Fragment-based mapping
The fragment-based mapping strategy works in 2 steps:
1. The read-ends are mapped entirely, assuming that no ligation occurred in them.
2. For unmapped read-ends, the function searches for a ligation site (e.g. in the case of MboI this would correspond to GATCGATC and in the case of HindIII to AAGCTAGCTT). The read-end is split accordingly replacing the ligation site by two RE sites:
read-end-part-one---AAGCTAGCTT----read-end-part-two
will be split in:
read-end-part-one---AAGCTT
and:
AAGCTT----read-end-part-two
Note: __if no ligation site is found__, step two will be repeated using digested RE site as split point (AAGCT in the case of HindIII). This is done in order to be protected against sequencing errors. When this path is followed the digested RE site is removed, but not replaced. If digested RE sites are not found either, the read will be classified as unmapped.
Note: __both mapping strategies can be combined__, for example by defining the windows as previously (iterative mapping), but also giving a RE name (r_enz='MboI') and setting frag_map=True. Like this, if a read has not been mapped in any window, TADbit will also try to apply the fragment-based strategy.
End of explanation |
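To make the splitting rule above concrete, here is a rough pure-Python sketch of the idea (an illustration only, not TADbit's internal implementation), using the MboI ligation site GATCGATC and RE site GATC.
# Sketch of the read-end splitting described above
def split_at_ligation_site(read, ligation_site='GATCGATC', re_site='GATC'):
    pos = read.find(ligation_site)
    if pos == -1:
        return [read]            # no ligation site: leave the read untouched
    left = read[:pos] + re_site  # replace the ligation site by an RE site on each part
    right = re_site + read[pos + len(ligation_site):]
    return [left, right]
print(split_at_ligation_site('AAACCCGATCGATCTTTGGG'))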
10,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code printers
The most basic form of code generation is the code printers. They convert SymPy expressions into the target language.
The most common languages are C, C++, Fortran, and Python, but over a dozen languages are supported. Here, we will quickly go over each supported language.
Step1: Let us use the function $$|\sin(x^2)|.$$
Step2: Exercise
Step3: Exercise
Step4: We've also prepared some Javascript to do the plotting. This code will take two mathematical expressions written in Javascript and plot the functions.
Step5: Now SymPy functions can be plotted by filling in the two missing expressions in the above code and then calling the Javascript display function on that code.
Step6: Exercise | Python Code:
from sympy import *
init_printing()
Explanation: Code printers
The most basic form of code generation is the code printers. They convert SymPy expressions into the target language.
The most common languages are C, C++, Fortran, and Python, but over a dozen languages are supported. Here, we will quickly go over each supported language.
End of explanation
x = symbols('x')
expr = abs(sin(x**2))
expr
ccode(expr)
fcode(expr)
julia_code(expr)
jscode(expr)
mathematica_code(expr)
octave_code(expr)
from sympy.printing.rust import rust_code
rust_code(expr)
rcode(expr)
from sympy.printing.cxxcode import cxxcode
cxxcode(expr)
Explanation: Let us use the function $$|\sin(x^2)|.$$
End of explanation
# Write your answer here
Explanation: Exercise: Codegen your own function
Come up with a symbolic expression and try generating code for it in each language. Note, some languages don't support everything. What works and what doesn't? What things are the same across languages and what things are different?
Reminder: If you click a cell and press b it will add a new cell below it.
End of explanation
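One possible answer to the exercise above (a sketch only; any expression works, and all of the printer functions used here already appear earlier in this notebook):
# Sketch of one possible answer
my_expr = exp(-x**2) * cos(2*x)
for printer in (ccode, fcode, jscode, julia_code, octave_code, rcode):
    print(printer.__name__, ':', printer(my_expr))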
%%javascript
require.config({
paths: {
'chartjs': '//cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart'
}
});
Explanation: Exercise: Plotting SymPy Functions with JavaScript
One use case that works nicely with the Jupyter notebook is plotting mathematical functions using JavaScript plotting libraries. There are a variety of plotting libraries available and the notebook makes it relatively easy to use. Here we will use Chart.js to plot functions of a single variable. We can use the %%javascript magic to type JavaScript directly into a notebook cell. In this cell we load in the Chart.js library:
End of explanation
from scipy2017codegen.plotting import js_template
print(js_template.format(top_function='***fill me in!***',
bottom_function='***fill me in!***',
chart_id='***fill me in!***'))
Explanation: We've also prepared some Javascript to do the plotting. This code will take two mathematical expressions written in Javascript and plot the functions.
End of explanation
from IPython.display import Javascript
x = symbols('x')
f1 = sin(x)
f2 = cos(x)
Javascript(js_template.format(top_function=jscode(f1),
bottom_function=jscode(f2),
chart_id='sincos'))
Explanation: Now SymPy functions can be plotted by filling in the two missing expressions in the above code and then calling the Javascript display function on that code.
End of explanation
from scipy2017codegen.plotting import batman_equations
top, bottom = batman_equations()
top
bottom
# Write your answer here
Explanation: Exercise: Batman!
Plot the equations below for top and bottom.
There are all kinds of functions that can be plotted, but one particularly interesting set of functions is called the Batman Equations. We've provided the piecewise versions of these functions written in SymPy below. Try plotting these with the JS plotter we've created.
End of explanation |
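One possible answer (a sketch following the sin/cos example above; the chart_id value is an arbitrary choice):
# Sketch: plot the Batman equations with the same JS template
Javascript(js_template.format(top_function=jscode(top),
                              bottom_function=jscode(bottom),
                              chart_id='batman'))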
10,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Federal Reserve Series Data
Download federal reserve series.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Federal Reserve Series Data Recipe Parameters
Specify the values for a Fred observations API call.
A table will appear in the dataset.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Federal Reserve Series Data
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Federal Reserve Series Data
Download federal reserve series.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth':'service', # Credentials used for writing data.
'fred_api_key':'', # 32 character alpha-numeric lowercase string.
'fred_series_id':'', # Series ID to pull data from.
'fred_units':'lin', # A key that indicates a data value transformation.
'fred_frequency':'', # An optional parameter that indicates a lower frequency to aggregate values to.
'fred_aggregation_method':'avg', # A key that indicates the aggregation method used for frequency aggregation.
'project':'', # Existing BigQuery project.
'dataset':'', # Existing BigQuery dataset.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Federal Reserve Series Data Recipe Parameters
Specify the values for a Fred observations API call.
A table will appear in the dataset.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'fred':{
'auth':{'field':{'name':'auth','kind':'authentication','order':0,'default':'service','description':'Credentials used for writing data.'}},
'api_key':{'field':{'name':'fred_api_key','kind':'string','order':1,'default':'','description':'32 character alpha-numeric lowercase string.'}},
'frequency':{'field':{'name':'fred_frequency','kind':'choice','order':4,'default':'','description':'An optional parameter that indicates a lower frequency to aggregate values to.','choices':['','d','w','bw','m','q','sa','a','wef','weth','wew','wetu','wem','wesu','wesa','bwew','bwem']}},
'series':[
{
'series_id':{'field':{'name':'fred_series_id','kind':'string','order':2,'default':'','description':'Series ID to pull data from.'}},
'units':{'field':{'name':'fred_units','kind':'choice','order':3,'default':'lin','description':'A key that indicates a data value transformation.','choices':['lin','chg','ch1','pch','pc1','pca','cch','cca','log']}},
'aggregation_method':{'field':{'name':'fred_aggregation_method','kind':'choice','order':5,'default':'avg','description':'A key that indicates the aggregation method used for frequency aggregation.','choices':['avg','sum','eop']}}
}
],
'out':{
'bigquery':{
'project':{'field':{'name':'project','kind':'string','order':10,'default':'','description':'Existing BigQuery project.'}},
'dataset':{'field':{'name':'dataset','kind':'string','order':11,'default':'','description':'Existing BigQuery dataset.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Federal Reserve Series Data
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
10,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
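For illustration only, a filled-in author entry might look like the line below; the name and email are hypothetical placeholders, not the document's real authors.
# Hypothetical placeholder values - substitute the real author details:
DOC.set_author("Jane Doe", "jane.doe@example.org")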
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
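As a purely illustrative example, selecting one of the valid choices listed above would look like the line below; the coupler named here is a hypothetical pick, not a statement about this model.
# Hypothetical selection from the valid choices listed above:
DOC.set_value("OASIS3-MCT")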
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
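Note that BOOLEAN properties are set with a bare Python value rather than a quoted string; a hypothetical filled-in cell would look like the line below (the choice of True is illustrative only, not the model's actual answer).
# Hypothetical example - boolean properties take an unquoted value:
DOC.set_value(True)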
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
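Because this property has cardinality 1.N, more than one of the valid choices may apply; following the # PROPERTY VALUE(S) comment above, a filled-in cell would presumably call DOC.set_value once per applicable choice. The choices below are illustrative only, not the model's actual CO2 provision.
# Hypothetical example - one call per applicable choice:
DOC.set_value("Y")
DOC.set_value("C")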
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traveling Salesman Problem
In this assignment you will implement one or more algorithms for the traveling salesman problem, such as the dynamic programming algorithm covered in the video lectures.
The file tsp.txt describes a TSP instance. The first line indicates the number of cities. Each city is a point in the plane, and each subsequent line indicates the x- and y-coordinates of a single city.
The distance between two cities is defined as the Euclidean distance --- that is, two cities at locations (x,y) and (z,w) have distance √((x−z)^2+(y−w)^2) between them.
In the box below, type in the minimum cost of a traveling salesman tour for this instance, rounded down to the nearest integer.
OPTIONAL
Step1: Draw points
Step2: Initialize the 2-D Array
Step3: Run the Dynamic Programming algorithm | Python Code:
import numpy as np
file = "tsp.txt"
# file = "test2.txt"
data = open(file, 'r').readlines()
n = int(data[0])
graph = {}
for i,v in enumerate(data[1:]):
graph[i] = tuple(map(float, v.strip().split(" ")))
dist_val = np.zeros([n,n])
for i in range(n):
for k in range(n):
dist_val[i,k] = dist_val[k,i] = np.sqrt((graph[k][0]-graph[i][0])**2 + (graph[k][1]-graph[i][1])**2)
print (graph)
Explanation: Traveling Salesman Problem
In this assignment you will implement one or more algorithms for the traveling salesman problem, such as the dynamic programming algorithm covered in the video lectures.
The file tsp.txt describes a TSP instance. The first line indicates the number of cities. Each city is a point in the plane, and each subsequent line indicates the x- and y-coordinates of a single city.
The distance between two cities is defined as the Euclidean distance --- that is, two cities at locations (x,y) and (z,w) have distance √((x−z)^2+(y−w)^2) between them.
In the box below, type in the minimum cost of a traveling salesman tour for this instance, rounded down to the nearest integer.
OPTIONAL: If you want bigger data sets to play with, check out the TSP instances from around the world https://www.tsp.gatech.edu/world/countries.html. The smallest data set (Western Sahara) has 29 cities, and most of the data sets are much bigger than that. What's the largest of these data sets that you're able to solve --- using dynamic programming or, if you like, a completely different method?
HINT: You might experiment with ways to reduce the data set size. For example, try plotting the points. Can you infer any structure of the optimal solution? Can you use that structure to speed up your algorithm?
End of explanation
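Before reading the helper functions and the main loop below, it may help to see the Held-Karp recurrence the notebook is building toward, run on a tiny made-up instance. The sketch below is illustrative only (the four unit-square coordinates and all variable names are invented, not part of the assignment data): best[(S, j)] holds the length of the shortest path that starts at city 0, visits exactly the cities in S, and ends at city j.
import itertools
import numpy as np

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]  # hypothetical 4-city instance
m = len(pts)
d = [[np.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1]) for b in range(m)] for a in range(m)]

# Base case: the path 0 -> j for every non-start city j
best = {(frozenset([0, j]), j): d[0][j] for j in range(1, m)}

# Grow the visited set one city at a time (Held-Karp recurrence)
for size in range(3, m + 1):
    for subset in itertools.combinations(range(1, m), size - 1):
        S = frozenset(subset) | {0}
        for j in subset:
            best[(S, j)] = min(best[(S - {j}, k)] + d[k][j] for k in subset if k != j)

full = frozenset(range(m))
print(min(best[(full, j)] + d[j][0] for j in range(1, m)))  # 4.0 for this unit square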
%matplotlib inline
import matplotlib.pyplot as plt
values = list(graph.values())
y = [values[i][0] for i in range(len(values))]
x = [values[i][1] for i in range(len(values))]
plt.scatter(y,x)
plt.show()
import collections
def to_key(a):
my_str = ""
for i in a:
my_str += str(int(i))
return my_str
def to_subset(v, n):
a = np.zeros(n)
a[v] = 1
return a
def create_all_subset(n):
A = collections.defaultdict(dict)
for m in range(1,n):
for a in (itertools.combinations(range(n), m)):
key = a + tuple([0 for i in range(n-m)])
print (a, tuple([0 for i in range(n-m)]), key, m, n)
for j in range(n):
A[to_key(key)][j] = np.inf
A[to_key(to_subset(0,n))][0] = 0
return A
# res= to_subset([2,3],5)
# print (res)
# print (to_key(res))
# A = create_all_subset(3)
# print (A)
# print (index_to_set(10,'25'))
# print(set_to_index([1,3]))
import itertools
def powerset(iterable):
"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(1,len(s)+1))
def index_to_set(index, n='8'):
fmt = '{0:0'+n+'b}'
res = fmt.format(index)
mylist = list(res)
mylist.reverse()
print (res)
mylist = np.asarray(mylist, dtype=int)
ret = np.where(mylist==1)
# ret = []
# for i, j in enumerate(mylist):
# if j=="1":
# ret.append(i)
return list(ret[0])
def set_to_index(my_set):
# i = [1, 5, 7]
ret = 0
for i in my_set:
ret += 2**i
return ret
print ("~~ Test")
# print (set_to_index([1]))
# print (index_to_set(set_to_index([1])))
ex_all_sets = powerset(range(5))
for s in ex_all_sets:
print ("~~ Original set:", s)
print ("index:", set_to_index(s))
print ("recovered set:", index_to_set(set_to_index(s),'5'))
Explanation: Draw points
End of explanation
A = np.full([2**n, n], np.inf)
A[set_to_index([0]),0]=0
for i in range(0, n):
A[set_to_index([i]),i] = dist_val[i,0]
print (set_to_index([i]), dist_val[i,0])
Explanation: Initialize the 2-D Array
End of explanation
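One practical aside (not part of the original notebook): the full 2**n x n float64 table grows very quickly with the number of cities, so it is worth a quick size estimate before calling np.full.
# Rough memory estimate for the 2**n x n float64 table (the 25 is an illustrative size only)
n_cities = 25
print(2**n_cities * n_cities * 8 / 1e9, "GB")  # roughly 6.7 GB at 25 cities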
from tqdm import tqdm
def _dist(k, j):
return np.sqrt((graph[k][0]-graph[j][0])**2 + (graph[k][1]-graph[j][1])**2)
FULL = range(n)
for m in range(1,n):
# all_sets = powerset(range(1,m))
all_sets = itertools.combinations(FULL, m+1)
print ("Subset Size:",m)
for _set in all_sets:
if not _set:
continue
_set = list(_set)
# print ("Len Set", len(_set))
set2_idx = set_to_index(_set)
for j in _set:
_set2 = _set.copy()
_set2.remove(j)
if j==0 or not _set2:
continue
# print ("_set2", _set2)
_set2_idx = set_to_index(_set2)
# print ("handle Set", _set2, "idx",_set2_idx, "j:", j)
minval = np.inf
for k in _set2:
# print ("idxSet:", _set2_idx, "k:", k, "dist", A[_set2_idx,k])
val = A[_set2_idx,k] + dist_val[k,j]
if val < minval:
minval = val
# print ("minval",minval)
A[set2_idx,j] = minval
# print (A)
my_set = [i for i in range(n)]
print ("Full Set", my_set, set_to_index(my_set))
minval = np.inf
for j in range(1,n):
val = A[set_to_index(my_set),j] + dist_val[j,0]
if val < minval:
minval = val
print ("minval", minval)
# print (A[set_to_index(my_set),:])
Explanation: Run the Dynamic Programming algorithm
End of explanation |
10,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
cxMate Service DEMO
By Ayato Shimada, Mitsuhiro Eto
This DEMO shows
1. detect communities using an igraph's community detection algorithm
2. paint communities (nodes and edges) in different colors
3. perform layout using graph-tool's sfdp algorithm
Step1: Send CX to service using requests module
Services are built on a server
You don't have to install graph libraries in your local environment.
It is very easy to use python-igraph and graph-tool.
In order to send CX
requests
Step2: Network used for DEMO
This DEMO uses yeastHQSubnet.cx as original network.
- 2924 nodes
- 6827 edges
<img src="example1.png" alt="Drawing" style="width
Step3: What happened?
Output contains
graph with community membership + color assignment for each group.
- node1
Step4: 3. graph-tool layout service
In order to perform layout algorithm, graph-tool's layout algorithm service can be used.
C++ optimized parallel, community-structure-aware layout algorithms
You can use the community structure as a parameter for layout, and the result reflects that structure.
You can use graph-tool's service in the same way as igraph's service.
Both the input and output of a cxMate service are CX, NOT an igraph object, a graph-tool object and so on.
So, you don't have to convert an igraph object to a graph-tool object.
<img src="service.png" alt="Drawing" style="width
Step5: Save .cx file
To save and look at the output data, you can use r.json()['data']
Step6: Color Palette
If you want to change color of communities, you can do it easily.
Many color palettes of seaborn can be used. (See http
Step7: Default Palette
Without setting parameter 'palette', 'husl' is used as color palette.
Step8: Other palettes | Python Code:
# Tested on:
!python --version
Explanation: cxMate Service DEMO
By Ayato Shimada, Mitsuhiro Eto
This DEMO shows
1. detect communities using an igraph's community detection algorithm
2. paint communities (nodes and edges) in different colors
3. perform layout using graph-tool's sfdp algorithm
End of explanation
import requests
import json
url_community = 'http://localhost:80' # igraph's community detection service URL
url_layout = 'http://localhost:3000' # graph-tool's layout service URL
headers = {'Content-type': 'application/json'}
Explanation: Send CX to service using requests module
Services are built on a server
You don't have to install graph libraries in your local environment.
It is very easy to use python-igraph and graph-tool.
In order to send CX
requests : to send CX file to service in Python. (curl also can be used.)
json : to convert object to a CX formatted string.
End of explanation
data = open('./yeastHQSubnet.cx') # 1.
parameter = {'type': 'leading_eigenvector', 'clusters': 5, 'palette': 'husl'} # 2.
r = requests.post(url=url_community, headers=headers, data=data, params=parameter) # 3.
Explanation: Network used for DEMO
This DEMO uses yeastHQSubnet.cx as original network.
- 2924 nodes
- 6827 edges
<img src="example1.png" alt="Drawing" style="width: 500px;"/>
1. igraph community detection and color generator service
In order to detect communities, igraph's community detection service can be used.
How to use the service on Jupyter Notebook
open the CX file using open()
set parameters in dictionary format. (About parameters, see the document of service.)
post the CX data to URL of service using requests.post()
End of explanation
import re
with open('output1.cx', 'w') as f:
# single quotation -> double quotation
output = re.sub(string=str(r.json()['data']), pattern="'", repl='"')
f.write(output)
Explanation: What happened?
Output contains
graph with community membership + color assignment for each group.
- node1 : group 1, red
- node2 : group 1, red
- node3 : group 2, green
...
You don't have to create your own color palette manually.
To save and look at the output data, you can use r.json()['data']
Note
- When you use this output as input to the next service, you must use json.dumps(r.json()['data'])
- You must replace the single quotation marks with double quotation marks in the output file.
End of explanation
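As an aside, the quotation-mark substitution above can be avoided by letting the json module serialize the payload directly; the sketch below is an alternative, not part of the original demo, and the output filename is hypothetical.
import json

# json.dump emits valid JSON (double quotes, proper escaping) without a regex pass
with open('output1_alt.cx', 'w') as f:
    json.dump(r.json()['data'], f)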
data2 = json.dumps(r.json()['data']) # 1.
parameter = {'only-layout': False, 'groups': 'community'} # 2.
r2 = requests.post(url=url_layout, headers=headers, data=data2, params=parameter) # 3.
Explanation: 3. graph-tool layout service
In order to perform layout algorithm, graph-tool's layout algorithm service can be used.
C++ optimized parallel, community-structure-aware layout algorithms
You can use the community structure as a parameter for layout, and the result reflects that structure.
You can use graph-tool's service in the same way as igraph's service.
Both the input and output of a cxMate service are CX, NOT an igraph object, a graph-tool object and so on.
So, you don't have to convert an igraph object to a graph-tool object.
<img src="service.png" alt="Drawing" style="width: 750px;"/>
How to use the service on Jupyter Notebook
open the CX file using json.dumps(r.json()['data'])
set parameters in dictionary format. (About parameters, see the document of service.)
post the CX data to URL of service using requests.post()
End of explanation
import re
with open('output2.cx', 'w') as f:
# single quotation -> double quotation
output = re.sub(string=str(r2.json()['data']), pattern="'", repl='"')
f.write(output)
Explanation: Save .cx file
To save and look at the output data, you can use r.json()['data']
End of explanation
%matplotlib inline
import seaborn as sns, numpy as np
from ipywidgets import interact, FloatSlider
Explanation: Color Palette
If you want to change color of communities, you can do it easily.
Many color palettes of seaborn can be used. (See http://seaborn.pydata.org/tutorial/color_palettes.html)
End of explanation
def show_husl(n):
sns.palplot(sns.color_palette('husl', n))
print('palette: husl')
interact(show_husl, n=10);
Explanation: Default Palette
If the 'palette' parameter is not set, 'husl' is used as the color palette.
End of explanation
def show_pal0(palette):
sns.palplot(sns.color_palette(palette, 24))
interact(show_pal0, palette='deep muted pastel bright dark colorblind'.split());
sns.choose_colorbrewer_palette('qualitative');
sns.choose_colorbrewer_palette('sequential');
Explanation: Other palettes
End of explanation |
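As a rough extra sketch (not part of the original notebook): the same community-detection request can be re-sent with a different palette name. This assumes the service's 'palette' parameter accepts any seaborn palette name, e.g. 'deep'.
data = open('./yeastHQSubnet.cx')
parameter = {'type': 'leading_eigenvector', 'clusters': 5, 'palette': 'deep'}  # 'deep' assumed to be accepted
r_alt = requests.post(url=url_community, headers=headers, data=data, params=parameter)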
10,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
10,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module3 - Lab3
Step1: Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
Step2: Create a new 3D subplot using figure fig, which we've defined for you below. Use that subplot to draw a 3D scatter plot using the area, perimeter, and asymmetry features. Be sure to use the optional display parameter c='red', and also label your axes
Step3: Create another 3D subplot using fig. Then use the subplot to graph a 3D scatter plot of the width, groove, and length features. Be sure to use the optional display parameter c='green', and be sure to label your axes
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
Explanation: DAT210x - Programming with Python for DS
Module3 - Lab3
End of explanation
# .. your code here ..
Explanation: Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
End of explanation
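A minimal sketch of one way to complete the exercise above (not part of the original lab; the exact filename in the Datasets directory is an assumption):
import pandas as pd
# index_col=0 drops the id column assumed to be present in the lab's copy of the file
df = pd.read_csv('Datasets/wheat.data', index_col=0)
df.head()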
fig = plt.figure()
# .. your code here ..
Explanation: Create a new 3D subplot using figure fig, which we've defined for you below. Use that subplot to draw a 3D scatter plot using the area, perimeter, and asymmetry features. Be sure to use the optional display parameter c='red', and also label your axes:
End of explanation
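A minimal sketch of the kind of code the exercise expects (not part of the original lab; it assumes the dataframe is named df and has columns named area, perimeter and asymmetry). The second plot below follows the same pattern with width, groove and length and c='green'.
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('area')
ax.set_ylabel('perimeter')
ax.set_zlabel('asymmetry')
ax.scatter(df.area, df.perimeter, df.asymmetry, c='red')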
fig = plt.figure()
# .. your code here ..
# Finally, display the graphs:
plt.show()
Explanation: Create another 3D subplot using fig. Then use the subplot to graph a 3D scatter plot of the width, groove, and length features. Be sure to use the optional display parameter c='green', and be sure to label your axes:
End of explanation |
10,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and then show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
Step1: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
Step2: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard webserver we have to call in a terminal
bash
tensorboard --logdir models/ --port 6006
This will launch the tensorboard web server on your local computer on port 6006. Go to http
Step3: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like | Python Code:
from IPython.display import Image, display
import deepchem as dc
from deepchem.molnet import load_tox21
from deepchem.models.tensorgraph.models.graph_models import GraphConvModel
# Load Tox21 dataset
tox21_tasks, tox21_datasets, transformers = load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = tox21_datasets
Explanation: Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and then show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
End of explanation
# Construct the model with tensorboard on
model = GraphConvModel(len(tox21_tasks), mode='classification', tensorboard=True, model_dir='models')
# Fit the model
model.fit(train_dataset, nb_epoch=10)
Explanation: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
End of explanation
display(Image(filename='assets/tensorboard_landing.png'))
Explanation: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard webserver we have to call in a terminal
bash
tensorboard --logdir models/ --port 6006
This will launch the tensorboard web server on your local computer on port 6006. Go to http://localhost:6006 in your web browser to look through tensorboard's UI.
The first thing you will see is a graph of the loss vs mini-batches. You can use this data to determine if your model is still improving its loss function over time, or to find out if your gradients are exploding!
End of explanation
display(Image(filename='assets/GraphConvArch.png'))
Explanation: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like
End of explanation |
10,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lambda Expressions
Lambda expressions allow us to create "anonymous" functions i.e functions without a name. This basically means we can quickly make ad-hoc functions without needing to properly define a function using def.
Function objects returned by running lambda expressions work exactly the same as those created and assigned by defs.
However the following are the key differences between lambda functions and 'def' functions.
Lambda's body is a single expression, not a block of statements
'Lambda' is designed for coding simple functions and 'def' handles the larger tasks.
Step1: The syntax for lambda function is quite simple.
lambda argument_list
Step2: Filter Function
The function filter(function,list) offers an elegant way to filter out all the elements of a list, for which the function function returns True.
The function filter(f,l) needs a function f as its first argument. f returns a Boolean value, i.e. either True or False. This function will be applied to every element of the list l. Only if f returns True will the element of the list be included in the result.
Step3: Map Function
map is a function with 2 arguments.
result = map(function,seq)
The first argument func is the name of a function and the second a sequence (e.g. a list) seq. map() applies the function func to all the elements of the sequence seq.
In Python 3 it returns a lazy map object (an iterator) rather than a list, so list() is used to materialize the results.
Step4: map() can also be applied to more than one list but the lists must have the same length. map() will apply its lambda function to the elements of the argument lists, i.e. it first applies to the elements with the 0th index, then to the elements with the 1st index until the n-th index is reached
Step5: Reduce Function
reduce(func,seq)
If seq = [s1,s2,s3,...,sn], calling reduce(func,seq) works like this
Step6: Zip Function
zip() makes an iterator that aggregates elements from each of the iterables.
Step7: zip() should only be used with unequal length inputs when you don’t care about trailing, unmatched values from the longer iterables. Only the shortest iterable itme will be taken and any extra elements will be ignored
Step8: Zip with Dictionary
Step9: This makes sense because simply iterating through the dictionaries will result in just the keys. We would have to call methods to mix keys and values
Step10: Enumerate Function
Return an enumerate object. iterable must be a sequence, an iterator, or some other object which supports iteration. Returns a tuple containing a count (from start which defaults to 0) and the values obtained from iterating over iterable.
Step11: Any / All
Step12: Complex
complex() returns a complex number with the value real + imag*1j or converts a string or number to a complex number. | Python Code:
# Normal function
def square(num):
result = num**2
return result
square(2)
# Simplified Version #1
def square(num):
return num**2
square(3)
# Simplified Version #1
def square(num):return num**2
square(4)
Explanation: Lambda Expressions
Lambda expressions allow us to create "anonymous" functions i.e functions without a name. This basically means we can quickly make ad-hoc functions without needing to properly define a function using def.
Function objects returned by running lambda expressions work exactly the same as those created and assigned by defs.
However the following are the key differences between lambda functions and 'def' functions.
Lambda's body is a single expression, not a block of statements
'Lambda' is designed for coding simple functions and 'def' handles the larger tasks.
End of explanation
square = lambda num: num **2
square(5)
sum1 = lambda x,y: x+y
print(sum1(3,4))
even = lambda x: x%2==0
print(even(12))
first = lambda str: str[0]
print(first('Hello'))
rev = lambda str:str[::-1]
print(rev('Hello'))
Explanation: The syntax for lambda function is quite simple.
lambda argument_list:expression
The argument list consists of a comma separated list of arguments and the expression is an arithmetic expression using these arguments.
The following example returns the square of a given number.
End of explanation
nums = [2,3,4,7,9,10]
evens = list(filter(lambda x: x%2==0,nums))
print(evens)
Explanation: Filter Function
The function filter(function,list) offers an elegant way to filter out all the elements of a list, for which the function function returns True.
The function filter(f,l) needs a function f as its first argument. f returns a Boolean value, i.e. either True or False. This function will be applied to every element of the list l. Only if f returns True will the element of the list be included in the result.
End of explanation
def far(T):
return ((float(9)/5)* T + 32)
def cel(T):
return (float(5)/9) * (T - 32)
temp = (0,35,90,125)
F = map(far,temp)
temp1 = list(F)
C = map(cel,temp1)
print(temp1)
print(list(C))
temp = (0,35,90,125)
f = map(lambda x: (float(9)/5)*x +32,temp)
f_list = list(f)
c = map(lambda x: (float(5)/9) * (x -32),f_list)
print(f_list)
print(list(c))
Explanation: Map Function
map is a function with 2 arguments.
result = map(function,seq)
The first argument func is the name of a function and the second a sequence (e.g. a list) seq. map() applies the function func to all the elements of the sequence seq.
In Python 3 it returns a lazy map object (an iterator) rather than a list, which is why list() is used in the cells above to materialize the transformed elements.
End of explanation
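A tiny illustration of the point above (an addition, not part of the original notebook): map() hands back a lazy map object, and list() is what materializes the results.
m = map(lambda x: x * 2, [1, 2, 3])
m        # <map object at ...> -- nothing has been computed yet
list(m)  # [2, 4, 6]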
a = [1,2,3,4]
b = [5,6,7,8]
c = [-1,-2,1,2]
list(map(lambda x,y,z:x+y-z,a,b,c))
Explanation: map() can also be applied to more than one list but the lists must have the same length. map() will apply its lambda function to the elements of the argument lists, i.e. it first applies to the elements with the 0th index, then to the elements with the 1st index until the n-th index is reached:
End of explanation
from functools import reduce
reduce(lambda x,y: x+y, [47,23,11,34])
from functools import reduce
reduce(lambda x,y: x if (x > y) else y, [47,12,33,95])
from functools import reduce
reduce(lambda x,y: x * y, range(1,10,2))
Explanation: Reduce Function
reduce(func,seq)
If seq = [s1,s2,s3,...,sn], calling reduce(func,seq) works like this:
* At first the first two elements of seq are passed to func, i.e. func(s1,s2). The list on which reduce() works now looks like this: [func(s1,s2),s3,...,sn]
* In the next step func is applied to the previous result and the third element of the list, i.e. func(func(s1,s2),s3). The list now looks like this: [func(func(s1,s2),s3),...,sn]
* Continue like this until just one element is left; that element is returned as the result of reduce()
End of explanation
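# Added illustration (not in the original): reduce() folds the sequence pairwise from
# the left, so the call below is equivalent to the explicitly nested additions.
from functools import reduce
step_by_step = ((47 + 23) + 11) + 34
folded = reduce(lambda x, y: x + y, [47, 23, 11, 34])
print(step_by_step, folded)  # both are 115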
a = [1,2,3]
b = [4,5,6]
list(zip(a,b))
Explanation: Zip Function
zip() makes an iterator that aggregates elements from each of the iterables.
End of explanation
# Example - 1
x = [1,2,3]
y = [4,5]
list(zip(x,y))
# Example - 2
a = ['a','b']
b = ['c','d','e']
list(zip(a,b))
Explanation: zip() should only be used with unequal-length inputs when you don't care about trailing, unmatched values from the longer iterables. Only as many items as the shortest iterable provides are taken, and any extra elements are ignored
End of explanation
d1 = {'a':1,'b':2}
d2 = {'c':3,'d':4}  # note: distinct keys; a repeated key would silently collapse to one entry
print(list(zip(d1,d2)))
Explanation: Zip with Dictionary
End of explanation
def swapdic(d1,d2):
dout = {}
for d1key,d2val in zip(d1,d2.values()):
dout[d1key] = d2val
return dout
swapdic(d1,d2)
Explanation: This makes sense because simply iterating through the dictionaries will result in just the keys. We would have to call methods to mix keys and values
End of explanation
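# Added illustration (not in the original): iterating a dict yields only its keys;
# .values() and .items() give the values or key/value pairs.
d = {'a': 1, 'b': 2}
print([k for k in d])    # ['a', 'b'] -- keys only
print(list(d.values()))  # [1, 2]
print(list(d.items()))   # [('a', 1), ('b', 2)]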
colors = ['Blue','Black','White','Red']
list(enumerate(colors))
list(enumerate(colors,1))
for index,item in enumerate(colors):
print(index)
print(item)
for i,j in enumerate(colors):
if i == 2:
break
print(j)
Explanation: Enumerate Function
Return an enumerate object. iterable must be a sequence, an iterator, or some other object which supports iteration. Returns a tuple containing a count (from start which defaults to 0) and the values obtained from iterating over iterable.
End of explanation
lst = [True, True, False, True]
all(lst)
any(lst)
Explanation: Any / All
End of explanation
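# Added illustration (not in the original): any()/all() are commonly combined with a
# generator expression over a condition.
nums = [2, 4, 6, 7]
print(all(n % 2 == 0 for n in nums))  # False -- 7 is odd
print(any(n % 2 == 0 for n in nums))  # True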
complex() # if no arguments are given results in 0j
complex(2,4)
complex('2') # if the imaginary part is omitted it defaults to 0j
complex('2','3') # this will raise a TypeError as the second parameter cannot be a string
complex('1+2j')
complex('3',2) # raises a TypeError: complex() cannot take a second argument if the first is a string
Explanation: Complex
complex() returns a complex number with the value real + imag*1j or converts a string or number to a complex number.
End of explanation |
10,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Statement
Step1: IMDB comments dataset has been stored in the following location
Step2: There are 50000 lines in the file. Let's look at the first line
Step3: Total size of the file is 66MB
Step4: Each line is a self-contained JSON doc. Load the dataset using the Spark reader, specifying the file format as json. As we saw above, the file is 66 MB, so we should have at least 2 partitions; since I am using a dual-core system, I will repartition the data to 4. We will also cache the data after repartitioning.
Step5: Find total number of records
Step6: Print Schema and view the field types
Step7: Take a look at a few sample data
Step8: label - column indicates whether the data belongs to the training or test bucket.
sentiment - column indicates whether the comment carries positive or negative sentiment. This column has been manually curated.
Find out how many records there are for each combination of label and sentiment.
Step9: Look at a sample comment value
Step10: Register a UDF to strip the HTML tags from the comments. If BeautifulSoup is not installed, you can install it using pip
(shell command)
$ pip install BeautifulSoup4
Step11: Test the remove_html_tags function
Step12: Apply the UDF on the imdb dataframe.
Step13: Use Tokenizer to split the string into terms, use StopWordsRemover to remove stop words like prepositions, then apply CountVectorizer to find all distinct terms and the count of each term per document.
Step14: The count_vectorized DataFrame contains a column count_vectors that is a SparseVector representing which terms appear and how many times. The keys are indices into the list of all unique terms. You can find the list of terms in count_vectorizer_model.vocabulary. See below.
Step15: The SparseVector has a size of 103999, which means there are 103999 unique terms in the dataset (corpus). Per document, only a few will be present. Find the density of each count_vectors value.
Step16: The density report shows that count_vectors has very low density, which illustrates the benefit of using a SparseVector for this column.
Now, calculate TF-IDF for the documents.
Step17: Apply StringIndexer to convert the sentiment column from String type to a numeric type - this is a prerequisite for applying the LogisticRegression algorithm.
Step18: Split the data into training and testing groups with a 70/30 ratio. Cache the dataframes so that training runs faster.
Step19: Verify that the StringIndexer has done the expected job and that the training and testing data maintain the same ratio of positive and negative records as the whole dataset.
Step20: Apply LogisticRegression classifier
Step21: Show the parameters that the LogisticRegression classifier takes.
Step22: From the training summary find out the cost decay of the model.
Step23: Find area under the curve. Closer to 1 is better
Step24: Find the accuracy of the prediction | Python Code:
spark.sparkContext.uiWebUrl
Explanation: Problem Statement: IMDB Comment Sentiment Classifier
Dataset: For this exercise we will use a dataset hosted at http://ai.stanford.edu/~amaas/data/sentiment/
Problem Statement:
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. Raw text and already processed bag of words formats are provided.
Launch a spark session, verify the spark session UI
End of explanation
!wc -l data/imdb-comments.json
Explanation: IMDB comments dataset has been stored in the following location
End of explanation
!du -sh data/imdb-comments.json
Explanation: There are 50000 lines in the file. Let's look at the first line
End of explanation
!head -n 1 data/imdb-comments.json
Explanation: Total size of the file is 66MB
End of explanation
imdb = spark.read.format("json").load("data/imdb-comments.json").repartition(4).cache()
Explanation: Each line is a self-contained JSON doc. Load the dataset using the Spark reader, specifying the file format as json. As we saw above, the file is 66 MB, so we should have at least 2 partitions; since I am using a dual-core system, I will repartition the data to 4. We will also cache the data after repartitioning.
End of explanation
imdb.count()
Explanation: Find total number of records
End of explanation
imdb.printSchema()
Explanation: Print Schema and view the field types
End of explanation
imdb.show()
Explanation: Take a look at a few sample data
End of explanation
from pyspark.sql.functions import *
from pyspark.sql.types import *
imdb.groupBy("sentiment").pivot("label").count().show()
Explanation: label - column indicates whether the data belongs to the training or test bucket.
sentiment - column indicates whether the comment carries positive or negative sentiment. This column has been manually curated.
Find out how many records there are for each combination of label and sentiment.
End of explanation
content = imdb.sample(False, 0.001, 1).first().content
content
Explanation: Look at a sample comment value
End of explanation
from bs4 import BeautifulSoup
from pyspark.sql.types import *
import re
def remove_html_tags(text):
text = BeautifulSoup(text, "html5lib").text.lower() # strip html tags and lowercase
text = re.sub(r"[\W]+", " ", text) # collapse non-word characters into single spaces
return text
spark.udf.register("remove_html_tags", remove_html_tags, StringType())
Explanation: Register a UDF to strip the HTML tags from the comments. If BeautifulSoup is not installed, you can install it using pip
(shell command)
$ pip install BeautifulSoup4
End of explanation
remove_html_tags(content)
Explanation: Test the remove_html_tags function
End of explanation
imdb_clean = imdb.withColumn("content", expr("remove_html_tags(content)")).cache()
imdb_clean.sample(False, 0.001, 1).first().content
Explanation: Apply the UDF on the imdb dataframe.
End of explanation
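# Hedged alternative sketch (not from the original notebook): the same cleanup can be done
# with a column-level UDF from pyspark.sql.functions instead of spark.udf.register + expr.
from pyspark.sql.functions import udf
remove_html_udf = udf(remove_html_tags, StringType())
# imdb_clean_alt = imdb.withColumn("content", remove_html_udf("content"))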
from pyspark.ml.feature import HashingTF, IDF, Tokenizer, CountVectorizer, StopWordsRemover
tokenizer = Tokenizer(inputCol="content", outputCol="terms")
terms_data = tokenizer.transform(imdb_clean)
print(terms_data.sample(False, 0.001, 1).first().terms)
remover = StopWordsRemover(inputCol="terms", outputCol="filtered")
terms_stop_removed = remover.transform(terms_data)
print(terms_stop_removed.sample(False, 0.001, 1).first().filtered)
count_vectorizer = CountVectorizer(inputCol="filtered", outputCol="count_vectors")
count_vectorizer_model = count_vectorizer.fit(terms_stop_removed)
count_vectorized = count_vectorizer_model.transform(terms_stop_removed)
count_vectorized.sample(False, 0.001, 1).first().count_vectors
Explanation: Use Tokenizer to split the string into terms, use StopWordsRemover to remove stop words like prepositions, then apply CountVectorizer to find all distinct terms and the count of each term per document.
End of explanation
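# Hedged alternative sketch (not from the original notebook): the three stages above can
# also be chained in a single pyspark.ml Pipeline so the fitted stages stay together.
from pyspark.ml import Pipeline
prep_pipeline = Pipeline(stages=[tokenizer, remover, count_vectorizer])
# prep_model = prep_pipeline.fit(imdb_clean)
# count_vectorized_alt = prep_model.transform(imdb_clean)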
print(count_vectorizer_model.vocabulary[:100], "\n\nTotal no of terms", len(count_vectorizer_model.vocabulary))
count_vectorized.show()
Explanation: The count_vectorized DataFrame contains a column count_vectors that is a SparseVector representing which terms appear and how many times. The keys are indices into the list of all unique terms. You can find the list of terms in count_vectorizer_model.vocabulary. See below.
End of explanation
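# Small added check (illustration only): a SparseVector exposes .size, .indices and
# .values, showing which vocabulary indices appear in a document and how often.
sv = count_vectorized.first().count_vectors
print(sv.size)          # vocabulary size
print(sv.indices[:10])  # indices of the first non-zero terms
print(sv.values[:10])   # their counts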
vocab_len = len(count_vectorizer_model.vocabulary)
spark.udf.register("density", lambda r: r.numNonzeros() / vocab_len, DoubleType())
count_vectorized.select(expr("density(count_vectors) density")).show()
Explanation: The SparseVector has a size of 103999, which means there are 103999 unique terms in the dataset (corpus). Per document, only a few will be present. Find the density of each count_vectors value.
End of explanation
idf = IDF(inputCol="count_vectors", outputCol="features")
idf_model = idf.fit(count_vectorized)
idf_data = idf_model.transform(count_vectorized)
idf_data.sample(False, 0.001, 1).first().features
idf_data.printSchema()
Explanation: The density report shows that count_vectors has very low density, which illustrates the benefit of using a SparseVector for this column.
Now, calculate TF-IDF for the documents.
End of explanation
from pyspark.ml.feature import StringIndexer
string_indexer = StringIndexer(inputCol="sentiment", outputCol="sentiment_idx")
string_indexer_model = string_indexer.fit(idf_data)
label_encoded = string_indexer_model.transform(idf_data)
label_encoded.select("sentiment", "sentiment_idx").show()
Explanation: Apply StringIndexer to convert the sentiment column from String type to a numeric type - this is a prerequisite for applying the LogisticRegression algorithm.
End of explanation
training, testing = label_encoded.randomSplit(weights=[0.7, 0.3], seed=1)
training.cache()
testing.cache()
Explanation: Split the data into training and testing groups with a 70/30 ratio. Cache the dataframes so that training runs faster.
End of explanation
training.groupBy("sentiment_idx", "sentiment").count().show()
testing.groupBy("sentiment_idx", "sentiment").count().show()
Explanation: Verify that the StringIndexer has done the expected job and that the training and testing data maintain the same ratio of positive and negative records as the whole dataset.
End of explanation
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(maxIter=10000, regParam=0.1, elasticNetParam=0.0,
featuresCol="features", labelCol="sentiment_idx")
Explanation: Apply LogisticRegression classifier
End of explanation
print(lr.explainParams())
lr_model = lr.fit(training)
lr_model.coefficients[:100]
Explanation: Show the parameters that the LogisticRegression classifier takes.
End of explanation
training_summary = lr_model.summary
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
pd.Series(training_summary.objectiveHistory).plot()
plt.xlabel("Iteration")
plt.ylabel("Cost")
Explanation: From the training summary find out the cost decay of the model.
End of explanation
training_summary.areaUnderROC
predictions = lr_model.transform(testing).withColumn("match", expr("prediction == sentiment_idx"))
predictions.select("prediction", "sentiment_idx", "sentiment", "match").sample(False, 0.01).show(10)
predictions.groupBy("sentiment_idx").pivot("prediction").count().show()
Explanation: Find area under the curve. Closer to 1 is better
End of explanation
accuracy = predictions.select(expr("sum(cast(match as int))")).first()[0] / predictions.count()
accuracy
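# Hedged cross-check (not in the original): Spark's built-in evaluators give the same
# accuracy plus the test-set AUC; the column names follow the model output above.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator, BinaryClassificationEvaluator
acc_eval = MulticlassClassificationEvaluator(
    labelCol="sentiment_idx", predictionCol="prediction", metricName="accuracy")
auc_eval = BinaryClassificationEvaluator(
    labelCol="sentiment_idx", rawPredictionCol="rawPrediction", metricName="areaUnderROC")
print(acc_eval.evaluate(predictions), auc_eval.evaluate(predictions))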
Explanation: Find the accuracy of the prediction
End of explanation |
10,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Data Analysis, 3rd ed
Chapter 10, demo 3
Normal approximation for the Bioassay model.
Step1: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
Step2: Compute the normal approximation density in the grid. Note that this is just for illustration; in a real case we would not need to evaluate this, and we would only use the draws from the normal approximation.
Step3: Compute Pareto smoothed importance sampling weights and Pareto diagnostic
Step4: Importance sampling weights could be used to weight different expectations directly, but for visualisation and easy computation of LD50 histogram, we use resampling importance sampling.
Step5: Create figure with all results | Python Code:
import numpy as np
from scipy import optimize, stats
%matplotlib inline
import matplotlib.pyplot as plt
import arviz as az
import os, sys
# add utilities directory to path
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import psis
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
# apply custom background plotting style
plt.style.use(plot_tools.custom_styles['gray_background'])
# Bioassay data, (BDA3 page 86)
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])
# compute the posterior density in grid
# - usually should be computed in logarithms!
# - with alternative prior, check that range and spacing of A and B
# are sensible
ngrid = 100
A = np.linspace(-4, 8, ngrid)
B = np.linspace(-10, 40, ngrid)
ilogit_abx = 1 / (np.exp(-(A[:,None] + B[:,None,None] * x)) + 1)
p = np.prod(ilogit_abx**y * (1 - ilogit_abx)**(n - y), axis=2)
# sample from the grid
nsamp = 1000
samp_indices = np.unravel_index(
np.random.choice(p.size, size=nsamp, p=p.ravel()/np.sum(p)),
p.shape
)
samp_A = A[samp_indices[1]]
samp_B = B[samp_indices[0]]
# add random jitter, see BDA3 p. 76
samp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
samp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
samp_ld50 = -samp_A / samp_B
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 10, demo 3
Normal approximation for the Bioassay model.
End of explanation
# define the optimised function
def bioassayfun(w):
a = w[0]
b = w[1]
et = np.exp(a + b * x)
z = et / (1 + et)
e = - np.sum(y * np.log(z) + (n - y) * np.log(1 - z))
return e
# initial guess
w0 = np.array([0.0, 0.0])
# optimise
optim_res = optimize.minimize(bioassayfun, w0)
# extract desired results
w = optim_res['x']
S = optim_res['hess_inv']
Explanation: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
End of explanation
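# Optional analytic sketch (added for illustration; the mode above already comes from scipy).
# With a flat prior the gradient and Hessian of the negative log posterior have closed
# forms, so a few Newton steps reproduce essentially the same mode w and covariance S.
def neg_log_post_grad_hess(w_):
    a, b = w_
    z = 1 / (1 + np.exp(-(a + b * x)))
    grad = -np.array([np.sum(y - n * z), np.sum(x * (y - n * z))])
    wgt = n * z * (1 - z)
    hess = np.array([[np.sum(wgt), np.sum(wgt * x)],
                     [np.sum(wgt * x), np.sum(wgt * x**2)]])
    return grad, hess

w_newton = np.zeros(2)
for _ in range(10):
    g, H = neg_log_post_grad_hess(w_newton)
    w_newton = w_newton - np.linalg.solve(H, g)
# w_newton should be close to w, and np.linalg.inv(H) close to S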
# Construct a grid array of shape (ngrid, ngrid, 2) from A and B. Although
# Numpy's concatenation functions do not support broadcasting, a clever trick
# can be applied to overcome this without unnecessary memory copies
# (see Numpy's documentation for strides for more information):
A_broadcasted = np.lib.stride_tricks.as_strided(
A, shape=(ngrid,ngrid), strides=(0, A.strides[0]))
B_broadcasted = np.lib.stride_tricks.as_strided(
B, shape=(ngrid,ngrid), strides=(B.strides[0], 0))
grid = np.dstack((A_broadcasted, B_broadcasted))
p_norm = stats.multivariate_normal.pdf(x=grid, mean=w, cov=S)
# draw samples from the distribution
samp_norm = stats.multivariate_normal.rvs(mean=w, cov=S, size=1000)
Explanation: Compute the normal approximation density in the grid. Note that this is just for illustration; in a real case we would not need to evaluate this, and we would only use the draws from the normal approximation.
End of explanation
lg = stats.multivariate_normal.logpdf(x=samp_norm, mean=w, cov=S)
Ar = samp_norm[:,0]
Br = samp_norm[:,1]
ilogit_abx = 1 / (np.exp(-(Ar[:,None] + Br[:,None] * x)) + 1)
lp = np.sum(np.log(ilogit_abx**y * (1 - ilogit_abx)**(n - y)), axis=1)
lw = lp - lg
lw, pk = psis.psislw(lw)
print("Pareto khat is {:.2}".format(pk))
Explanation: Compute Pareto smoothed importance sampling weights and Pareto diagnostic
End of explanation
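# Added note (standard PSIS guidance, not from the original): khat values above about 0.7
# indicate that the importance sampling estimates may be unreliable.
if pk > 0.7:
    print('Warning: khat > 0.7, importance sampling estimates may be unreliable')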
# resampling importance sampling
pis = np.exp(lw)
nsamp = 1000
samp_indices = np.random.choice(pis.size, size=nsamp, p=pis)
rissamp_A = Ar[samp_indices]
rissamp_B = Br[samp_indices]
# add random jitter, see BDA3 p. 76
rissamp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
rissamp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
rissamp_ld50 = - rissamp_A / rissamp_B
Explanation: Importance sampling weights could be used to weight different expectations directly, but for visualisation and easy computation of LD50 histogram, we use resampling importance sampling.
End of explanation
fig, axes = plt.subplots(3, 3, figsize=(13, 10))
# convert samples to InferenceData for plotting
samples = az.convert_to_inference_data({"A": samp_A, "B": samp_B})
samples_norm = az.convert_to_inference_data({"A": samp_norm[:, 0], "B": samp_norm[:, 1]})
rissamples = az.convert_to_inference_data({"A" : rissamp_A, "B" : rissamp_B})
# plot the posterior density
ax = axes[0, 0]
ax.imshow(
p,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-5, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
ax.set_yticks(np.linspace(0, 30, 4))
# plot the samples
ax = axes[0, 1]
az.plot_pair(samples, marginals=False, ax=ax)
ax.set_xlim([-2, 6])
ax.set_ylim([-5, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.text(0, -2, 'p(beta>0)={:.2f}'.format(np.mean(samp_B>0)))
ax.set_yticks(np.linspace(0, 30, 4))
# plot the histogram of LD50
ax = axes[0, 2]
#ax.hist(samp_ld50, np.linspace(-0.8, 0.8, 31))
az.plot_posterior(samp_ld50, kind="hist", point_estimate=None, hdi_prob="hide", ax=ax)
ax.set_xlim([-0.8, 0.8])
ax.set_title("")
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.5, 0.5, 3))
# plot the posterior density for normal approx.
ax = axes[1, 0]
ax.imshow(
p_norm,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-5, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
ax.set_yticks(np.linspace(0, 30, 4))
# plot the samples from the normal approx.
ax = axes[1, 1]
az.plot_pair(samples_norm, marginals=False, ax=ax)
ax.set_xlim([-2, 6])
ax.set_ylim([-5, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.set_yticks(np.linspace(0, 30, 4))
# Normal approximation does not take into account that the posterior
# is not symmetric and that there is very low density for negative
# beta values. Based on the samples from the normal approximation
# it is estimated that there is about 4% probability that beta is negative!
ax.text(0, -2, 'p(beta>0)={:.2f}'.format(np.mean(samp_norm[:,1]>0)))
# Plot the histogram of LD50
ax = axes[1, 2]
# Since we have strong prior belief that beta should not be negative we can
# improve our normal approximation by conditioning on beta>0.
bpi = samp_norm[:,1] > 0
samp_ld50_norm = - samp_norm[bpi,0] / samp_norm[bpi,1]
az.plot_posterior(samp_ld50_norm, kind="hist", point_estimate=None, hdi_prob="hide", ax=ax)
ax.set_xlim([-0.8, 0.8])
ax.set_title("")
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.5, 0.5, 3))
# plot the samples from the resampling importance sampling
ax = axes[2, 1]
az.plot_pair(rissamples, marginals=False, ax=ax)
ax.set_xlim([-2, 6])
ax.set_ylim([-5, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.set_yticks(np.linspace(0, 30, 4))
# Importance sampling is able to improve the estimate of p(beta>0)
ax.text(0, -2, 'p(beta>0)={:.2f}'.format(np.mean(rissamp_B>0)))
# Plot the histogram of LD50
ax = axes[2, 2]
az.plot_posterior(rissamp_ld50, kind="hist", point_estimate=None, hdi_prob="hide", ax=ax)
ax.set_title("")
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.5, 0.5, 3))
# hide unused subplot
axes[2, 0].axis('off')
fig.tight_layout()
Explanation: Create figure with all results
End of explanation |
10,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In short, a function that multiplies a vector by a random matrix!
$$X_{1\times n}R_{n\times k}=C_{1\times k}$$, where R is a random matrix
Step1: Extracting data from a file!
Links | Python Code:
import numpy as np

def mm(x, k):
    # if x arrives as a column vector, transpose it into a row vector
    if x.shape[0] > 1:
        x = x.T
    # random matrix R of shape (n, k)
    r = np.random.rand(x.shape[1], k)
    print(r)
    # X (1 x n) dot R (n x k) -> C (1 x k)
    return x.dot(r)
mm(np.array([[1, 2, 1, 132, 1, 2]]), 5)
Explanation: In short, a function that multiplies a vector by a random matrix!
$$X_{1\times n}R_{n\times k}=C_{1\times k}$$, where R is a random matrix
$$\begin{pmatrix}x_{11}&x_{12}&x_{13}&x_{14}&x_{15}\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&b_{13}&b_{14}\\b_{21}&b_{22}&b_{23}&b_{24}\\b_{31}&b_{32}&b_{33}&b_{34}\\b_{41}&b_{42}&b_{43}&b_{44}\\b_{51}&b_{52}&b_{53}&b_{54}\end{pmatrix}=\begin{pmatrix}c_{11}&c_{12}&c_{13}&c_{14}\end{pmatrix}$$
https://www.latex4technics.com/
End of explanation
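# Quick added check: for a 1 x n input and k columns the result is 1 x k,
# matching X_{1 x n} R_{n x k} = C_{1 x k}.
out = mm(np.array([[1, 2, 3]]), 4)
print(out.shape)  # (1, 4)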
# X_train, X_test, y_train, y_test are assumed to have been defined earlier (e.g. by a train/test split)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import scale
X_train_draw = scale(X_train[::, 0:2])
X_test_draw = scale(X_test[::, 0:2])
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X_train_draw, y_train)
x_min, x_max = X_train_draw[:, 0].min() - 1, X_train_draw[:, 0].max() + 1
y_min, y_max = X_train_draw[:, 1].min() - 1, X_train_draw[:, 1].max() + 1
h = 0.02
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
pred = clf.predict(np.c_[xx.ravel(), yy.ravel()])
pred = pred.reshape(xx.shape)
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
plt.figure()
plt.pcolormesh(xx, yy, pred, cmap=cmap_light)
plt.scatter(X_train_draw[:, 0], X_train_draw[:, 1],
c=y_train, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("Score: %.0f percents" % (clf.score(X_test_draw, y_test) * 100))
plt.show()
x = np.eye(1, 18)
x[0][12] = 5
print(x)
Explanation: Extracting data from a file!
Links:
https://habrahabr.ru/company/wunderfund/blog/316826/
http://www.karsdorp.io/python-course/
https://docs.scipy.org/doc/numpy/reference/index.html
https://stepik.org/lesson/NumPy-%D0%BE%D1%81%D0%BD%D0%BE%D0%B2%D1%8B-16462/step/8?course=%D0%9D%D0%B5%D0%B9%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B5-%D1%81%D0%B5%D1%82%D0%B8&unit=4283
End of explanation |