Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
11,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Factor Network Functions
Messenger Class
Performs transformations on data.
eg. f(x) -> y
Decoupled from the other factor network code, and can be swapped with other implementations
Step13: Factor Network Class
Maintains an internal representation of the factor network as a graph and provides functionality to manipulate state.
eg.
$ git add node
$ git commit node
$ git merge node_a node_b
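A rough sketch of how this git analogy maps onto the FactorNetwork API defined below (the ids and index name are taken from the workflow later in this notebook):
factor = FactorNetwork(Messenger=Messenger)
factor.register_node('67480505', 'phone')       # ~ git add node
factor.commit('factor_state2016', 'user')       # ~ git commit node
factor.merge('factor_state2016', 'user', '67480505_1', 'user', '67480505_2')  # ~ git merge node_a node_b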
Step14: Workflow
Initialize Network
Step15: Factor Network Under the Hood
Step16: Factor Networks
Step17: Looking at Elasticsearch | Python Code:
# Imports assumed by this cell (the notebook's original import cell is not shown here).
# `memoize` is taken from toolz as a guess; any memoizing decorator would do.
import itertools
import collections
import networkx as nx
from functools import reduce
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
from elasticsearch_dsl.connections import connections
from elasticsearch_dsl.query import Ids
from toolz import memoize

class Messenger:
    def __init__(self, config='cdr', size=2000):
        """
        :param config: str
            Alias of the elasticsearch connection to use
        :param size: int
            Size limit to set on elasticsearch query
        """
        self.conn = connections.get_connection(config)
        self.elastic = Search('cdr', extra={'size': size})
def match(self, match_type, **kwargs):
return self.elastic.query(match_type, **kwargs).execute()
@memoize
    def available(self, ad_id):
        """
        Gets the available factors for a particular ad

        :param ad_id: str
            Unique ad identifier
        :return: factors
        :rtype: list
        """
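        # Union-reduce: fold the key set of every hit into one set of available fields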
accumulator = lambda x,y: x|y
output = self.match('match_phrase', _id=ad_id)
keys = [
set(i['_source'].keys())
for i in output.hits.hits
]
return list(reduce(accumulator, keys, set()))
    def lookup(self, ad_id, field):
        """
        Get data from ad_id

        :param ad_id: str
            String to be queried
        """
if not isinstance(ad_id, list):
ad_id = [ad_id]
results = self.elastic.query(Ids(values=ad_id)).execute()
return set(flatten([
hits['_source'][field] for hits in results.hits.hits
if field in hits['_source']
]))
    def reverse_lookup(self, field, field_value):
        """
        Get ad_id from a specific field and search term

        :param field_value: str
            String to be queried
        """
results = self.match(
'match_phrase', **{field:field_value}).hits.hits
if not results:
results = self.match('match', _all=field_value).hits.hits
return [hit['_id'] for hit in results]
    def suggest(self, ad_id, field):
        """
        The suggest function suggests other ad_ids that share this
        field with the input ad_id.
        """
suggestions = {}
field_values = self.lookup(ad_id, field)
for value in field_values:
ads = set(self.reverse_lookup(field, value))
# To prevent cycles
if isinstance(ad_id, list):
ads -= set(ad_id)
else:
ads.discard(ad_id)
suggestions[value] = list(ads)
return suggestions
def flatten(nested):
return (
[x for l in nested for x in flatten(l)]
if isinstance(nested, list) else
[nested]
)
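# Hedged usage sketch (assumes a reachable elasticsearch instance; the ad id below is
# the root_node used later in this notebook):
#   messenger = Messenger()
#   messenger.available('67480505')         # -> list of fields present on that ad
#   messenger.lookup('67480505', 'phone')   # -> set of phone values found on the ad
#   messenger.suggest('67480505', 'phone')  # -> {phone_value: [other ad ids sharing it]}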
Explanation: Factor Network Functions
Messenger Class
Performs transformations on data.
eg. f(x) -> y
Decoupled from the other factor network code, and can be swapped with other implementations
End of explanation
class FactorNetwork:
    """
    Factor Network Constructor
    ==========================

    Manager class for initializing and
    handling state in a factor network
    """
    def __init__(self, Messenger=Messenger, **kwargs):
        """
        :param Messenger:
            A class constructor following the suggestion
            interface
        :param kwargs:
            Keyword arguments fed into constructor
            to initialize local network object
        """
self.messenger = Messenger()
self.G = nx.DiGraph(**kwargs)
def __repr__(self):
nodes = nx.number_of_nodes(self.G)
edges = nx.number_of_edges(self.G)
return '{nm}(nodes={nodes}, edges={edges})'.format(
nm=self.__class__.__name__,
nodes=nodes,
edges=edges,
)
    def get_graph(self, node, factor, **kwargs):
        """
        Create the networkx graph representation

        :param node: str
            Document ID of the root node
        :param factor: str
            A type of factor to query
        :param kwargs:
            Keyword arguments fed into constructor
            to initialize local network object
        """
G, node = nx.DiGraph(**kwargs), str(node)
G.add_node(node, {'type': 'doc'})
self.messenger.lookup(node, factor)
message = self.messenger.suggest(node, factor)
for value, keys in message.items():
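            # zip_longest pads the single root node across all suggested ad ids,
            # yielding one (root, suggested_ad) edge per suggestion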
edgelist = itertools.zip_longest([node], keys, fillvalue=node)
metadata = {'value': value, 'factor': factor}
G.add_edges_from(edgelist, **metadata)
return G
def register_node(self, node, factor):
node = str(node)
self.G = nx.compose(self.G, self.get_graph(node, factor))
    def to_dict(self):
        """Serialize graph edges back into JSON"""
d = collections.defaultdict(list)
for leaf, node in nx.edges(self.G):
d[node].append(leaf)
return dict(d)
def show(self):
nx.draw_networkx(self.G,
pos=nx.layout.fruchterman_reingold_layout(self.G),
with_labels=False,
node_size=100,
)
    def commit(self, index_name, user_name):
        """
        Commit the current state of the factor network to a local Elastic instance.

        The index_name should remain constant for an organization. The user_name
        refers to the specific user; it is used as the Elastic document type, which
        preserves user provenance.

        Specifically, the state is split into 3 components:
          (1) root:        the datum with which you started
          (2) extension:   the data you've confirmed based on factor network suggestions
          (3) suggestions: the suggested extensions to your data

        A factor network is indexed by taking the root id and appending "_x" to it.
        We loop through get requests on that particular lead to find the most recently
        committed root_x, then add 1 to x.

        The result of the commit will look as follows in Elastic:
        {
            "_index": "Your_Index_Name",
            "_type": "adam",
            "_id": "rootid_x",
            "_score": 1,
            "_source": {
                "root": [[0,1],[0,7],...],
                "extension": [[1,2],[2,3],...],
                "suggestions": [[3,4],[...],...]
            }
        }
        """
es = Elasticsearch()
source = set()
target = set()
edges = self.G.edges()
for edge in edges:
source.add(edge[0])
target.add(edge[1])
def split(intersection, edges):
result = []
for i in intersection:
for edge in edges:
if i in edge:
result.append(edge)
return result
state = {}
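        # Classify edges by their position in the graph: roots only ever appear as
        # sources, extensions appear as both source and target, suggestions only as targets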
state["root"] = split(source.difference(target), edges)
state["extension"] = split(target.intersection(source), edges)
state["suggestions"] = split(target.difference(source), edges)
i = 1
preexisting = True
while preexisting:
try:
index_id = state["root"][0][0] + "_" + str(i)
es.get(index=index_name, id=index_id, doc_type=user_name)
i = i + 1
            except Exception:
                # The get request fails once root_i does not exist yet, so this
                # index_id is free to use for the new commit
                preexisting = False
res = es.index(index=index_name, id=index_id, doc_type=user_name, body=state)
current_state = es.get(index=index_name, id=index_id, doc_type=user_name)
return current_state
    def unpack_state_to_graph(self, index_name, user_name, index_id):
        """
        Get request to Elastic to return the graph without the
        lead/extension/suggestions differentiator
        """
es = Elasticsearch()
edges = []
current_state = es.get(index=index_name, id=index_id, doc_type=user_name)
for k, v in current_state["_source"].items():
for edge in v:
edges.append(edge)
G = nx.DiGraph()
G.add_edges_from(edges)
return G
    def merge(self, index_name, user_name_a, index_id_a, user_name_b, index_id_b):
        """Merge two factor states"""
# state_a = es.get(index=index_name, index_id_a, doc_type=user_name_a)
# state_b = es.get(index=index_name, index_id_b, doc_type=user_name_b)
G_a = set(self.unpack_state_to_graph(index_name, user_name_a, index_id_a).edges())
        G_b = set(self.unpack_state_to_graph(index_name, user_name_b, index_id_b).edges())
network = {}
network["intersection"] = G_a.intersection(G_b)
network["workflow_a"] = G_a.difference(G_b)
network["workflow_b"] = G_b.difference(G_a)
n_edges = len(network["intersection"]) + len(network["workflow_a"]) + len(network["workflow_b"])
network["merge_stats"] = {}
network["merge_stats"]["intersection"] = round(len(network["intersection"])/n_edges, 2)
network["merge_stats"]["workflow_a"] = round(len(network["workflow_a"])/n_edges, 2)
network["merge_stats"]["workflow_b"] = round(len(network["workflow_b"])/n_edges, 2)
        return network
Explanation: Factor Network Class
Maintains an internal representation of the factor network as a graph and provides functionality to manipulate state.
eg.
$ git add node
$ git commit node
$ git merge node_a node_b
End of explanation
username = 'user'
root_node = '67480505'
factor = FactorNetwork(Messenger=Messenger)
Explanation: Workflow
Initialize Network
End of explanation
factor.messenger.lookup(root_node, 'phone')
factor.messenger.reverse_lookup('phone', '5023030050')
Explanation: Factor Network Under the Hood:
Lookup <-> Reverse Lookups
End of explanation
factor.show()
factor.register_node(root_node, 'phone')
factor.commit('factor_state2016', username)
factor.show()
factor.G.adj
factor.messenger.lookup(factor.G.nodes(), 'phone')
factor.messenger.suggest(factor.messenger.reverse_lookup('phone', '6042603774'), 'phone')
suggestions = factor.messenger.reverse_lookup('6042603774', 'phone')
set(suggestions) & set(factor.messenger.reverse_lookup('5874387758', 'phone'))
factor.register_node('32355127', 'phone')
factor.show()
factor.commit('factor_state2016', username)
factor.show()
root_node
factor.merge('factor_state2016', username, root_node+'_1', username, root_node+'_2')
Explanation: Factor Networks
End of explanation
es = connections.get_connection('local')
response = Search(es).query().execute()
[i['_source'] for i in response.hits.hits]
Explanation: Looking at Elasticsearch
End of explanation |
11,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1: Write a function that computes the square root of the mean of n random integers, where the integers range between m and k.
Step1: Exercise 2: Write a function that draws n random integers between m and k and computes the sum of log(random integer) as well as the sum of 1/log(random integer).
Step2: Exercise 3: Write a function that computes s = a + aa + aaa + aaaa + ... + aa...a, where a is a random integer in [1, 9]. For example, 2+22+222+2222+22222 (here 5 numbers are added); the number of terms to add is entered from the keyboard.
Step3: Challenge exercise: Turn the number-guessing game around so that the user picks an arbitrary integer and the computer does the guessing. | Python Code:
import random,math
def Square():
m=int(input('plz input the min '))
k=int(input('plz input the max '))
n=int(input('plz input n : '))
i=0
total=0
while i<n:
i+=1
temp=random.randint(m,k)
total+=temp
print (math.sqrt(total/n))
Square()
Explanation: Exercise 1: Write a function that computes the square root of the mean of n random integers, where the integers range between m and k.
End of explanation
import random,math
def total():
m=int(input('plz input the min '))
k=int(input('plz input the max '))
n=int(input('plz input n :'))
i=0
Sum1=0
Sum2=0
while i<n:
i+=1
temp=random.randint(m,k)
Sum1+=math.log10(temp)
Sum2+=1/(math.log10(temp))
print(Sum1,Sum2)
total()
Explanation: Exercise 2: Write a function that draws n random integers between m and k and computes the sum of log(random integer) as well as the sum of 1/log(random integer).
End of explanation
import random,math
def Sum():
a=random.randint(1,9)
n=int(input('plz input n: ' ))
i=0
Sum=0
total=0
while i<n:
i+=1
temp=10**(i-1)
Sum=Sum+temp*a
total+=Sum
print(a)
print(total)
Sum()
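# Quick check of the intended behaviour (assuming a=2 and n=5): 2 + 22 + 222 + 2222 + 22222 = 24690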
Explanation: Exercise 3: Write a function that computes s = a + aa + aaa + aaaa + ... + aa...a, where a is a random integer in [1, 9]. For example, 2+22+222+2222+22222 (here 5 numbers are added); the number of terms to add is entered from the keyboard.
End of explanation
import random,math
def guess():
n=int(input('plz input n (n:1-10) : '))
r=random.randint(1,10)
print(r)
an=int(input('''
1,bigger
2,smaller
3,u win
'''
))
if(an==3):
print('u win!')
else:
while an!=3:
if(an==1):
print(r)
an=int(input('''
1,bigger
2,smaller
3,u win
'''
))
r=random.randint(1,r)
            elif(an==2):
print(r)
an=int(input('''
1,bigger
2,smaller
3,u win
'''
))
r=random.randint(r,10)
guess()
Explanation: Challenge exercise: Turn the number-guessing game around so that the user picks an arbitrary integer and the computer does the guessing.
End of explanation |
11,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will
Step1: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
Step2: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: We will be using the same 4 categorical features as in the previous assignment
Step4: Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion
Step5: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
Step6: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture
Step7: Quiz Question
Step8: Quiz Question
Step9: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
Step10: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
Step11: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2
Step12: Here is a function to count the nodes in your tree
Step13: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step14: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning
Step15: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
Step16: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
Step17: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
Step18: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step19: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
Step20: Quiz Question
Step21: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
Step22: Now, evaluate the validation error using my_decision_tree_old.
Step23: Quiz Question
Step24: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data
Step25: Now evaluate the classification error on the validation data.
Step26: Quiz Question
Step27: Compute the number of nodes in model_1, model_2, and model_3.
Step28: Quiz Question
Step29: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
Step30: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
Step31: Quiz Question
Step32: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
Step33: Using the count_leaves function, compute the number of leaves in each of the models (model_7, model_8, and model_9). | Python Code:
import numpy as np
import pandas as pd
import json
Explanation: Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will:
Implement binary decision trees with different early stopping methods.
Compare models with different stopping parameters.
Visualize the concept of overfitting in decision trees.
Let's get started!
End of explanation
loans = pd.read_csv('lending-club-data.csv')
loans.head(2)
Explanation: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
End of explanation
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', axis=1)
Explanation: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
End of explanation
categorical_variables = []
for feat_name, feat_type in zip(loans.columns, loans.dtypes):
if feat_type == object:
categorical_variables.append(feat_name)
for feature in categorical_variables:
loans_one_hot_encoded = pd.get_dummies(loans[feature],prefix=feature)
loans_one_hot_encoded.fillna(0)
#print loans_one_hot_encoded
loans = loans.drop(feature, axis=1)
for col in loans_one_hot_encoded.columns:
loans[col] = loans_one_hot_encoded[col]
print loans.head(2)
print loans.columns
loans.iloc[122602]
Explanation: Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
with open('module-6-assignment-train-idx.json') as train_data_file:
train_idx = json.load(train_data_file)
with open('module-6-assignment-validation-idx.json') as validation_data_file:
validation_idx = json.load(validation_data_file)
print train_idx[:3]
print validation_idx[:3]
print len(train_idx)
print len(validation_idx)
train_data = loans.iloc[train_idx]
validation_data = loans.iloc[validation_idx]
print len(loans.dtypes )
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
if len(data) <= min_node_size:
return True
else:
return False
Explanation: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
Reached a maximum depth. (set by parameter max_depth).
Reached a minimum node size. (set by parameter min_node_size).
Don't split if the gain in error reduction is too small. (set by parameter min_error_reduction).
For the rest of this assignment, we will refer to these three as early stopping conditions 1, 2, and 3.
Early stopping condition 1: Maximum depth
Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions.
We will be reusing code from the previous assignment and then building upon this. We will alert you when you reach a function that was part of the previous assignment so that you can simply copy and past your previous code.
Early stopping condition 2: Minimum node size
The function reached_minimum_node_size takes 2 arguments:
The data (from a node)
The minimum number of data points that a node is allowed to split on, min_node_size.
This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return error_before_split - error_after_split
Explanation: Quiz Question: Given an intermediate node with 6 safe loans and 3 risky loans, if the min_node_size parameter is 10, what should the tree learning algorithm do next?
STOP
Early stopping condition 3: Minimum gain in error reduction
The function error_reduction takes 2 arguments:
The error before a split, error_before_split.
The error after a split, error_after_split.
This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
safe_loan = (labels_in_node==1).sum()
# Count the number of -1's (risky loans)
## YOUR CODE HERE
risky_loan = (labels_in_node==-1).sum()
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
return min(safe_loan, risky_loan)
Explanation: Quiz Question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?
STOP
Grabbing binary decision tree helper functions from past assignment
Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree.
Please copy and paste your code for intermediate_node_num_mistakes here.
End of explanation
def best_splitting_feature(data, features, target):
target_values = data[target]
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature = feature
best_error = error
return best_feature # Return the best feature we found
Explanation: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
Explanation: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
End of explanation
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size): ## YOUR CODE HERE
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values) ## YOUR CODE HERE
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target]) ## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target]) ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values) ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2: minimum node size:
Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument.
Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Implementing early stopping condition 3: minimum error reduction:
Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
Step 1: Calculate the classification error before splitting. Recall that classification error is defined as:
$$
\text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}}
$$
* Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* Step 3: Use the function error_reduction that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). Don't forget to use that argument.
* Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
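For example, reusing the numbers from the quiz questions above: a node with 6 safe and 3 risky loans has a classification error of 3/9 (about 0.33) before splitting; if a split leaves 2 total mistakes across its two children, the error after the split is 2/9 (about 0.22), an error reduction of roughly 0.11, which would pass min_error_reduction = 0.0 but not a threshold of 0.2.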
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
features = list(train_data.columns)
features.remove('safe_loans')
print list(train_data.columns)
print features
Explanation: Here is a function to count the nodes in your tree:
End of explanation
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 7'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
Explanation: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning: This code block may take a minute to learn.
End of explanation
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
Explanation: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
End of explanation
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
### YOUR CODE HERE
return classify(tree['right'], x, annotate)
Explanation: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
End of explanation
validation_data.iloc[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_data.iloc[0])
Explanation: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
End of explanation
classify(my_decision_tree_new, validation_data.iloc[0], annotate = True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
End of explanation
classify(my_decision_tree_old, validation_data.iloc[0], annotate = True)
Explanation: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
End of explanation
def evaluate_classification_error(tree, data, target):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x), axis=1)
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
return (data[target] != np.array(prediction)).values.sum() / float(len(data))
Explanation: Quiz Question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
shorter
Quiz Question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
shorter or the same
Quiz Question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction?
6
Evaluating the model
Now let us evaluate the model that we have trained. You implemented this evaluation in the function evaluate_classification_error from the previous assignment.
Please copy and paste your evaluate_classification_error code here.
End of explanation
evaluate_classification_error(my_decision_tree_new, validation_data, target)
Explanation: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
End of explanation
evaluate_classification_error(my_decision_tree_old, validation_data, target)
Explanation: Now, evaluate the validation error using my_decision_tree_old.
End of explanation
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
min_node_size = 0, min_error_reduction=-1)
Explanation: Quiz Question: Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?
lower
Exploring the effect of max_depth
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
model_1: max_depth = 2 (too small)
model_2: max_depth = 6 (just right)
model_3: max_depth = 14 (may be too large)
For each of these three, we set min_node_size = 0 and min_error_reduction = -1.
Note: Each tree can take up to a few minutes to train. In particular, model_3 will probably take the longest to train.
End of explanation
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data, target)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data, target)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data, target)
Explanation: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data:
End of explanation
print "Validation data, classification error (model 1):", evaluate_classification_error(model_1, validation_data, target)
print "Validation data, classification error (model 2):", evaluate_classification_error(model_2, validation_data, target)
print "Validation data, classification error (model 3):", evaluate_classification_error(model_3, validation_data, target)
Explanation: Now evaluate the classification error on the validation data.
End of explanation
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
Explanation: Quiz Question: Which tree has the smallest error on the validation data? model 3
Quiz Question: Does the tree with the smallest error in the training data also have the smallest error in the validation data? yes
Quiz Question: Is it always true that the tree with the lowest classification error on the training set will result in the lowest classification error in the validation set? no
Measuring the complexity of the tree
Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as
complexity(T) = number of leaves in the tree T
Here, we provide a function count_leaves that counts the number of leaves in a tree. Using this implementation, compute the number of nodes in model_1, model_2, and model_3.
End of explanation
print "number of leaves in model_1 is : {}".format(count_leaves(model_1))
print "number of leaves in model_2 is : {}".format(count_leaves(model_2))
print "number of leaves in model_3 is : {}".format(count_leaves(model_3))
Explanation: Compute the number of nodes in model_1, model_2, and model_3.
End of explanation
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5)
Explanation: Quiz Question: Which tree has the largest complexity?
model_3
Quiz Question: Is it always true that the most complex tree will result in the lowest classification error in the validation_set?
no
Exploring the effect of min_error
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (negative, just right, and too positive).
Train three models with these parameters:
1. model_4: min_error_reduction = -1 (ignoring this early stopping condition)
2. model_5: min_error_reduction = 0 (just right)
3. model_6: min_error_reduction = 5 (too positive)
For each of these three, we set max_depth = 6, and min_node_size = 0.
Note: Each tree can take up to 30 seconds to train.
End of explanation
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_data, target)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_data, target)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_data, target)
Explanation: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
End of explanation
print "number of leaves in model_4 is : {}".format(count_leaves(model_4))
print "number of leaves in model_5 is : {}".format(count_leaves(model_5))
print "number of leaves in model_6 is : {}".format(count_leaves(model_6))
Explanation: Using the count_leaves function, compute the number of leaves in each of the models (model_4, model_5, and model_6).
End of explanation
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1)
Explanation: Quiz Question: Using the complexity definition above, which model (model_4, model_5, or model_6) has the largest complexity?
Did this match your expectation?
model_4
Quiz Question: model_4 and model_5 have similar classification error on the validation set but model_5 has lower complexity. Should you pick model_5 over model_4?
model_5
Exploring the effect of min_node_size
We will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
1. model_7: min_node_size = 0 (too small)
2. model_8: min_node_size = 2000 (just right)
3. model_9: min_node_size = 50000 (too large)
For each of these three, we set max_depth = 6, and min_error_reduction = -1.
Note: Each tree can take up to 30 seconds to train.
End of explanation
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_data, target)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_data, target)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_data, target)
Explanation: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
End of explanation
print "number of leaves in model_7 is : {}".format(count_leaves(model_7))
print "number of leaves in model_8 is : {}".format(count_leaves(model_8))
print "number of leaves in model_9 is : {}".format(count_leaves(model_9))
Explanation: Using the count_leaves function, compute the number of leaves in each of the models (model_7, model_8, and model_9).
End of explanation |
11,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with Grid Search (scikit-learn)
<a href="https
Step1: This example features
Step2: Imports
Step3: Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
Prepare Data
Step4: Prepare Hyperparameters
Step5: Instantiate Client
Step6: Train Models
Step7: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve Best Run
Step8: Train on Full Dataset
Step9: Calculate Accuracy on Full Training Set
Step10: Deployment and Live Predictions
This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
Step11: Prepare "Live" Data
Step12: Deploy Model
Step13: Query Deployed Model | Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
Explanation: Logistic Regression with Grid Search (scikit-learn)
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-end-to-end.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
Explanation: This example features:
- scikit-learn's LinearRegression model
- verta's Python client logging grid search results
- verta's Python client retrieving the best run from the grid search to calculate full training accuracy
- predictions against a deployed model
End of explanation
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
Explanation: Imports
End of explanation
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.detect_filename(test_data_url)
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
Explanation: Log Workflow
This section demonstrates logging model metadata and training artifacts to ModelDB.
Prepare Data
End of explanation
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
Explanation: Prepare Hyperparameters
End of explanation
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
Explanation: Instantiate Client
End of explanation
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, model.predict(X_train))
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log Git information as code version
run.log_code()
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
Explanation: Train Models
End of explanation
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
Explanation: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve Best Run
End of explanation
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
Explanation: Train on Full Dataset
End of explanation
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: Calculate Accuracy on Full Training Set
End of explanation
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
Explanation: Deployment and Live Predictions
This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
End of explanation
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
Explanation: Prepare "Live" Data
End of explanation
run.deploy(wait=True)
run
Explanation: Deploy Model
End of explanation
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
Explanation: Query Deployed Model
End of explanation |
11,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test suite for Jupyter-notebook
Sample example of use of PyCOMPSs from Jupyter
First step
Import ipycompss library
Step1: Second step
Initialize COMPSs runtime
Parameters indicate whether the execution will generate a task graph, a trace file, monitoring information at the given interval, and debug information. The taskCount parameter is a workaround for the dot generation of the legend
Step2: Third step
Import task module before annotating functions or methods
Step3: Fourth step
Declare functions and decorate with @task those that should be tasks
Step4: Fifth step
Invoke tasks
Step5: Sixth step
Import compss_wait_on module and synchronize tasks
Step6: Only those results being synchronized with compss_wait_on will have a valid value
Step7: Stop COMPSs runtime. All data will be synchronized in the main program
Step8: CHECK THE RESULTS FOR THE TEST | Python Code:
import pycompss.interactive as ipycompss
Explanation: Test suite for Jupyter-notebook
Sample example of use of PyCOMPSs from Jupyter
First step
Import ipycompss library
End of explanation
ipycompss.start(graph=True, trace=True, debug=True, project_xml='../project.xml', resources_xml='../resources.xml', comm='GAT')
Explanation: Second step
Initialize COMPSs runtime
Parameters indicate whether the execution will generate a task graph, a trace file, monitoring information at the given interval, and debug information. The taskCount parameter is a workaround for the dot generation of the legend
End of explanation
from pycompss.api.task import task
Explanation: Third step
Import task module before annotating functions or methods
End of explanation
@task(returns=int)
def test(val1):
return val1 * val1
@task(returns=int)
def test2(val2, val3):
return val2 + val3
Explanation: Fourth step
Declare functions and decorate with @task those that should be tasks
End of explanation
a = test(2)
b = test2(a, 5)
Explanation: Fifth step
Invoke tasks
End of explanation
from pycompss.api.api import compss_wait_on
result = compss_wait_on(b)
Explanation: Sixth step
Import compss_wait_on module and synchronize tasks
End of explanation
print("Results: ")
print("a: ", a)
print("b: ", b)
print("result: ", result)
Explanation: Only those results being synchronized with compss_wait_on will have a valid value
End of explanation
ipycompss.stop(sync=True)
print("Results after stopping PyCOMPSs: ")
print("a: ", a)
print("b: ", b)
print("result: ", result)
Explanation: Stop COMPSs runtime. All data will be synchronized in the main program
End of explanation
from pycompss.runtime.binding import Future
if a == 4 and isinstance(b, Future) and result == 9:
print("RESULT=EXPECTED")
else:
print("RESULT=UNEXPECTED")
Explanation: CHECK THE RESULTS FOR THE TEST
End of explanation |
11,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting house prices using k-nearest neighbors regression
In this notebook, we will implement k-nearest neighbors regression. You will
Step1: Unzipping files with house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
Step2: Load house sales data
Step3: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the DataFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
Step4: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step5: Split data into training, test, and validation sets
Step6: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation DataFrames into Numpy arrays
Step7: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT
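A minimal sketch of this normalization step (the variable and function names follow the sections above; treat it as illustrative rather than the exact solution):
features_train, norms = normalize_features(features_train)  # normalize training columns to unit norm
features_test = features_test / norms    # apply the TRAINING norms to the test set
features_valid = features_valid / norms  # and to the validation set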
Step8: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
Step9: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step10: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note
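One way to compute this single distance with basic Numpy operations, as a sketch (assuming the normalized arrays from above):
single_distance = np.sqrt(np.sum((features_test[0] - features_train[9]) ** 2))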
Step11: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0
Step12: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
Step13: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0
Step14: The subtraction operator (-) in Numpy is vectorized as follows
Step15: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step16: Aside
Step17: To test the code above, run the following cell, which should output a value -0.0934339605842
Step18: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
Step19: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint
Step20: To test the code above, run the following cell, which should output a value 0.0237082324496
Step21: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step22: QUIZ QUESTIONS
Q1. Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
Step23: Q2. What is the predicted value of the query house based on 1-nearest neighbor regression?
Step24: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint
Step25: QUIZ QUESTION
Take the query house to be the third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
Step26: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters
Step27: QUIZ QUESTION
Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
Step28: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Step29: Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters
Step30: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
Q1. What is the index of the house in this query set that has the lowest predicted value?
Step31: Q2. What is the predicted value of this house?
Step32: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following
Step33: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value
Step34: QUIZ QUESTION
What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set. | Python Code:
import os
import zipfile
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, we will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Importing Libraries
End of explanation
# Put the files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_data_small.csv','kc_house_data_small_train.csv',
'kc_house_data_small_validation.csv', 'kc_house_data_small_test.csv' ]
# If an unzipped file is not in files_list, extract it from the corresponding zip archive
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close()
Explanation: Unzipping files with house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
# Defining a dict with the data type for each feature
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float,
'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float,
'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int,
'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data_small.csv', dtype=dtype_dict)
Explanation: Load house sales data
End of explanation
def get_numpy_data(input_df, features, output):
input_df['constant'] = 1.0 # Adding column 'constant' to input DataFrame with all values = 1.0
features = ['constant'] + features # Adding 'constant' to the list of features
feature_matrix = input_df.as_matrix(columns=features) # Convert the DataFrame columns in the features list into an np.ndarray
output_array = input_df[output].values # Convert column with output feature into np.array
return(feature_matrix, output_array)
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the DataFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
normalized_features = feature_matrix/norms
return (normalized_features, norms)
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
train_data = pd.read_csv('kc_house_data_small_train.csv', dtype=dtype_dict)
test_data = pd.read_csv('kc_house_data_small_test.csv', dtype=dtype_dict)
validation_data = pd.read_csv('kc_house_data_validation.csv', dtype=dtype_dict)
Explanation: Split data into training, test, and validation sets
End of explanation
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train_data, feature_list, 'price')
features_test, output_test = get_numpy_data(test_data, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation_data, feature_list, 'price')
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation DataFrames into Numpy arrays:
End of explanation
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
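As a quick, optional sanity check (illustrative only), the training columns should now have unit 2-norm, while the test and validation columns -- which were divided by the training norms -- should be close to, but generally not exactly, 1:
print np.linalg.norm(features_train, axis=0)[:3] # should print ones
print np.linalg.norm(features_test, axis=0)[:3] # close to 1, but not exactly 1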
End of explanation
print features_test[0]
print len(features_test[0])
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
print features_train[9]
print len(features_train[9])
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
dist_euclid = np.sqrt( np.sum( (features_train[9] - features_test[0] )**2 ) )
print dist_euclid
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
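(Purely as a cross-check of the result above -- not as the graded expression -- np.linalg.norm of the difference vector should agree with the np.sqrt/np.sum computation:)
print np.linalg.norm(features_train[9] - features_test[0]) # should equal dist_euclid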
End of explanation
# Setting the first house as the NN
min_euclid_dist = np.sqrt( np.sum( ( features_train[0] - features_test[0] )**2 ) )
min_house_index = 0
for i in range(1,10,1):
curr_euclid_dist = np.sqrt( np.sum( ( features_train[i] - features_test[0] )**2 ) )
# If distance of current house < current NN, update the NN
if curr_euclid_dist<min_euclid_dist:
min_euclid_dist = curr_euclid_dist
min_house_index = i
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. By restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
print 'House', min_house_index + 1
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
Explanation: It is computationally inefficient to compute the distance to every house in the training set with an explicit loop. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
print features_train[0:3] - features_test[0]
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
diff = features_train[:] - features_test[0]
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
distances = np.sqrt( np.sum(diff**2, axis=1) )
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
print distances[100] # Euclidean distance between the query house and the 101st training house
# should print 0.0237082324496
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
def compute_distances(features_instances, features_query):
diff = features_instances[:] - features_query
distances = np.sqrt( np.sum(diff**2, axis=1) )
return distances
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
End of explanation
dist_Q1 = compute_distances(features_train, features_test[2])
index_NN = np.where(dist_Q1 == dist_Q1.min())[0][0]
print index_NN
Explanation: QUIZ QUESTIONS
Q1. Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
End of explanation
print train_data['price'][index_NN]
Explanation: Q2. What is the predicted value of the query house based on 1-nearest neighbor regression?
End of explanation
def k_nearest_neighbors(k, feature_train, features_query):
distances = compute_distances(feature_train, features_query)
neighbors = np.argsort(distances)[0:k]
return neighbors
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
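For example (illustrative only), np.argsort returns the indices that would sort an array in increasing order, so its first k entries are the indices of the k smallest distances:
print np.argsort(np.array([0.3, 0.1, 0.2])) # prints [1 2 0]
print np.argsort(np.array([0.3, 0.1, 0.2]))[0:2] # prints [1 2], the indices of the 2 smallest values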
End of explanation
QQ_4NN = k_nearest_neighbors(4, features_train, features_test[2])
print QQ_4NN
Explanation: QUIZ QUESTION
Take the query house to be the third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
End of explanation
def predict_output_of_query(k, features_train, output_train, features_query):
kNN = k_nearest_neighbors(k, features_train, features_query)
prediction = np.average(output_train[kNN])
return prediction
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
QQ_pred = predict_output_of_query(4, features_train, train_data['price'].values, features_test[2])
print QQ_pred
Explanation: QUIZ QUESTION
Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
End of explanation
print '1-NN prediction: ', train_data['price'][index_NN] # use the 1-NN index found above instead of a hard-coded row
print '4-NN prediction: ', QQ_pred
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
End of explanation
def predict_output(k, features_train, output_train, features_query):
predictions = np.zeros(features_query.shape[0])
for i in range(len(predictions)):
predictions[i] = predict_output_of_query(k, features_train, output_train, features_query[i])
return predictions
Explanation: Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
End of explanation
QQ_10_preds = predict_output(10, features_train, train_data['price'].values, features_test[0:10])
index_low_pred = np.where(QQ_10_preds == QQ_10_preds.min())[0][0]
print index_low_pred
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
Q1. What is the index of the house in this query set that has the lowest predicted value?
End of explanation
print QQ_10_preds[index_low_pred]
Explanation: Q2. What is the predicted value of this house?
End of explanation
kvals = range(1, 16)
rss_all = np.zeros(len(kvals))
for i in range(len(kvals)):
pred_vals = predict_output(kvals[i], features_train, train_data['price'].values, features_valid)
rss_all[i] = sum( (pred_vals- validation_data['price'].values)**2 )
index_min_rss = np.where(rss_all == rss_all.min())[0][0]
print 'Value of k which produces the lowest RSS on VALIDATION set: ', kvals[index_min_rss]
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
Computes the RSS for these predictions on the VALIDATION set
Stores the RSS computed above in rss_all
Report which k produced the lowest RSS on VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
End of explanation
plt.figure(figsize=(8,6))
plt.plot(kvals, rss_all,'bo-')
plt.xlabel('k-nearest neighbors used', fontsize=16)
plt.ylabel('Residual Sum of Squares', fontsize=16)
plt.title('k vs. RSS on Validation Dataset', fontsize=18)
plt.show()
Explanation: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation
k_best = kvals[index_min_rss] # k with the lowest RSS on the validation set (k=8 here)
pred_vals_test = predict_output(k_best, features_train, train_data['price'].values, features_test)
rss_test = sum( (pred_vals_test- test_data['price'].values)**2 )
print '%.2e' % rss_test
Explanation: QUIZ QUESTION
What is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set.
End of explanation |
11,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of the measured $T_{ext}$ with the weather-forecast temperature
Step1: Comparison with another GPS position
Step2: Data from ROMMA
http
Step3: Which one is correct?
A priori, the Romma data looks like the correct one.
coords_grenoble = (45.1973288, 5.7139923) #(45.1973288, 5.7103223)
startday, lastday = pd.to_datetime('22/06/2017', format='%d/%m/%Y'), pd.to_datetime('now')
# download the data:
data = wf.buildmultidayDF(startday, lastday, coords_grenoble )
import emoncmsfeed as getfeeds
dataframefreq = '10min'
feeds = { 'T_ext':2 } # 'T_int':3 ,
df = getfeeds.builddataframe( feeds, dataframefreq ) # startdate=pd.to_datetime('22/06/2017')
df['Tmeteo'] = data['temperature']
df = df.interpolate()
df.plot( figsize=(14, 5) ); # plt.ylim([10, 30])
Explanation: Comparison of the measured $T_{ext}$ with the weather-forecast temperature
End of explanation
coords_bis = (45.1673058,5.7514976)
# download the data:
data_bis = wf.buildmultidayDF(startday, lastday, coords_bis )
data_coors = pd.concat( (data['temperature'], data_bis['temperature']) , axis=1 )
data_coors.plot(figsize=(14, 5))
Explanation: Comparison with another GPS position
End of explanation
import json
with open('data/romma_temp.json') as data_file:
data_romma = json.load(data_file)
dateindex = pd.to_datetime(data_romma[0], unit='ms')
df_romma = pd.DataFrame( {'T_romma':data_romma[1]}, index=dateindex )#, parse_dates=True )
# Zoom
zoom_start = pd.to_datetime( '22/06/2017' )
mask = (df_romma.index > zoom_start)
df['T_romma'] = df_romma.loc[mask]
df = df.interpolate()
df.plot( figsize=(14, 5) ); plt.ylim([10, 40])
Explanation: Data from ROMMA
http://romma.fr/station_24.php?id=4&tempe=1
http://romma.fr/frame_station24.php?&id_station=4&tempe=1&humi=&pluie=&vent=&pressure=&rayonnement=
(the JSON time series was saved from this page via the browser's JavaScript console, using copy())
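For reference, the loader above expects the JSON to hold two parallel lists -- millisecond timestamps in data_romma[0] and temperatures in data_romma[1]. A minimal hand-made example of that structure (values are illustrative only):
data_romma_example = [[1498082400000, 1498083000000], [21.4, 21.1]] # [timestamps in ms], [temperatures]
pd.DataFrame({'T_romma': data_romma_example[1]}, index=pd.to_datetime(data_romma_example[0], unit='ms'))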
End of explanation
delta = df['T_romma'] - df['T_ext']
delta.plot() # plot the difference between the Romma and measured series
Explanation: Which one is correct?
A priori, the Romma data looks like the correct one.
End of explanation |
11,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
PPplot
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: gslib probplot with bokeh | Python Code:
#general imports
import pygslib
Explanation: PyGSLIB
PPplot
End of explanation
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
true= pygslib.gslib.read_gslib_file('../datasets/true.dat')
true['Declustering Weight'] = 1
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
parameters_probplt = {
# gslib parameters for histogram calculation
'iwt' : 0, # input boolean (Optional: set True). Use weight variable?
'va' : mydata['Primary'], # input rank-1 array('d') with bounds (nd). Variable
'wt' : mydata['Declustering Weight'], # input rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight.
# visual parameters for figure (if a new figure is created)
'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
'title' : 'Prob plot', # string (Optional, "Histogram"). Figure title
'xlabel' : 'Primary', # string (Optional, default "Z"). X axis label
'ylabel' : 'P[Z<c]', # string (Optional, default "f(%)"). Y axis label
'xlog' : 1, # boolean (Optional, default True). If true plot X axis in log scale.
'ylog' : 1, # boolean (Optional, default True). If true plot Y axis in log scale.
# visual parameter for the probplt
'style' : 'cross', # string with valid bokeh chart type
'color' : 'blue', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Non declustered', # string (Optional, default "NA").
'alpha' : 1, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
'lwidth': 0, # float (Optional, default 1). Line width
# legend
'legendloc': 'bottom_right'} # string (Optional, default 'top_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left or center
parameters_probplt_dcl = parameters_probplt.copy()
parameters_probplt_dcl['iwt']=1
parameters_probplt_dcl['legend']='Declustered'
parameters_probplt_dcl['color'] = 'red'
parameters_probplt_true = parameters_probplt.copy()
parameters_probplt_true['va'] = true['Primary']
parameters_probplt_true['wt'] = true['Declustering Weight']
parameters_probplt_true['iwt']=0
parameters_probplt_true['legend']='True'
parameters_probplt_true['color'] = 'black'
parameters_probplt_true['style'] = 'line'
parameters_probplt_true['lwidth'] = 1
results, fig = pygslib.plothtml.probplt(parameters_probplt)
# add declustered to the plot
parameters_probplt_dcl['figure']= fig
results, fig = pygslib.plothtml.probplt(parameters_probplt_dcl)
# add true CDF to the plot
parameters_probplt_true['figure']=parameters_probplt_dcl['figure']
results, fig = pygslib.plothtml.probplt(parameters_probplt_true)
# show the plot
pygslib.plothtml.show(fig)
Explanation: gslib probplot with bokeh
End of explanation |
11,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ARC2 download example
In this demo we show how to download ARC2 data
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
Step2: First, we need to define the dataset name and temporal range. Note that different datasets cover different periods; ARC2 starts in 1983, so we will download data from 1983 onwards.
Step3: Then we define the spatial range. In this case we cover all of Africa. Keep in mind that it is a huge area, and downloading 35 years of data for the whole continent can take several hours.
Step4: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files | Python Code:
%matplotlib notebook
import dh_py_access.lib.datahub as datahub
import dh_py_access.package_api as package_api
Explanation: ARC2 download example
In this demo we show how to download ARC2 data
End of explanation
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
End of explanation
dh=datahub.datahub(server,version,API_key)
dataset1='noaa_arc2_africa_01'
variable_name1 = 'pr'
time_start = '1983-01-01T00:00:00'
time_end = '2018-01-01T00:00:00'
Explanation: First, we need to define the dataset name and temporal range. Note that different datasets cover different periods; ARC2 starts in 1983, so we will download data from 1983 onwards.
End of explanation
area_name = 'Africa'
latitude_north = 42.24; longitude_west = -24.64
latitude_south = -45.76; longitude_east = 60.28
Explanation: Then we define the spatial range. In this case we cover all of Africa. Keep in mind that it is a huge area, and downloading 35 years of data for the whole continent can take several hours.
End of explanation
package_arc2_africa_01 = package_api.package_api(dh,dataset1,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name)
package_arc2_africa_01.make_package()
package_arc2_africa_01.download_package()
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
End of explanation |
11,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classify handwritten digits with Keras
Data from
Step1: <a id="01">1. Download the MNIST dataset from Internet </a>
I've made the dataset into a zipped tar file. You'll have to download it now.
Step2: 10 folders of images will be extracted from the downloaded tar file.
<a id="02">2. Preprocessing the dataset</a>
Step3: How many digit classes & how many figures belong to each of the classes?
Step4: Split the image paths into train($70\%$), val($15\%$), test($15\%$)
Step5: Load images into RAM
Step6: Remark
Step7: <a id="03">3. Softmax Regression</a>
Step8: Onehot-encoding the labels
Step9: Construct the model
Step10: More details about the constructed model
Step11: Train the model
Step12: See how the accuracy climbs during training
Step13: Now, you'll probably want to evaluate or save the trained model.
Step14: Save model architecture & weights
Step15: Load the saved model architecture & weights
Step16: Output the classification report (see if the trained model works well on the test data)
Step17: <a id="04">4. A small Convolutional Neural Network</a>
Reshape the tensors (this step is necessary, because the CNN model wants the input tensor to be 4D)
Step18: Create the model
Step19: Train the model
Step20: See how the accuracy climbs during training
Step21: Output the classification report (see if the trained model works well on the test data) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
import pandas as pd
import sklearn
import os
import requests
from tqdm._tqdm_notebook import tqdm_notebook
import tarfile
Explanation: Classify handwritten digits with Keras
Data from: the MNIST dataset
Download the MNIST dataset from Internet
Preprocessing the dataset
Softmax Regression
A small Convolutional Neural Network
End of explanation
def download_file(url,file):
# Streaming, so we can iterate over the response.
r = requests.get(url, stream=True)
# Total size in bytes.
total_size = int(r.headers.get('content-length', 0));
block_size = 1024
wrote = 0
with open(file, 'wb') as f:
for data in tqdm_notebook(r.iter_content(block_size), total=np.ceil(total_size//block_size) , unit='KB', unit_scale=True):
wrote = wrote + len(data)
f.write(data)
if total_size != 0 and wrote != total_size:
print("ERROR, something went wrong")
url = "https://github.com/chi-hung/PythonTutorial/raw/master/datasets/mnist.tar.gz"
file = "mnist.tar.gz"
print('Retrieving the MNIST dataset...')
download_file(url,file)
print('Extracting the MNIST dataset...')
tar = tarfile.open(file)
tar.extractall()
tar.close()
print('Completed fetching the MNIST dataset.')
Explanation: <a id="01">1. Download the MNIST dataset from Internet </a>
I've made the dataset into a zipped tar file. You'll have to download it now.
End of explanation
def filePathsGen(rootPath):
paths=[]
dirs=[]
for dirPath,dirNames,fileNames in os.walk(rootPath):
for fileName in fileNames:
fullPath=os.path.join(dirPath,fileName)
paths.append((int(dirPath[len(rootPath) ]),fullPath))
dirs.append(dirNames)
return dirs,paths
dirs,paths=filePathsGen('mnist/') # load the image paths
dfPath=pd.DataFrame(paths,columns=['class','path']) # save image paths as a Pandas DataFrame
dfPath.head(5) # see the first 5 paths of the DataFrame
Explanation: 10 folders of images will be extracted from the downloaded tar file.
<a id="02">2. Preprocessing the dataset</a>
End of explanation
dfCountPerClass=dfPath.groupby('class').count()
dfCountPerClass.rename(columns={'path':'amount of figures'},inplace=True)
dfCountPerClass.plot(kind='bar',rot=0)
Explanation: How many digit classes & how many figures belong to each of the classes?
End of explanation
train=dfPath.sample(frac=0.7) # sample 70% data to be the train dataset
test=dfPath.drop(train.index) # the rest 30% are now the test dataset
# take 50% of the test dataset as the validation dataset
val=test.sample(frac=1/2)
test=test.drop(val.index)
# let's check the length of the train, val and test dataset.
print('number of all figures = {:10}.'.format(len(dfPath)))
print('number of train figures= {:9}.'.format(len(train)))
print('number of val figures= {:10}.'.format(len(val)))
print('number of test figures= {:9}.'.format(len(test)))
# let's take a look: plotting 3 figures from the train dataset
for j in range(3):
img=plt.imread(train['path'].iloc[j])
plt.imshow(img,cmap="gray")
plt.axis("off")
plt.show()
Explanation: Split the image paths into train($70\%$), val($15\%$), test($15\%$)
End of explanation
def dataLoad(dfPath):
paths=dfPath['path'].values
x=np.zeros((len(paths),28,28),dtype=np.float32 )
for j in range(len(paths)):
x[j,:,:]=plt.imread(paths[j])/255
y=dfPath['class'].values
return x,y
train_x,train_y=dataLoad(train)
val_x,val_y=dataLoad(val)
test_x,test_y=dataLoad(test)
Explanation: Load images into RAM
End of explanation
print("tensor shapes:\n")
print('train:',train_x.shape,train_y.shape)
print('val :',val_x.shape,val_y.shape)
print('test :',test_x.shape,test_y.shape)
Explanation: Remark: loading all images to RAM might take a while.
End of explanation
from keras.models import Sequential
from keras.layers import Dense,Flatten
from keras.optimizers import SGD
Explanation: <a id="03">3. Softmax Regression</a>
End of explanation
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
train_y_onehot = np.float32( enc.fit_transform(train_y.reshape(-1,1)) \
.toarray() )
val_y_onehot = np.float32( enc.fit_transform(val_y.reshape(-1,1)) \
.toarray() )
test_y_onehot = np.float32( enc.fit_transform(test_y.reshape(-1,1)) \
.toarray() )
Explanation: Onehot-encoding the labels:
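As a quick illustrative check, each encoded label should now be a length-10 vector with a single 1 at the position of the digit class (the actual values depend on the random split):
print train_y[0] # e.g. 3
print train_y_onehot[0] # e.g. [ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]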
End of explanation
model = Sequential()
model.add(Flatten(input_shape=(28,28)))
model.add(Dense(10, activation='softmax') )
sgd=SGD(lr=0.2, momentum=0.0, decay=0.0)
model.compile(optimizer=sgd, # use the SGD instance defined above (lr=0.2) instead of the default 'sgd' string
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: Construct the model:
End of explanation
model.summary()
Explanation: More details about the constructed model:
End of explanation
hist=model.fit(train_x, train_y_onehot,
epochs=20, batch_size=128,
validation_data=(val_x,val_y_onehot))
Explanation: Train the model:
End of explanation
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
Explanation: See how the accuracy climbs during training:
End of explanation
# calculate loss & accuracy (evaluated on the test dataset)
score = model.evaluate(test_x, test_y_onehot, batch_size=128)
print("LOSS (evaluated on the test dataset)= {}".format(score[0]))
print("ACCURACY (evaluated on the test dataset)= {}".format(score[1]))
Explanation: Now, you'll probably want to evaluate or save the trained model.
End of explanation
import json
with open('first_try.json', 'w') as jsOut:
json.dump(model.to_json(), jsOut)
model.save_weights('first_try.h5')
Explanation: Save model architecture & weights:
End of explanation
from keras.models import model_from_json
with open('first_try.json', 'r') as jsIn:
model_architecture=json.load(jsIn)
model_new=model_from_json(model_architecture)
model_new.load_weights('first_try.h5')
model_new.summary()
Explanation: Load the saved model architecture & weights:
End of explanation
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation
train_x = np.expand_dims(train_x,axis=-1)
val_x = np.expand_dims(val_x,axis=-1)
test_x = np.expand_dims(test_x,axis=-1)
Explanation: <a id="04">4. A small Convolutional Neural Network</a>
Reshape the tensors (this step is necessary, because the CNN model wants the input tensor to be 4D):
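A quick check of the resulting shapes (the exact number of rows depends on the random split):
print train_x.shape, val_x.shape, test_x.shape # each should end with 28, 28, 1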
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten,Conv2D, MaxPooling2D
from keras.layers import Activation
from keras.optimizers import SGD
in_shape=(28,28,1)
# ========== BEGIN TO CREATE THE MODEL ==========
model = Sequential()
# feature extraction (2 conv layers)
model.add(Conv2D(32, (3,3),
activation='relu',
input_shape=in_shape))
model.add(Conv2D(64, (3,3), activation='relu')
)
model.add(MaxPooling2D(pool_size=(2, 2))
)
model.add(Dropout(0.5))
model.add(Flatten())
# classification (2 dense layers)
model.add(Dense(128, activation='relu')
)
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# ========== COMPLETED THE MODEL CREATION========
# Compile the model before training.
model.compile(loss='categorical_crossentropy',
optimizer=SGD(lr=0.01,momentum=0.1),
metrics=['accuracy'])
Explanation: Create the model:
End of explanation
%%time
hist=model.fit(train_x, train_y_onehot,
epochs=20,
batch_size=32,
validation_data=(val_x,val_y_onehot),
)
Explanation: Train the model:
End of explanation
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
Explanation: See how the accuracy climbs during training:
End of explanation
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation |
11,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing CivisML 2.0
Note
Step1: Downloading data
Before we build any models, we need a dataset to play with. We're going to use the most recent College Scorecard data from the Department of Education.
This dataset is collected to study the performance of US higher education institutions. You can learn more about it in this technical paper, and you can find details on the dataset features in this data dictionary.
Step2: Data Munging
Before running CivisML, we need to do some basic data munging, such as removing missing data from the dependent variable, and splitting the data into training and test sets.
Throughout this notebook, we'll be trying to predict whether a college is public (labelled as 1), private non-profit (2), or private for-profit (3). The column name for this dependent variable is "CONTROL".
Step3: Some of these columns are duplicates, or contain information we don't want to use in our model (like college names and URLs). CivisML can take a list of columns to exclude and do this part of the data munging for us, so let's make that list here.
Step4: Basic CivisML Usage
When building a supervised model, there are a few basic things you'll probably want to do
Step5: Next, we want to train and validate the model by calling .train on the ModelPipeline object. CivisML uses 4-fold cross-validation on the training set. You can train on local data or query data from Redshift. In this case, we have our data locally, so we just pass the data frame.
Step6: This returns a ModelFuture object, which is non-blocking-- this means that you can keep doing things in your notebook while the model runs on Civis Platform in the background. If you want to make a blocking call (one that doesn't complete until your model is finished), you can use .result().
Step7: Parallel Model Tuning and Validation
We didn't actually specify the number of jobs in the .train() call above, but behind the scenes, the model was actually training in parallel! In CivisML 2.0, model tuning and validation will automatically be distributed across your computing cluster, without ever using more than 90% of the cluster resources. This means that you can build models faster and try more model configurations, leaving you more time to think critically about your data. If you decide you want more control over the resources you're using, you can set the n_jobs parameter to a specific number of jobs, and CivisML won't run more than that at once.
We can see how well the model did by looking at the validation metrics.
Step8: Impressive!
This is the basic CivisML workflow
Step9: This creates a list of columns to categorically expand, identified using the data dictionary available here.
Step10: Model Stacking
Now it's time to fit a model. Let's take a look at model stacking, which is new to CivisML 2.0.
Stacking lets you combine several algorithms into a single model which performs as well or better than the component algorithms. We use stacking at Civis to build more accurate models, which saves our data scientists time comparing algorithm performance. In CivisML, we have two stacking workflows
Step11: Let's plot diagnostics for each of the models. In the Civis Platform, these plots will automatically be built and displayed in the "Models" tab. But for the sake of example, let's also explicitly plot ROC curves and AUCs in the notebook.
There are three classes (public, non-profit private, and for-profit private), so we'll have three curves per model. It looks like all of the models are doing well, with sparse logistic performing slightly worse than the other three.
Step12: All of the models perform quite well, so it's difficult to compare based on the ROC curves. Let's plot the AUCs themselves.
Step13: Here we can see that all models but sparse logistic perform quite well, but stacking appears to perform marginally better than the others. For more challenging modeling tasks, the difference between stacking and other models will often be more pronounced.
Now our models are trained, and we know that they all perform very well. Because the AUCs are all so high, we would expect the models to make similar predictions. Let's see if that's true.
Step14: Looks like the probabilities here aren't exactly the same, but are directionally identical-- so, if you chose the class that had the highest probability for each row, you'd end up with the same predictions for all models. This makes sense, because all of the models performed well.
Model Portability
What if you want to score a model outside of Civis Platform? Maybe you want to deploy this model in an app for education policy makers. In CivisML 2.0, you can easily get the trained model pipeline out of the ModelFuture object.
Step15: This Pipeline contains all of the steps CivisML used to train the model, from ETL to the model itself. We can print each step individually to get a better sense of what is going on.
Step16: Now we can see that there are three steps
Step17: Hyperparameter optimization with Hyperband and Neural Networks
Multilayer Perceptrons (MLPs) are simple neural networks, which are now built in to CivisML. The MLP estimators in CivisML come from muffnn, another open source package written and maintained by Civis Analytics using tensorflow. Let's fit one using hyperband.
Tuning hyperparameters is a critical chore for getting an algorithm to perform at its best, but it can take a long time to run. Using CivisML 2.0, we can use hyperband as an alternative to conventional grid search for hyperparameter optimization-- it runs about twice as fast. While grid search runs every parameter combination for the full time, hyperband runs many combinations for a short time, then filters out the best, runs them for longer, filters again, and so on. This means that you can try more combinations in less time, so we recommend using it whenever possible. The hyperband estimator is open source and available on GitHub. You can learn about the details in the original paper, Li et al. (2016).
Right now, hyperband is implemented in CivisML named preset models for the following algorithms
Step18: Let's dig into the hyperband model a little bit. Like the stacking model, the model below starts with ETL and null imputation, but contains some additional steps
Step19: HyperbandSearchCV essentially works like GridSearchCV. If you want to get the best estimator without all of the extra CV information, you can access it using the best_estimator_ attribute.
Step20: To see how well the best model performed, you can look at the best_score_.
Step21: And to look at information about the different hyperparameter configurations that were tried, you can look at the cv_results_.
Step22: Just like any other model in CivisML, we can use hyperband-tuned models to make predictions using .predict() on the ModelPipeline. | Python Code:
# first, let's import the packages we need
import requests
from io import StringIO
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import model_selection
# import the Civis Python API client
import civis
# ModelPipeline is the class used to build CivisML models
from civis.ml import ModelPipeline
# Suppress warnings for demo purposes. This is not recommended as a general practice.
import warnings
warnings.filterwarnings('ignore')
Explanation: Introducing CivisML 2.0
Note: We are continually releasing changes to CivisML, and this notebook is useful for any versions 2.0.0 and above.
Data scientists are on the front lines of their organization’s most important customer growth and engagement questions, and they need to guide action as quickly as possible by getting models into production. CivisML is a machine learning service that makes it possible for data scientists to massively increase the speed with which they can get great models into production. And because it’s built on open-source packages, CivisML remains transparent and data scientists remain in control.
In this notebook, we’ll go over the new features introduced in CivisML 2.0. For a walkthrough of CivisML’s fundamentals, check out this introduction to the mechanics of CivisML: https://github.com/civisanalytics/civis-python/blob/master/examples/CivisML_parallel_training.ipynb
CivisML 2.0 is full of new features to make modeling faster, more accurate, and more portable. This notebook will cover the following topics:
- CivisML overview
- Parallel training and validation
- Use of the new ETL transformer, DataFrameETL, for easy, customizable ETL
- Stacked models: combine models to get one bigger, better model
- Model portability: get trained models out of CivisML
- Multilayer perceptron models: neural networks built in to CivisML
- Hyperband: a smarter alternative to grid search
CivisML can be used to build models that answer all kinds of business questions, such as what movie to recommend to a customer, or which customers are most likely to upgrade their accounts. For the sake of example, this notebook uses a publicly available dataset on US colleges, and focuses on predicting the type of college (public non-profit, private non-profit, or private for-profit).
End of explanation
# Downloading data; this may take a minute
# Two kind of nulls
df = pd.read_csv("https://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv", sep=",", na_values=['NULL', 'PrivacySuppressed'], low_memory=False)
# How many rows and columns?
df.shape
# What are some of the column names?
df.columns
Explanation: Downloading data
Before we build any models, we need a dataset to play with. We're going to use the most recent College Scorecard data from the Department of Education.
This dataset is collected to study the performance of US higher education institutions. You can learn more about it in this technical paper, and you can find details on the dataset features in this data dictionary.
End of explanation
# Make sure to remove any rows with nulls in the dependent variable
df = df[np.isfinite(df['CONTROL'])]
# split into training and test sets
train_data, test_data = model_selection.train_test_split(df, test_size=0.2)
# print a few sample columns
train_data.head()
Explanation: Data Munging
Before running CivisML, we need to do some basic data munging, such as removing missing data from the dependent variable, and splitting the data into training and test sets.
Throughout this notebook, we'll be trying to predict whether a college is public (labelled as 1), private non-profit (2), or private for-profit (3). The column name for this dependent variable is "CONTROL".
End of explanation
to_exclude = ['ADM_RATE_ALL', 'OPEID', 'OPEID6', 'ZIP', 'INSTNM',
'INSTURL', 'NPCURL', 'ACCREDAGENCY', 'T4APPROVALDATE',
'STABBR', 'ALIAS', 'REPAY_DT_MDN', 'SEPAR_DT_MDN']
Explanation: Some of these columns are duplicates, or contain information we don't want to use in our model (like college names and URLs). CivisML can take a list of columns to exclude and do this part of the data munging for us, so let's make that list here.
End of explanation
# Use a push-button workflow to fit a model with reasonable default parameters
sl_model = ModelPipeline(model='sparse_logistic',
model_name='Example sparse logistic',
primary_key='UNITID',
dependent_variable=['CONTROL'],
excluded_columns=to_exclude)
Explanation: Basic CivisML Usage
When building a supervised model, there are a few basic things you'll probably want to do:
Transform the data into a modelling-friendly format
Train the model on some labelled data
Validate the model
Use the model to make predictions about unlabelled data
CivisML does all of this in three lines of code. Let's fit a basic sparse logistic model to see how.
The first thing we need to do is build a ModelPipeline object. This stores all of the basic configuration options for the model. We'll tell it things like the type of model, dependent variable, and columns we want to exclude. CivisML handles basic ETL for you, including categorical expansion of any string-type columns.
End of explanation
sl_train = sl_model.train(train_data)
Explanation: Next, we want to train and validate the model by calling .train on the ModelPipeline object. CivisML uses 4-fold cross-validation on the training set. You can train on local data or query data from Redshift. In this case, we have our data locally, so we just pass the data frame.
End of explanation
# non-blocking
sl_train
# blocking
sl_train.result()
Explanation: This returns a ModelFuture object, which is non-blocking-- this means that you can keep doing things in your notebook while the model runs on Civis Platform in the background. If you want to make a blocking call (one that doesn't complete until your model is finished), you can use .result().
End of explanation
# loop through the metric names and print to screen
metrics = [print(key) for key in sl_train.metrics.keys()]
# ROC AUC for each of the three categories in our dependent variable
sl_train.metrics['roc_auc']
Explanation: Parallel Model Tuning and Validation
We didn't actually specify the number of jobs in the .train() call above, but behind the scenes, the model was actually training in parallel! In CivisML 2.0, model tuning and validation will automatically be distributed across your computing cluster, without ever using more than 90% of the cluster resources. This means that you can build models faster and try more model configurations, leaving you more time to think critically about your data. If you decide you want more control over the resources you're using, you can set the n_jobs parameter to a specific number of jobs, and CivisML won't run more than that at once.
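For example, a minimal sketch of capping CivisML at four concurrent jobs (this assumes your version of the Civis Python client exposes n_jobs on .train(); check your client documentation):
capped_train = sl_model.train(train_data, n_jobs=4) # illustrative only: at most 4 jobs run at once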
We can see how well the model did by looking at the validation metrics.
End of explanation
# The ETL transformer used in CivisML can be found in the civismlext module
from civismlext.preprocessing import DataFrameETL
Explanation: Impressive!
This is the basic CivisML workflow: create the model, train, and make predictions. There are other configuration options for more complex use cases; for example, you can create a custom estimator, pass custom dependencies, manage the computing resources for larger models, and more. For more information, see the Machine Learning section of the Python API client docs.
Now that we can build a simple model, let's see what's new to CivisML 2.0!
Custom ETL
CivisML can do several data transformations to prepare your data for modeling. This makes data preprocessing easier, and makes it part of your model pipeline rather than an additional script you have to run. CivisML's built-in ETL includes:
- Categorical expansion: expand a single column of strings or categories into separate binary variables.
- Dropping columns: remove columns not needed in a model, such as an ID number.
- Removing null columns: remove columns that contain no data.
With CivisML 2.0, you can now recreate and customize this ETL using DataFrameETL, our open source ETL transformer, available on GitHub.
By default, CivisML will use DataFrameETL to automatically detect non-numeric columns for categorical expansion. Our example college dataset has a lot of integer columns which are actually categorical, but we can make sure they're handled correctly by passing CivisML a custom ETL transformer.
End of explanation
# column indices for columns to expand
to_expand = list(df.columns[:21]) + list(df.columns[23:36]) + list(df.columns[99:290]) + \
list(df.columns[[1738, 1773, 1776]])
# create ETL estimator to pass to CivisML
etl = DataFrameETL(cols_to_drop=to_exclude,
cols_to_expand=to_expand, # we made this column list during data munging
check_null_cols='warn')
Explanation: This creates a list of columns to categorically expand, identified using the data dictionary available here.
End of explanation
workflows = ['stacking_classifier',
'sparse_logistic',
'random_forest_classifier',
'gradient_boosting_classifier']
models = []
# create a model object for each of the four model types
for wf in workflows:
model = ModelPipeline(model=wf,
model_name=wf + ' v2 example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
etl=etl # use the custom ETL we created
)
models.append(model)
# iterate over the model objects and run a CivisML training job for each
trains = []
for model in models:
train = model.train(train_data)
trains.append(train)
Explanation: Model Stacking
Now it's time to fit a model. Let's take a look at model stacking, which is new to CivisML 2.0.
Stacking lets you combine several algorithms into a single model which performs as well or better than the component algorithms. We use stacking at Civis to build more accurate models, which saves our data scientists time comparing algorithm performance. In CivisML, we have two stacking workflows: stacking_classifier (sparse logistic, GBT, and random forest, with a logistic regression model as a "meta-estimator" to combine predictions from the other models); and stacking_regressor (sparse linear, GBT, and random forest, with a non-negative linear regression as the meta-estimator). Use them the same way you use sparse_logistic or other pre-defined models. If you want to learn more about how stacking works under the hood, take a look at this talk by the person at Civis who wrote it!
Let's fit both a stacking classifier and some un-stacked models, so we can compare the performance.
End of explanation
%matplotlib inline
# Let's look at how the model performed during validation
def extract_roc(fut_job, model_name):
'''Build a data frame of ROC curve data from the completed training job `fut_job`
with model name `model_name`. Note that this function will only work for a classification
model where the dependent variable has more than two classes.'''
aucs = fut_job.metrics['roc_auc']
roc_curve = fut_job.metrics['roc_curve_by_class']
n_classes = len(roc_curve)
fpr = []
tpr = []
class_num = []
auc = []
for i, curve in enumerate(roc_curve):
fpr.extend(curve['fpr'])
tpr.extend(curve['tpr'])
class_num.extend([i] * len(curve['fpr']))
auc.extend([aucs[i]] * len(curve['fpr']))
model_vec = [model_name] * len(fpr)
df = pd.DataFrame({
'model': model_vec,
'class': class_num,
'fpr': fpr,
'tpr': tpr,
'auc': auc
})
return df
# extract ROC curve information for all of the trained models
workflows_abbrev = ['stacking', 'logistic', 'RF', 'GBT']
roc_dfs = [extract_roc(train, w) for train, w in zip(trains, workflows_abbrev)]
roc_df = pd.concat(roc_dfs)
# create faceted ROC curve plots. Each row of plots is a different model type, and each
# column of plots is a different class of the dependent variable.
g = sns.FacetGrid(roc_df, col="class", row="model")
g = g.map(plt.plot, "fpr", "tpr", color='blue')
Explanation: Let's plot diagnostics for each of the models. In the Civis Platform, these plots will automatically be built and displayed in the "Models" tab. But for the sake of example, let's also explicitly plot ROC curves and AUCs in the notebook.
There are three classes (public, non-profit private, and for-profit private), so we'll have three curves per model. It looks like all of the models are doing well, with sparse logistic performing slightly worse than the other three.
End of explanation
# Plot AUCs for each model
%matplotlib inline
auc_df = roc_df[['model', 'class', 'auc']].drop_duplicates()  # avoid pandas' chained-assignment warning from an inplace drop on a slice
plt.show(sns.swarmplot(x=auc_df['model'], y=auc_df['auc']))
Explanation: All of the models perform quite well, so it's difficult to compare based on the ROC curves. Let's plot the AUCs themselves.
End of explanation
# kick off a prediction job for each of the four models
preds = [model.predict(test_data) for model in models]
# This will run on Civis Platform cloud resources
[pred.result() for pred in preds]
# print the top few rows for each of the models
pred_df = [pred.table.head() for pred in preds]
import pprint
pprint.pprint(pred_df)
Explanation: Here we can see that all models but sparse logistic perform quite well, but stacking appears to perform marginally better than the others. For more challenging modeling tasks, the difference between stacking and other models will often be more pronounced.
Now our models are trained, and we know that they all perform very well. Because the AUCs are all so high, we would expect the models to make similar predictions. Let's see if that's true.
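One simple way to compare them, sketched here with made-up probability matrices rather than the real CivisML prediction tables, is to check whether the highest-probability class matches across models:
import numpy as np
probs_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])  # one model's class probabilities
probs_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])  # another model's, slightly different
# the hard class calls agree even though the probabilities differ
np.array_equal(probs_a.argmax(axis=1), probs_b.argmax(axis=1))  # True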
End of explanation
train_stack = trains[0] # Get the ModelFuture for the stacking model
trained_model = train_stack.estimator
Explanation: Looks like the probabilities here aren't exactly the same, but are directionally identical-- so, if you chose the class that had the highest probability for each row, you'd end up with the same predictions for all models. This makes sense, because all of the models performed well.
Model Portability
What if you want to score a model outside of Civis Platform? Maybe you want to deploy this model in an app for education policy makers. In CivisML 2.0, you can easily get the trained model pipeline out of the ModelFuture object.
End of explanation
# print each of the estimators in the pipeline, separated by newlines for readability
for step in train_stack.estimator.steps:
print(step[1])
print('\n')
Explanation: This Pipeline contains all of the steps CivisML used to train the model, from ETL to the model itself. We can print each step individually to get a better sense of what is going on.
End of explanation
# drop the dependent variable so we don't use it to predict itself!
predictions = trained_model.predict(test_data.drop(labels=['CONTROL'], axis=1))
# print out the class predictions. These will be integers representing the predicted
# class rather than probabilities.
predictions
Explanation: Now we can see that there are three steps: the DataFrameETL object we passed in, a null imputation step, and the stacking estimator itself.
We can use this outside of CivisML simply by calling .predict on the estimator. This will make predictions using the model in the notebook without using CivisML.
End of explanation
# build a model specifying the MLP model with hyperband
model_mlp = ModelPipeline(model='multilayer_perceptron_classifier',
model_name='MLP example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
cross_validation_parameters='hyperband',
etl=etl
)
train_mlp = model_mlp.train(train_data,
n_jobs=10) # parallel hyperparameter optimization and validation!
# block until the job finishes
train_mlp.result()
Explanation: Hyperparameter optimization with Hyperband and Neural Networks
Multilayer Perceptrons (MLPs) are simple neural networks, which are now built in to CivisML. The MLP estimators in CivisML come from muffnn, another open source package written and maintained by Civis Analytics using tensorflow. Let's fit one using hyperband.
Tuning hyperparameters is a critical chore for getting an algorithm to perform at its best, but it can take a long time to run. Using CivisML 2.0, we can use hyperband as an alternative to conventional grid search for hyperparameter optimization-- it runs about twice as fast. While grid search runs every parameter combination for the full time, hyperband runs many combinations for a short time, then filters out the best, runs them for longer, filters again, and so on. This means that you can try more combinations in less time, so we recommend using it whenever possible. The hyperband estimator is open source and available on GitHub. You can learn about the details in the original paper, Li et al. (2016).
Right now, hyperband is implemented in CivisML named preset models for the following algorithms:
- Multilayer Perceptrons (MLPs)
- Stacking
- Random forests
- GBTs
- ExtraTrees
Unlike grid search, you don't need to specify values to search over. If you pass cross_validation_parameters='hyperband' to ModelPipeline, hyperparameter combinations will be randomly drawn from preset distributions.
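To make the filtering idea concrete, here is a very rough sketch of the successive-halving loop at the heart of hyperband (illustrative only; the real estimator also varies budgets across several brackets, and evaluate here is a stand-in scoring function):
def successive_halving(configs, evaluate, budget=1, keep=0.5, rounds=3):
    # evaluate(config, budget) is assumed to return a validation score for a
    # hyperparameter configuration trained with the given budget
    for _ in range(rounds):
        scored = sorted(configs, key=lambda cfg: evaluate(cfg, budget), reverse=True)
        n_keep = max(1, int(len(scored) * keep))
        configs = scored[:n_keep]  # keep only the best-scoring configurations
        budget *= 2                # survivors get a bigger training budget next round
    return configs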
End of explanation
for step in train_mlp.estimator.steps:
print(step[1])
print('\n')
Explanation: Let's dig into the hyperband model a little bit. Like the stacking model, the model below starts with ETL and null imputation, but contains some additional steps: a step to scale the predictor variables (which improves neural network performance), and a hyperband searcher containing the MLP.
End of explanation
train_mlp.estimator.steps[3][1].best_estimator_
Explanation: HyperbandSearchCV essentially works like GridSearchCV. If you want to get the best estimator without all of the extra CV information, you can access it using the best_estimator_ attribute.
End of explanation
train_mlp.estimator.steps[3][1].best_score_
Explanation: To see how well the best model performed, you can look at the best_score_.
End of explanation
train_mlp.estimator.steps[3][1].cv_results_
Explanation: And to look at information about the different hyperparameter configurations that were tried, you can look at the cv_results_.
End of explanation
predict_mlp = model_mlp.predict(test_data)
predict_mlp.table.head()
Explanation: Just like any other model in CivisML, we can use hyperband-tuned models to make predictions using .predict() on the ModelPipeline.
End of explanation |
11,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 03 - Supplemental
Using Categorical data in machine learning
Now that we've created some categorical data or other created features, we would like to use them as inputs for our machine learning algorithm. However, we need to tell the computer that the categorical data isn't the same as other numerical data. For example, I could have the following two types of categorical data
Step1: We can turn the date column into a real datetime object and get days since the first day in order to work with a more reasonable set of values.
Step2: Ordered Categorical Values
The 'Rank' column contains ranked categorical values where the ranking matters on a linear scale. So we can create a categorical column for these values right away. We are lucky here that the values are in alphabetical order - pandas can pick out that order and use it for us.
Step3: Unordered Categorical Values
Let's now put the states into a categorical column. Even though Pandas will sort them, there is no real 'rank' for the states
Step4: Modeling with Categorical Data
Let's split the dataset and try modeling - we want to predict the output value. We need the categorical codes as columns to do this, so we'll take care of that part first.
Step5: So we see that this didn't do a very good job to start with. However, that's not surprising as it used the states as a ranked categorical value when they obviously aren't.
Using Unranked categorical values
What we want is called a dummy variable. It will tell the machine learning algorithm to look at whether an entry is one of the states or not. Here's basically how it works. Suppose we have two categories
Step6: We now want to join this back with the original set of features so that we can use it instead of the ranked column of data. Here's one way to do that.
Step7: We now want to select out all 50 columns from the dummy variable. There is a python way to do this easily, since we used the prefix 'S_' for each of those columns | Python Code:
import pandas as pd
import numpy as np
sampledata = pd.read_csv('Class03_supplemental_data.csv')
print(sampledata.dtypes)
sampledata.head()
Explanation: Class 03 - Supplemental
Using Categorical data in machine learning
Now that we've created some categorical data or other created features, we would like to use them as inputs for our machine learning algorithm. However, we need to tell the computer that the categorical data isn't the same as other numerical data. For example, I could have the following two types of categorical data:
Ordered Categorical Data: items like rankings or scales where the size of the output corresponds to some placement along a line. One example is the grade scale where A=4, B=3, C=2, D=1, F=0.
Unordered Categorical Data: Categories like gender, race, state, or color don't have any rational scale to place them on. So assigning red=4, blue=3 doesn't mean red is 'better' than blue.
We want to treat both of these slightly differently. We've got a sample dataset with both types of categorical data in it to work with. Our goal will be to predict the Output value.
End of explanation
sampledata["Date2"] = pd.to_datetime(sampledata["Date"])
firstdate = sampledata['Date2'][0]
sampledata['DaysSinceStart'] = sampledata['Date2'].apply(lambda date: (date - firstdate).total_seconds()/86400.0) # use total_seconds(), not .seconds, so gaps longer than a day are counted correctly
sampledata.dtypes
Explanation: We can turn the date column into a real datetime object and get days since the first day in order to work with a more reasonable set of values.
End of explanation
sampledata['CatRank'] = sampledata['Rank'].astype('category')
print(sampledata["CatRank"].cat.categories)
sampledata["CatRank"][1:10].cat.codes
Explanation: Ordered Categorical Values
The 'Rank' column contains ranked categorical values where the ranking matters on a linear scale. So we can create a categorical column for these values right away. We are lucky here that the values are in alphabetical order - pandas can pick out that order and use it for us.
End of explanation
sampledata['CatState'] = sampledata['State'].astype('category')
print(sampledata["CatState"].cat.categories)
sampledata["CatState"][1:10].cat.codes
Explanation: Unordered Categorical Values
Let's now put the states into a categorical column. Even though Pandas will sort them, there is no real 'rank' for the states
End of explanation
sampledata['RankCode'] = sampledata['CatRank'].cat.codes
sampledata['StateCode'] = sampledata['CatState'].cat.codes
sampledata.columns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
train1, test1 = train_test_split(sampledata, test_size=0.2, random_state=23)
# Step 1: Create linear regression object
regr1 = LinearRegression()
# Step 2: Train the model using the training sets
inputcolumns = ['DaysSinceStart','RankCode','StateCode']
features = train1[inputcolumns].values
labels = train1['Output'].values
regr1.fit(features,labels)
# Step 5: Get the predictions
testinputs = test1[inputcolumns].values
predictions = regr1.predict(testinputs)
actuals = test1['Output'].values
# Step 6: Plot the results
#
# Note the change here in how we plot the test inputs. We can only plot one variable, so we choose the first.
# Also, it no longer makes sense to plot the fit points as lines. They have more than one input, so we only visualize them as points.
#
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(testinputs[:,0], actuals, color='black', label='Actual')
plt.scatter(testinputs[:,0], predictions, color='blue', label='Prediction')
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
# Step 7: Get the RMS value
print("RMS Error: {0:.3f}".format( np.sqrt(np.mean((predictions - actuals) ** 2))))
Explanation: Modeling with Categorical Data
Let's split the dataset and try modeling - we want to predict the output value. We need the categorical codes as columns to do this, so we'll take care of that part first.
End of explanation
dummydf = pd.get_dummies(sampledata['CatState'],prefix='S')
dummydf.head()
Explanation: So we see that this didn't do a very good job to start with. However, that's not surprising as it used the states as a ranked categorical value when they obviously aren't.
Using Unranked categorical values
What we want is called a dummy variable. It will tell the machine learning algorithm to look at whether an entry is one of the states or not. Here's basically how it works. Suppose we have two categories: red and blue. Our categorical column may look like this:
| Row | Color |
|--- |---|
|0 | red |
| 1 | red |
| 2 | blue |
|3 | red |
What we want are two new columns that identify whether the row belongs in one of the categories. We'll use 1 when it belongs and 0 when it doesn't. This is what we get:
| Row | IsRed | IsBlue |
| --- | --- | ---|
| 0 | 1 | 0 |
| 1 | 1 | 0 |
| 2 | 0 | 1 |
| 3 | 1 | 0 |
We now use these new dummy variable columns as the inputs: they are binary and will only have a 1 value where the original row matched up with the category column. Here's what it looks like in pandas.
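The toy table above can be reproduced directly in pandas (hypothetical colors, separate from the state data used in this notebook):
colors = pd.Series(['red', 'red', 'blue', 'red'])
pd.get_dummies(colors, prefix='Is')
# produces Is_blue and Is_red columns of 0s and 1s matching the table above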
End of explanation
sampledata2 = sampledata.join(dummydf)
sampledata2.head()
Explanation: We now want to join this back with the original set of features so that we can use it instead of the ranked column of data. Here's one way to do that.
End of explanation
inputcolumns = ['DaysSinceStart','RankCode'] + [col for col in sampledata2.columns if 'S_' in col]
train2, test2 = train_test_split(sampledata2, test_size=0.2, random_state=23)
# Step 1: Create linear regression object
regr2= LinearRegression()
features = train2[inputcolumns].values
labels = train2['Output'].values
regr2.fit(features,labels)
# Step 5: Get the predictions
testinputs = test2[inputcolumns].values
predictions = regr2.predict(testinputs)
actuals = test2['Output'].values
plt.scatter(testinputs[:,0], actuals, color='black', label='Actual')
plt.scatter(testinputs[:,0], predictions, color='blue', label='Prediction')
plt.legend(loc='upper left', shadow=False, scatterpoints=1)
# Step 7: Get the RMS value
print("RMS Error: {0:.3f}".format( np.sqrt(np.mean((predictions - actuals) ** 2))))
Explanation: We now want to select out all 50 columns from the dummy variable. There is a python way to do this easily, since we used the prefix 'S_' for each of those columns
End of explanation |
11,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The "problem"
It is possible that, when using functions with default parameters, you run into behavior from Python that is unexpected or unintuitive. This is why you should always review your code, know it as well as possible, and be able to respond when things do not work the way you expect.
Let's look at how default parameters behave in functions
Step1: If we call the function once...
Step2: ... everything works as we expect, but what if we try again...
Step3: ... ok? It does not work the way we would expect.
This can also be extended to classes, where it is common to use default parameters
Step4: Investigating our code
Let's take a look at what is happening in our code
Step5: What is going on? D:
Step6: The code that defines the function is evaluated once, and that evaluated value is what is used on every later call. Therefore, modifying the value of a default parameter that is mutable (list, dict, etc.) changes the default value for the next call.
How can we avoid this?
A simple solution is to use None as the default value for default parameters. Another solution is to declare the variables conditionally | Python Code:
def funcion(lista=[]):
lista.append(1)
print("La lista vale: {}".format(lista))
Explanation: The "problem"
It is possible that, when using functions with default parameters, you run into behavior from Python that is unexpected or unintuitive. This is why you should always review your code, know it as well as possible, and be able to respond when things do not work the way you expect.
Let's look at how default parameters behave in functions
End of explanation
funcion()
Explanation: If we call the function once...
End of explanation
funcion()
funcion()
Explanation: ... everything works as we expect, but what if we try again...
End of explanation
class Clase:
def __init__(self, lista=[]):
self.lista = lista
self.lista.append(1)
print("Lista de la clase: {}".format(self.lista))
# Instantiate two objects
A = Clase()
B = Clase()
# Modify the parameter on one of them
A.lista.append(5)
# What??
print(A.lista)
print(B.lista)
Explanation: ... ok? It does not work the way we would expect.
This can also be extended to classes, where it is common to use default parameters:
End of explanation
# Let's instantiate a few objects
A = Clase()
B = Clase()
C = Clase(lista=["GG"]) # We will use this instance as a control
print("\nLos objetos son distintos!")
print("id(A): {} \nid(B): {} \nid(C): {}".format(id(A), id(B), id(C)))
print("\nPero la lista es la misma para A y para B :O")
print("id(A.lista): {} \nid(B.lista): {} \nid(C.lista): {}".format(id(A.lista), id(B.lista), id(C.lista)))
Explanation: Investigating our code
Let's take a look at what is happening in our code:
End of explanation
# In fact, they have attributes...
def funcion(lista=[]):
lista.append(5)
# In the function "funcion"...
print("{}".format(funcion.__defaults__))
# ... if we call it...
funcion()
# now we have...
print("{}".format(funcion.__defaults__))
# If we look at how the "__init__" method of the class Clase ended up...
print("{}".format(Clase.__init__.__defaults__))
Explanation: What is going on? D:
In Python, functions are objects of the callable type; that is, they can be called and they execute an operation.
End of explanation
class Clase:
def __init__(self, lista=None):
# "One-liner" version:
self.lista = lista if lista is not None else list()
# The expanded version:
if lista is not None:
self.lista = lista
else:
self.lista = list()
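# A sketch of the same fix applied to the original function from the start of this notebook:
def funcion(lista=None):
    if lista is None:
        lista = []  # a fresh list is created on every call
    lista.append(1)
    print("La lista vale: {}".format(lista))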
Explanation: The code that defines the function is evaluated once, and that evaluated value is what is used on every later call. Therefore, modifying the value of a default parameter that is mutable (list, dict, etc.) changes the default value for the next call.
How can we avoid this?
A simple solution is to use None as the default value for default parameters. Another solution is to declare the variables conditionally:
End of explanation |
11,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
数据应用学院 Data Scientist Program Hw2
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
Step1: 1. Generate x = a sequence of points, y = sin(x)+a where a is a small random error.
Step2: 2. Draw a scatter plot of x and y.
Step3: 3. Use linear regression model to predict y, with only one feature--x. Please print out the training and validation score of your model and the mathematical formula of your model.
You need to split the data into training and testing data before you build the model. This is the same procedure you need to do in the following questions.
Step4: How should we interpret the negative cv_scores?
4. Draw a plot showing your predicted y, real y, and ground truth--sin(x) of x.
Step5: 5. Try to build a linear model using two features--x and x^2. Please print out the training and validation score and mathematical formula.
Step6: 6. Try to build linear models with features from x to x, x^2, x^3,... x^15, and plot the changes of training score and validation score as the number of features gets larger. According to the result you get, what's the best number of features here?
In this question, you need to build 15 models, with features of [x],[x,x^2],[x,x^2,x^3],...,[x,x^2,...,x^15]. For each model you need to calculate the training score and validation score then make the plot as we required. | Python Code:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
# import the necessary package at the very beginning
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
Explanation: 数据应用学院 Data Scientist Program Hw2
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
## Type Your Answer Below ##
np.random.seed(1)
X = np.random.random([100, 1]).ravel()*10 # generate a set of 100 random float in range [0, 10]
X[:5]
random_error = np.random.randn(100) # generate a set of 100 random errors from a standard normal distribution
random_error[:5]
Y = np.sin(X) + random_error # y = sin(x)+a where a is a small random error
Y[:5]
Explanation: 1. Generate x = a sequence of points, y = sin(x)+a where a is a small random error.
End of explanation
## Type Your Answer Below ##
plt.scatter(x=X, y=Y, marker='o', alpha=0.4, color='b')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Y=sin(X) + random_error')
print('X: ', X.shape, ' ', 'Y: ', Y.shape )
Explanation: 2. Draw a scatter plot of x and y.
End of explanation
## Type Your Answer Below ##
# reshape X from row vector in shape(100, ) to column vector in shape (100, 1)
X_re = X.reshape(X.shape[0], 1)
X_re.shape
# initiate a linear regression model
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr
# Use train_test_split to train and test lr
from sklearn import model_selection
Xtrain, Xtest, Ytrain, Ytest = model_selection.train_test_split(X_re, Y, train_size=70, random_state=1)
print(Xtrain.shape, Xtest.shape, Ytrain.shape, Ytest.shape)
lr.fit(Xtrain, Ytrain)
Ypred = lr.predict(Xtest)
print('The mathematical formula of linear regression model: ', 'Y = ' + str(lr.coef_) + '*' + 'X + ' + str(lr.intercept_), '\n')
print('The coefficient of determination R^2 of the training set: ', lr.score(Xtrain, Ytrain), '\n')
print('The coefficient of determination R^2 of the testing set: ', lr.score(Xtest, Ytest), '\n')
plt.scatter(Ytest, Ypred, marker='o', alpha=0.5)
plt.xlabel('Ytest')
plt.ylabel('Ypred')
plt.title('Linear regression model performance')
# Get the training and validation score of your model
# what exactly do the training and validation scores refer to here?
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(lr, X_re, Y, cv=3) # 3-fold cross validation
print('cv_scores: ', cv_scores)
print('mean of cv_scores: ', cv_scores.mean())
#The mean score and the 95% confidence interval of the score estimate are hence given by:
print("Accuracy: %0.2f (+/- %0.2f)" % (cv_scores.mean(), cv_scores.std() * 2))
Explanation: 3. Use linear regression model to predict y, with only one feature--x. Please print out the training and validation score of your model and the mathematical formula of your model.
You need to split the data into training and testing data before you build the model. This is the same procedure you need to do in the following questions.
End of explanation
## Type Your Answer Below ##
# show predicted y in red color
Ypred = lr.predict(X_re)
plt.plot(X, Ypred, label='Predicted Y', color='r')
# show real y in blue color
plt.scatter(X, Y, label='Real Y', color='b')
# show ground truth - sin(X) in green color
Yground = np.sin(X)
plt.scatter(X, Yground, label='Ground truth Y', color='g')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Three types of Y in a plot')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: How should we interpret the negative cv_scores? (Most likely: cross_val_score uses the R^2 scorer by default for regressors, and R^2 can drop below zero on a validation fold when the model fits worse than simply predicting the mean of y.)
4. Draw a plot showing your predicted y, real y, and ground truth--sin(x) of x.
End of explanation
## Type Your Answer Below ##
X2 = X_re**2
X2 = np.hstack([X_re, X2])
print(X2.shape)
lr2 = LinearRegression()
lr2.fit(X2, Y)
cv_scores2 = cross_val_score(lr2, X2, Y, cv=3)
print('cv_scores for model using x and x^2: ', cv_scores2)
print('mean of cv_scores for model using x and x^2: ', cv_scores2.mean())
#The mean score and the 95% confidence interval of the score estimate are hence given by:
print("Accuracy: %0.2f (+/- %0.2f)" % (cv_scores2.mean(), cv_scores2.std() * 2))
print('The mathematical formula of linear regression model: ', 'Y = ' + str(lr2.coef_[0]) + '*X + ' + str(lr2.coef_[1]) + "*X^2 + " + str(lr2.intercept_), '\n') # use lr2.intercept_, not lr.intercept_
# visualize new set of Ypred, Y, Yground_truth
Ypred2 = lr2.predict(X2)
Yground = np.sin(X)
plt.scatter(X, Ypred2, label='predicted y using x and x**2', color='r')
plt.scatter(X, Y, label='real y', color='b')
plt.scatter(X, Yground, label='ground truth - sin(x)', color='g')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Three types of Y in a plot')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: 5. Try to build a linear model using two features--x and x^2. Please print out the training and validation score and mathematical formula.
End of explanation
from sklearn.model_selection import validation_curve
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
index =[] # generate an array with number 1 to 15
for i in range(1, 16):
index.append(i)
df = pd.DataFrame(columns = index) # create a new dataframe with 15 columns
df.iloc[:, 0] = X # the 1st column is X**1
mean_cv_scores = []
mean_train_scores = []
mean_valid_scores= []
for i in index:
print("################ Adding " + "x**" + str(i) + " ######################")
df.loc[:, i] = X**i # Add a new column of values
lr = LinearRegression() # start a new linear regression model with the new column taking into consideration
#lr.fit(df.iloc[:, :i], Y)
#Ypredict = lr.predict(df.iloc[:, :i])
cv_scores = cross_val_score(lr, df.iloc[:, :i], Y, cv=3)
print("mean cv score for the model is:", np.mean(cv_scores))
mean_cv_scores.append(np.mean(cv_scores))
train_score, valid_score = validation_curve(Ridge(), df.iloc[:, :i], Y, "alpha", np.logspace(-7, 3, 3))
print("mean train score is: ", np.mean(train_score))
print("mean valid score is: ", np.mean(valid_score))
mean_train_scores.append(np.mean(train_score))
mean_valid_scores.append(np.mean(valid_score))
print()
plt.plot(df.columns, mean_train_scores, c='b', label='mean train scores') #plot the training score and validation score showing what happens when feature set gets larger
plt.plot(df.columns, mean_valid_scores, c='r', label = 'mean valid scores')
plt.xlabel('feature')
plt.ylabel('mean of evaluation scores')
plt.legend(loc=0)
plt.plot(df.columns, mean_cv_scores, label='mean cv scores') #plot the training score and validation score showing what happens when feature set gets larger
plt.xlabel('feature')
plt.ylabel('mean of cross validation score')
plt.legend(loc=0)
Explanation: 6. Try to build linear models with features from x to x, x^2, x^3,... x^15, and plot the changes of training score and validation score as the number of features gets larger. According to the result you get, what's the best number of features here?
In this question, you need to build 15 models, with features of [x],[x,x^2],[x,x^2,x^3],...,[x,x^2,...,x^15]. For each model you need to calculate the training score and validation score then make the plot as we required.
End of explanation |
11,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
< 04 - Time and Chronology | Home | 06 - Stable Roommates, Marriages, and Gender >
Cliques and Communities
Step1: Communities are just as important in the social structure of novels as they are in real-world social structures. They're also just as obvious - it's easy to think of a tight cluster of characters in your favourite novel which are isolated from the rest of the story. Visually, they're also quite apparent. Refer back to the Dursley's little clique which we saw back in notebook 3.
However, NetworkX's clique finding algorithm isn't ideal - it enumerates all cliques, giving us plenty of overlapping cliques which aren't that descriptive of the existing communities. In mathematical terms, we want to maximise the modularity of the whole graph at once.
A cleaner solution to our problem is the python implementation of louvain community detection given here by Thomas Aynaud.
Step2: This implementation of louvain modularity is a very smart piece of maths, first given by Blondel et al in Fast unfolding of communities in large networks. If you're not a mathematical reader, just skip over this section and go straight to the results.
If you are interested in the maths, it goes roughly like this
Step3: Sweet - that works nicely. We can wrap this up neatly into a single function call | Python Code:
from bookworm import *
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12,9)
import pandas as pd
import numpy as np
import networkx as nx
Explanation: < 04 - Time and Chronology | Home | 06 - Stable Roommates, Marriages, and Gender >
Cliques and Communities
End of explanation
import community
Explanation: Communities are just as important in the social structure of novels as they are in real-world social structures. They're also just as obvious - it's easy to think of a tight cluster of characters in your favourite novel which are isolated from the rest of the story. Visually, they're also quite apparent. Refer back to the Dursley's little clique which we saw back in notebook 3.
However, NetworkX's clique finding algorithm isn't ideal - it enumerates all cliques, giving us plenty of overlapping cliques which aren't that descriptive of the existing communities. In mathematical terms, we want to maximise the modularity of the whole graph at once.
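You can see the problem for yourself by listing a few of them (a quick sketch; book is the graph built in the next code cell):
cliques = list(nx.find_cliques(book))
print(len(cliques), 'maximal cliques found - the first few overlap heavily:')
print(cliques[:5])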
A cleaner solution to our problem is the python implementation of louvain community detection given here by Thomas Aynaud.
End of explanation
book = nx.from_pandas_dataframe(bookworm('data/raw/hp_chamber_of_secrets.txt'),
source='source',
target='target')
partitions = community.best_partition(book)
values = [partitions.get(node) for node in book.nodes()]
nx.draw(book,
cmap=plt.get_cmap("RdYlBu"),
node_color=values,
with_labels=True)
Explanation: This implementation of louvain modularity is a very smart piece of maths, first given by Blondel et al in Fast unfolding of communities in large networks. If you're not a mathematical reader, just skip over this section and go straight to the results.
If you are interested in the maths, it goes roughly like this:
We want to calculate a value Q between -1 and 1 for a partition of our graph, where $Q$ denotes the modularity of the network. Modularity is a comparative measure of the density within the communities in question and the density between them. A high modularity indicates a good splitting. Through successive, gradual changes to our labelling of nodes and close monitoring of the value of $Q$, we can optimise our partition(s). $Q$ and its change for each successve optimisation epoch ($\Delta Q$) are calculated in two stages, as follows.
$$
Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)
$$
$m$ is the sum of all of the edge weights in the graph
$A_{ij}$ represents the edge weight between nodes $i$ and $j$
$k_{i}$ and $k_{j}$ are the sum of the weights of the edges attached to nodes $i$ and $j$, respectively
$\delta$ is the delta function.
$c_{i}$ and $c_{j}$ are the communities of the nodes
First, each node in the network is assigned to its own community. Then for each node $i$, the change in modularity is calculated by removing $i$ from its own community and moving it into the community of each neighbor $j$ of $i$:
$$
\Delta Q = \left[\frac{\sum_{in} +\ k_{i,in}}{2m} - \left(\frac{\sum_{tot} +\ k_{i}}{2m}\right)^2 \right] -
\left[ \frac{\sum_{in}}{2m} - \left(\frac{\sum_{tot}}{2m}\right)^2 - \left(\frac{k_{i}}{2m}\right)^2 \right]
$$
$\sum_{in}$ is sum of all the weights of the links inside the community $i$ is moving into
$k_{i,in}$ is the sum of the weights of the links between $i$ and other nodes in the community
$m$ is the sum of the weights of all links in the network
$\Sigma _{tot}$ is the sum of all the weights of the links to nodes in the community
$k_{i}$ is the weighted degree of $i$
Once this value is calculated for all communities that $i$ is connected to, $i$ is placed into the community that resulted in the greatest modularity increase. If no increase is possible, $i$ remains in its original community. This process is applied repeatedly and sequentially to all nodes until no modularity increase can occur. Once this local maximum of modularity is hit, we move on to the second stage.
All of the nodes in the same community are grouped to create a new network, where nodes are the communities from the previous phase. Links between nodes within communities are represented by self loops on these new community nodes, and links from multiple nodes in the same community to a node in a different community are represented by weighted edges. The first stage is then applied to this new weighted network, and the process repeats.
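If you want to check the final Q value the algorithm settles on, python-louvain exposes it directly (a one-line sketch, assuming partitions and book are defined as in the accompanying code cell):
print('modularity Q =', community.modularity(partitions, book))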
Actually doing the thing
Let's load in a book and try applying python-louvain's implementation of this algorithm to it:
End of explanation
def draw_with_communities(book):
'''
draw a networkx graph with communities partitioned and coloured
according to their louvain modularity
Parameters
----------
book : nx.Graph (required)
the book graph to be visualised
'''
partitions = community.best_partition(book)
values = [partitions.get(node) for node in book.nodes()]
nx.draw(book,
cmap=plt.get_cmap("RdYlBu"),
node_color=values,
with_labels=True)
book = nx.from_pandas_dataframe(bookworm('data/raw/fellowship_of_the_ring.txt'),
source='source',
target='target')
draw_with_communities(book)
Explanation: Sweet - that works nicely. We can wrap this up neatly into a single function call
End of explanation |
11,615 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I use linear SVM from scikit learn (LinearSVC) for binary classification problem. I understand that LinearSVC can give me the predicted labels, and the decision scores but I wanted probability estimates (confidence in the label). I want to continue using LinearSVC because of speed (as compared to sklearn.svm.SVC with linear kernel) Is it reasonable to use a logistic function to convert the decision scores to probabilities? | Problem:
import numpy as np
import pandas as pd
import sklearn.svm as suppmach
X, y, x_test = load_data()
assert type(X) == np.ndarray
assert type(y) == np.ndarray
assert type(x_test) == np.ndarray
# Fit model:
svmmodel=suppmach.LinearSVC()
from sklearn.calibration import CalibratedClassifierCV
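# method='sigmoid' is Platt scaling: a logistic (sigmoid) function is fit to the
# LinearSVC decision scores on held-out folds, which is exactly the "logistic function
# to convert the decision scores to probabilities" idea asked about above.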
calibrated_svc = CalibratedClassifierCV(svmmodel, cv=5, method='sigmoid')
calibrated_svc.fit(X, y)
proba = calibrated_svc.predict_proba(x_test) |
11,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Built-in RNN layers
Step3: Built-in RNNs support a number of useful features
Step4: In addition, a RNN layer can return its final internal state(s). The returned states
can be used to resume the RNN execution later, or
to initialize another RNN.
This setting is commonly used in the
encoder-decoder sequence-to-sequence model, where the encoder final state is used as
the initial state of the decoder.
To configure a RNN layer to return its internal state, set the return_state parameter
to True when creating the layer. Note that LSTM has 2 state tensors, but GRU
only has one.
To configure the initial state of the layer, just call the layer with additional
keyword argument initial_state.
Note that the shape of the state needs to match the unit size of the layer, like in the
example below.
Step5: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs.
Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only
processes a single timestep.
The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a
keras.layers.RNN layer gives you a layer capable of processing batches of
sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,
the implementation of this layer in TF v1.x was just creating the corresponding RNN
cell and wrapping it in a RNN layer. However using the built-in GRU and LSTM
layers enable the use of CuDNN and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN
layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, make it
very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the
pattern of cross-batch statefulness.
Normally, the internal state of a RNN layer is reset every time it sees a new batch
(i.e. every sample seen by the layer is assumed to be independent of the past). The
layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter
sequences, and to feed these shorter sequences sequentially into a RNN layer without
resetting the layer's state. That way, the layer can retain information about the
entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via
Step6: RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in the layer.weights(). If you
would like to reuse the state from a RNN layer, you can retrieve the states value by
layer.states and use it as the
initial state for a new layer via the Keras functional API like new_layer(inputs,
initial_state=layer.states), or model subclassing.
Please also note that sequential model might not be used in this case since it only
supports layers with single input and output, the extra input of initial state makes
it impossible to use here.
Step7: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model
can perform better if it not only processes sequence from start to end, but also
backwards. For example, to predict the next word in a sentence, it is often useful to
have the context around the word, not only just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs
Step8: Under the hood, Bidirectional will copy the RNN layer passed in, and flip the
go_backwards field of the newly copied layer, so that it will process the inputs in
reverse order.
The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer
output and the backward layer output. If you need a different merging behavior, e.g.
concatenation, change the merge_mode parameter in the Bidirectional wrapper
constructor. For more details about Bidirectional, please check
the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN
kernels by default when a GPU is available. With this change, the prior
keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your
model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, this means the layer will
not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or
GRU layers. E.g.
Step9: Let's load the MNIST dataset
Step10: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The
output of the model has shape of [batch_size, 10]. The target for the model is an
integer vector, each of the integer is in the range of 0 to 9.
Step11: Now, let's compare to a model that does not use the CuDNN kernel
Step12: When running on a machine with a NVIDIA GPU and CuDNN installed,
the model built with CuDNN is much faster to train compared to the
model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only
environment. The tf.device annotation below is just forcing the device placement.
The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that
pretty cool?
Step13: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single
timestep. For example, a video frame could have audio and video input at the same
time. The data shape in this case could be
Step14: Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell
we just defined.
Step15: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for
demonstration. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/working_with_rnns.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
Recurrent neural networks (RNN) are a class of neural networks that is powerful for
modeling sequence data such as time series or natural language.
Schematically, a RNN layer uses a for loop to iterate over the timesteps of a
sequence, while maintaining an internal state that encodes information about the
timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM,
keras.layers.GRU layers enable you to quickly build recurrent models without
having to make difficult configuration choices.
Ease of customization: You can also define your own RNN cell layer (the inner
part of the for loop) with custom behavior, and use it with the generic
keras.layers.RNN layer (the for loop itself). This allows you to quickly
prototype different research ideas in a flexible way with minimal code.
Setup
End of explanation
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN, a fully-connected RNN where the output from previous
timestep is to be fed to next timestep.
keras.layers.GRU, first proposed in
Cho et al., 2014.
keras.layers.LSTM, first proposed in
Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM
and GRU.
Here is a simple example of a Sequential model that processes sequences of integers,
embeds each integer into a 64-dimensional vector, then processes the sequence of
vectors using a LSTM layer.
End of explanation
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a large speedup when processing short sequences on
CPU), via the unroll argument
...and more.
For more information, see the
RNN API documentation.
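For instance, these options are all plain constructor arguments (illustrative values only; note that some of them, such as recurrent_dropout, disable the CuDNN kernel discussed later):
demo_lstm = layers.LSTM(
    64,
    dropout=0.2,            # dropout on the layer inputs
    recurrent_dropout=0.2,  # dropout on the recurrent state
    go_backwards=True,      # process the input sequence in reverse
    unroll=False,           # set True to unroll the loop for short sequences on CPU
)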
Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector
is the RNN cell output corresponding to the last timestep, containing information
about the entire input sequence. The shape of this output is (batch_size, units)
where units corresponds to the units argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector
per timestep per sample), if you set return_sequences=True. The shape of this output
is (batch_size, timesteps, units).
End of explanation
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
Explanation: In addition, a RNN layer can return its final internal state(s). The returned states
can be used to resume the RNN execution later, or
to initialize another RNN.
This setting is commonly used in the
encoder-decoder sequence-to-sequence model, where the encoder final state is used as
the initial state of the decoder.
To configure a RNN layer to return its internal state, set the return_state parameter
to True when creating the layer. Note that LSTM has 2 state tensors, but GRU
only has one.
To configure the initial state of the layer, just call the layer with additional
keyword argument initial_state.
Note that the shape of the state needs to match the unit size of the layer, like in the
example below.
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
Explanation: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs.
Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only
processes a single timestep.
The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a
keras.layers.RNN layer gives you a layer capable of processing batches of
sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,
the implementation of this layer in TF v1.x was just creating the corresponding RNN
cell and wrapping it in a RNN layer. However using the built-in GRU and LSTM
layers enable the use of CuDNN and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN
layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, make it
very easy to implement custom RNN architectures for your research.
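As a minimal sketch of the wrapping described above:
cell = keras.layers.GRUCell(64)
cell_based_gru = keras.layers.RNN(cell)  # behaves like keras.layers.GRU(64), minus the CuDNN kernel
outputs = cell_based_gru(tf.zeros([8, 10, 16]))  # (batch, timesteps, features) -> (batch, units)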
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the
pattern of cross-batch statefulness.
Normally, the internal state of a RNN layer is reset every time it sees a new batch
(i.e. every sample seen by the layer is assumed to be independent of the past). The
layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter
sequences, and to feed these shorter sequences sequentially into a RNN layer without
resetting the layer's state. That way, the layer can retain information about the
entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
When you want to clear the state, you can use layer.reset_states().
Note: In this setup, sample i in a given batch is assumed to be the continuation of
sample i in the previous batch. This means that all batches should contain the same
number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100,
sequence_B_from_t0_to_t100], the next batch should contain
[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].
Here is a complete example:
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
Explanation: RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in the layer.weights(). If you
would like to reuse the state from a RNN layer, you can retrieve the states value by
layer.states and use it as the
initial state for a new layer via the Keras functional API like new_layer(inputs,
initial_state=layer.states), or model subclassing.
Please also note that sequential model might not be used in this case since it only
supports layers with single input and output, the extra input of initial state makes
it impossible to use here.
End of explanation
model = keras.Sequential()
model.add(
layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
Explanation: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model
can perform better if it not only processes sequence from start to end, but also
backwards. For example, to predict the next word in a sentence, it is often useful to
have the context around the word, not only just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs: the
keras.layers.Bidirectional wrapper.
End of explanation
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
Explanation: Under the hood, Bidirectional will copy the RNN layer passed in, and flip the
go_backwards field of the newly copied layer, so that it will process the inputs in
reverse order.
The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer
output and the backward layer output. If you need a different merging behavior, e.g.
summation, change the merge_mode parameter in the Bidirectional wrapper
constructor. For more details about Bidirectional, please check
the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN
kernels by default when a GPU is available. With this change, the prior
keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your
model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, this means the layer will
not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or
GRU layers. E.g.:
Changing the activation function from tanh to something else.
Changing the recurrent_activation function from sigmoid to something else.
Using recurrent_dropout > 0.
Setting unroll to True, which forces LSTM/GRU to decompose the inner
tf.while_loop into an unrolled for loop.
Setting use_bias to False.
Using masking when the input data is not strictly right padded (if the mask
corresponds to strictly right padded data, CuDNN can still be used. This is the most
common case).
For the detailed list of constraints, please see the documentation for the
LSTM and
GRU layers.
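For example, the following layer (a sketch reusing the units variable defined in the code above) silently falls back to the generic kernel because its activation is no longer tanh:
fallback_lstm = keras.layers.LSTM(units, activation="relu")  # will not use the CuDNN kernel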
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of
pixels as a timestep), and we'll predict the digit's label.
End of explanation
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
Explanation: Let's load the MNIST dataset:
End of explanation
model = build_model(allow_cudnn_kernel=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The
output of the model has a shape of [batch_size, 10]. The target for the model is an
integer vector; each integer is in the range of 0 to 9.
End of explanation
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Now, let's compare to a model that does not use the CuDNN kernel:
End of explanation
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
Explanation: When running on a machine with a NVIDIA GPU and CuDNN installed,
the model built with CuDNN is much faster to train compared to the
model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only
environment. The tf.device annotation below is just forcing the device placement.
The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that
pretty cool?
End of explanation
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3}
Explanation: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single
timestep. For example, a video frame could have audio and video input at the same
time. The data shape in this case could be:
[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]
In another example, handwriting data could have both coordinates x and y for the
current position of the pen, as well as pressure information. So the data
representation could be:
[batch, timestep, {"location": [x, y], "pressure": [force]}]
The following code provides an example of how to build a custom RNN cell that accepts
such structured inputs.
Define a custom cell that supports nested input/output
See Making new Layers & Models via subclassing
for details on writing your own layers.
End of explanation
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
Explanation: Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell
we just defined.
End of explanation
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
Explanation: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for
demonstration.
End of explanation |
11,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KNN
Motivation
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure
Step1: Remove Columns
Step2: Which are the factors?
Step3: Pre-Processing | Python Code:
import pandas
import numpy
import csv
#from scipy.stats import mode
from sklearn import neighbors
from sklearn.neighbors import DistanceMetric
from pprint import pprint
MY_TITANIC_TRAIN = 'train.csv'
MY_TITANIC_TEST = 'test.csv'
titanic_dataframe = pandas.read_csv(MY_TITANIC_TRAIN, header=0)
print('length: {0} '.format(len(titanic_dataframe)))
titanic_dataframe.head(5)
Explanation: KNN
Motivation
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard Euclidean distance is the most common choice. Neighbors-based methods are known as non-generalizing machine learning methods, since they simply “remember” all of its training data
~scikit-learn
It's a beautiful day in this neighborhood,
A beautiful day for a neighbor.
Would you be mine?
Could you be mine?
~ Mr. Rogers
Readings:
* openCV: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_ml/py_knn/py_knn_understanding/py_knn_understanding.html
* dataquest: https://www.dataquest.io/blog/k-nearest-neighbors/
* k-d tree: https://ashokharnal.wordpress.com/2015/01/20/a-working-example-of-k-d-tree-formation-and-k-nearest-neighbor-algorithms/
* euclidean: http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/
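To make the "find the closest training samples" idea concrete, here is a tiny illustrative sketch (not the code used in this lab, which relies on scikit-learn's KNeighborsClassifier):
import numpy as np
def k_nearest_labels(X_train, y_train, x_new, k=5):
    # Euclidean distance from the new point to every training sample
    distances = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    nearest_idx = np.argsort(distances)[:k]
    return y_train[nearest_idx]  # a majority vote of these labels gives the prediction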
Data
End of explanation
titanic_dataframe.drop(['Name', 'Ticket', 'Cabin'], axis=1, inplace=True)
print('dropped')
titanic_dataframe.describe()
Explanation: Remove Columns
End of explanation
titanic_dataframe.info()
Explanation: Which are the factors?
End of explanation
# age_mean = numpy.mean(titanic_dataframe['Age'])
titanic_dataframe['Age'].fillna(numpy.mean(titanic_dataframe['Age']),inplace=True)
# titanic_dataframe.fillna(value=age_mean, axis=0)
titanic_dataframe.info()
titanic_dataframe.info()
# titanic_dataframe = titanic_dataframe.dropna()
titanic_dataframe['Embarked'].fillna(titanic_dataframe['Embarked'].mode().item(),inplace=True)
titanic_dataframe['Port'] = titanic_dataframe['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
titanic_dataframe['Gender'] = titanic_dataframe['Sex'].map({'female': 0, 'male': 1}).astype(int)
titanic_dataframe = titanic_dataframe.drop(['Sex', 'Embarked', 'PassengerId', ], axis=1)
titanic_dataframe.info()
#Convert Columns to List
cols = titanic_dataframe.columns.tolist()
titanic_dataframe = titanic_dataframe[cols]
train_cols = [x for x in cols if x != 'Survived']
target_cols = [cols[0]]
print(train_cols, target_cols)
train_data = titanic_dataframe[train_cols]
target_data = titanic_dataframe[target_cols]
algorithm_data_model = neighbors.KNeighborsClassifier()
algorithm_data_model.fit(train_data.values, [value[0] for value in target_data.values])
df_test = pandas.read_csv('test.csv')
ids = df_test.PassengerId.values
df_test.drop(['Name', 'Ticket', 'Cabin', 'PassengerId'], axis=1, inplace=True)
print(len(df_test))
df_test.info()
mean_age = df_test.Age.mean()
df_test.Age.fillna(mean_age, inplace=True)
mean_fare = df_test.Fare.mean()
df_test.Fare.fillna(mean_fare, inplace=True)
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male': 1}).astype(int)
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
df_test.info()
titanic_dataframe.info()
output = algorithm_data_model.predict(df_test).astype(int)
print(output[:10])
result = numpy.c_[ids.astype(int), output]
print(result)
prediction_file = open('ourpredictions.csv', 'w')
open_file = csv.writer(prediction_file)
open_file.writerow(['PassengerId', 'Survived'])
open_file.writerows(zip(ids, output))
prediction_file.close()
%timeit algorithm_data_model.predict(df_test).astype(int)
Explanation: Pre-Processing
End of explanation |
11,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here is an example of simulated sea surface height from the NEMO model run at 1/4°, represented as an xarray.DataArray object. Note that a dask array can be used by specifying chunks.
Step1: A window object that is linked to the previous data is created by the following line
Step2: Temporal filtering
Step3: The weight distribution and the frequency response of the window may be plotted for one dimensional windows using the plot method
Step4: This window can now be applied on the data by the associated procedure
Step5: By default the filtering is computed when using the apply function. If compute is set to False, the filtering will be computed only when the output data is required. This allows several dask objects to be defined before the global computation is performed.
Temporal filtering
Step6: The following figure gives the comparison between the boxcar and the lanczos window applied for the low-pass filtering at one grid point of the dataset.
Step7: Spatial filtering
The window object extends to multidimensional filtering, such as two-dimensional spatial filtering. The filtering method used is able to deal with missing data or coastlines by reweighting the filter weights. For example, a 2D Lanczos window may thus be associated with a dataset.
Step8: Original dataset
Temporal standard deviation of the raw timeseries
Step9: Large Scales (>6°)
Temporal standard deviation of the large field
Step10: Small Scales (<6°)
Temporal standard deviation of the small-scale field
signal_xyt = xr.open_dataset(sigdir + test_file, decode_times=False)['sossheig'].chunk(chunks={'time_counter': 50})
print signal_xyt
signal_xyt.isel(time_counter=0).plot(vmin=-0.07, vmax=0.07, cmap='seismic')
Explanation: Here is an example of simulated sea surface height from the NEMO model run at 1/4°, represented as an xarray.DataArray object. Note that a dask array can be used by specifying chunks.
End of explanation
win1D = signal_xyt.win
Explanation: A window object that is linked to the previous data is created by the following line:
End of explanation
win1D.set(window_name='boxcar', n=[5], dims=['time_counter'])
print win1D._depth.values()
print win1D
Explanation: Temporal filtering: Boxcar window
A boxcar window object that will be applied along the time dimension is simply defined by setting its different properties:
End of explanation
win1D.plot()
Explanation: The weight distribution and the frequency response of the window may be plotted for one dimensional windows using the plot method:
End of explanation
signal_LF_box = win1D.apply(compute=True)
Explanation: This window can now be applied on the data by the associated procedure:
End of explanation
win1D.set(window_name='lanczos', n=[5], dims=['time_counter'], fc=0.1)
print win1D
win1D.plot()
signal_LF_lcz = win1D.apply(compute=True)
Explanation: By default the filtering is computed when using the apply function. If compute is set to False, the filtering will be computed only when the output data is required. This allows several dask objects to be defined before the global computation is performed.
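As a minimal illustration of this lazy behaviour (a sketch only; it assumes, as stated above, that apply(compute=False) returns a dask-backed xarray object):
signal_LF_lazy = win1D.apply(compute=False)  # builds the task graph, no filtering performed yet
# ... further lazy operations can be defined here ...
# the actual computation is triggered only when values are needed,
# e.g. via signal_LF_lazy.compute() or when plotting (assumption: standard dask/xarray behaviour)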
Temporal filtering: Lanczos window
Now setting different properties to use a Lanczos window:
End of explanation
signal_xyt.isel(x=50, y=50).plot()
signal_LF_box.isel(x=50, y=50).plot(color='g')
signal_LF_lcz.isel(x=50, y=50).plot(color='r')
plt.legend(['raw', 'boxcar 10yr', 'lanczos 10yr'])
Explanation: The following figure gives the comparison between the boxcar and the lanczos window applied for the low-pass filtering at one grid point of the dataset.
End of explanation
signal_xyt = xr.open_dataset(sigdir + test_file, decode_times=False)['sossheig'].chunk(chunks={'x': 40, 'y':40})
win_box2D = signal_xyt.win
win_box2D.set(window_name='lanczos', n=[24, 24], dims=['x', 'y'], fc=0.0416)
print signal_xyt
print win_box2D
win_box2D.plot()
Explanation: Spatial filtering
The window object extends to multidimensional filtering, such as two-dimensional spatial filtering. The filtering method used is able to deal with missing data or coastlines by reweighting the filter weights. For example, a 2D Lanczos window may thus be associated with a dataset.
End of explanation
bw = win_box2D.boundary_weights()
bw.isel(time_counter=1).plot(vmin=0, vmax=1, cmap='spectral')
signal_LS = win_box2D.apply(weights=bw)
# Original dataset
signal_xyt.std(dim='time_counter').plot(cmap='jet', vmin=0, vmax=0.06)
Explanation: Original dataset
Temporal standard deviation of the raw timeseries:
End of explanation
# Large-scale (>6°) dataset
signal_LS.std(dim='time_counter').plot(cmap='jet', vmin=0, vmax=0.06)
Explanation: Large Scales (>6°)
Temporal standard deviation of the large field:
End of explanation
# Small-scale (<6°) dataset
(signal_xyt-signal_LS).std(dim='time_counter').plot(cmap='jet', vmin=0, vmax=0.06)
Explanation: Small Scales (<6°)
Temporal standard deviation of the small-scale field:
End of explanation |
11,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Denavit Hartenberg Notation
Kevin Walchko
Created
Step1: Rise of the Robots
Robot arms and legs are hard to control (lots of math), but are required for most robotic applications.
<img src="pics/oe.png" width="50%">
DARPA Orbital Express, was a space robotics demo to perform on-orbit survicing (repair, refueling, etc). This demo proved if we build modular satellites, we can extend the on-orbit life.
<img src="pics/eod.jpg" width="50%">
EOD robots used to neutralize IEDs. Their arm is equiped with a variety of payloads to handle various types of IEDs.
<img src="pics/arm.jpg" width="50%">
Prosthetic limbs in the olden days were a wooden peg to replace your leg. Today, robotic arms and legs are becoming fully functional replacements, controlled via neural impulses from the person. People might tie these robotic limbs to the rise of cyborgs, but depending on your definition of cyborg, you could also call a person with an implanted pacemaker a cyborg (see definition below).
Cyborg
Step2: Position and Orientation
You should aready be familar with vectors from other courses. We are only going to deal with 2D (in a plane
Step3: Homogeneous Transforms
This is typically not how we do the math. Instead, robotics combines these opertions together in a form that becomes easier to program. A compact representation of the translation and rotation is known as the Homogeneous Transformation. This allows us to combine the rotation ($R^A_B$) and translation ($P^A_B$) of the general transform in a single matrix form.
$$
T^A_B = \begin{bmatrix}
& R^A_B & & P^A_B \
0 & 0 & 0 & 1
\end{bmatrix} \
\begin{bmatrix}
P^A \
1
\end{bmatrix} =
T^A_B
\begin{bmatrix}
P^B \
1
\end{bmatrix}
$$
This compact notation allows us to use numpy or matlab to write series of equations as matricies and do standard matrix operations on them. Now, as we attach frames to serial manipulator links, we will be able to combine these matricies together to calculate where the end effector is.
<img src="dh_pics/frame4.png" width="600px">
Step4: Denavit-Hartenberg (DH)
Now that we have a basic understanding of translation and rotation, we can look at a process (for serial manipulators) to automate it. We will use the DH method to develop the symbolic equations and use python sympy to simplify them so we can program the equations.
Process (Craig, section 3.4 & 3.6, pg 67)
Now the process laid out in Craig's book, defines some parameters for eash link in a robot arm
Step7: Simple 2D Example
Following the process for assigning frames to a manipulator, you get the following
Step8: So that looks a little messy with all of the sines and cosines ... let's use the python symbolic capabilities in sympy to help us reduce this a little and figure out where the end-effector is relative to the base (e.g. inertial frame).
Step9: Later, we will derive the same 2 link manipulator a different way and come up with the same equations for position of the end-effector in the x, y plane. For simple manipulators, DH is overkill, however, for most real manipulators (remember the DARPA videos), it is useful to understand what is going on if you do robotics. | Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from math import cos, sin, pi
from IPython.display import HTML # need this for embedding a movie in an iframe
Explanation: Denavit Hartenberg Notation
Kevin Walchko
Created: 10 July 2017
Denavit Hartenberg (DH) is an attempt to standardize how we represent serial manipulators (i.e., robot arms). It is typically one of the first ways you learn. It is really easy (methodical) to do forward kinematics, but becomes more challenging when doing inverse kinematics. Here we are going to introduce what is going on, but you need to focus on the DH process. If you follow the process, then all will work out fine. Don't get too hung up on the beginning math; understand the concepts so you can follow the DH process.
Objectives
understand coordinate frames (we will see these again)
apply rotations and translations to objects in 3d space (we will see these again)
calculate DH forward kinematics for a serial link mechanism
understand homogenous transformations (we will see these again)
understand Euler sequences (we will see these again)
References
Wikipedia modified DH
darpa robot challenge
darpa robot fails
Walking robot
Setup
End of explanation
# rotation example
# euler angles: 0 0 0
Rab = np.eye(3) # so this rotation transforms from b to a
Pb = np.array([0,0,1])
Pa = Rab.dot(Pb)
print('Original position', Pb)
print('\nRotation:\n', Rab)
print('\nNew orientation:', Pa)
# rotate 45 deg about x-axis
Rab = np.array([[1,0,0], [0,cos(pi/4), -sin(pi/4)], [0,sin(pi/4),cos(pi/4)]])
Pb = np.array([0,0,1])
Pa = Rab.dot(Pb)
print('Original position', Pb)
print('\nRotation:\n', Rab)
print('\nNew orientation:', Pa)
Explanation: Rise of the Robots
Robot arms and legs are hard to control (lots of math), but are required for most robotic applications.
<img src="pics/oe.png" width="50%">
DARPA Orbital Express was a space robotics demo to perform on-orbit servicing (repair, refueling, etc.). This demo proved that if we build modular satellites, we can extend their on-orbit life.
<img src="pics/eod.jpg" width="50%">
EOD robots are used to neutralize IEDs. Their arm is equipped with a variety of payloads to handle various types of IEDs.
<img src="pics/arm.jpg" width="50%">
Prosthetic limbs in the olden days were a wooden peg to replace your leg. Today, robotic arms and legs are becoming fully functional replacements, controlled via neural impulses from the person. People might tie these robotic limbs to the rise of cyborgs, but depending on your definition of cyborg, you could also call a person with an implanted pacemaker a cyborg (see definition below).
Cyborg: [noun] a person whose physiological functioning is aided by or dependent upon a mechanical or electronic device. ref
Kinematics of Serial Manipulators
Kinematics is the study of motion without regard to the forces which cause it. Kinematics of manipulators involves the study of the geometric and time based properties of the motion, and in particular how the various links move with respect to one another and with time.
Pose: The combination of position and orientation of a frame relative to a reference or inertial frame. Think Zoolander on a fashion shoot, "strike a pose"
This lesson will talk about matrix and vector operations. A nice review of the various mathematical operators is wikipedia. Take a look at how you multiply 2 matrices together (Matrix Product (two matrices)) and it will give you an idea of how it works. Ultimately we will use numpy to do these operations for us.
Coordinate Frames
We want to describe positions and orientations of bodies in space relative to a reference (or in some cases an inertial) frame. So if we wanted to know where something was, we could define it from this coordinate frame as $[x,y,z]$.
We are going to make this more complex and define multiple coordinate frames, which will all be rotated in some fashion, and try to determine a point (maybe the end-effector of a robot arm) relative to a base (inertial) reference frame.
Manipulators
There are 2 types of manipulators in robotics: serial and parallel. They are used in different applications for manufacturing.
Definitions:
Forward Kinematics: Given a robot's joint angles, where is the robot's end-effector located?
Inverse Kinematics: Given a point in 3D space where we want the end-effector, what are the joint angles to get us there?
For most robotic applications, we need to be able to calculate both the forward and reverse.
| Type | Pro | Con |
|----------|-------------------------|----------------------------|
| Serial | easy forward kinematics | complex inverse kinematics |
| Parallel | easy inverse kinematics | complex forward kinematics |
For this class we are going to focus on serial manipulators.
Serial Manipulators
We generally draw a simplified version of the manipulator and attach a “frame” to each rigid body link
The simplified version only has:
Revolute joints: joints that rotate
Prismatic joints: joints that move linearly (think telescoping)
By combining these in various combinations, we can make anything. For example, a spherical joint (ball-and-socket like your shoulder) is generally composed of 3 co-located (not physically real) rotational joints. This makes the math easier.
The frames follow the serial manipulator's body
All of our frames will be right-handed coordinate systems (RHS) ... sorry lefties
There's some freedom in how we choose the frame's position and orientation relative to the body
Denavit-Hartenberg (DH) notation partially standardizes this process, however, there are classical DH parameters and modified DH parameters. We are using modified (see reading in Craig). In the end you end up with the same equations, however, some people (i.e., Craig) didn't like how the classical notation was written.
An example (sort of) is shown below. Notice the real robot is represented as a simpler drawing of little metal bars:
The KR270 has 6 joints and is a standard industrial type robot for manufacturing. Notice again, the wrist, is represented as 2 revolute joints co-located (on top of each other) which is not realistic.
Rotation Matrix
A rotation matrix transforms (or rotates) a point from one location to another. If we start off simple and look at a 2D matrix:
$$
R =
\begin{bmatrix}
cos(\theta) & -sin(\theta) \
sin(\theta) & cos(\theta)
\end{bmatrix}
$$
If I have a 2D point located at $v = [x,y]$ and I want to rotate it (or turn it in the 2D plane), I can do that with this matrix as $v' = Rv$ where $v'$ is the new 2D position.
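A quick numerical sketch of that 2D case (using the numpy/math imports from the setup cell):
theta = pi / 2
R2 = np.array([[cos(theta), -sin(theta)], [sin(theta), cos(theta)]])
print(R2.dot(np.array([1, 0])))  # approximately [0, 1]: the point rotated 90 deg in the plane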
If we expand this to 3D, so our matrix would be a 3x3, we can start to do things like this:
Now, please note, the beginning of this gif starts off with a translation (or movement), then does a rotation as the cube spins around. Another great example is your arm! If you rotate your elbow, your hand moves. You can define that rotation by a matrix operation. Also note, we are not changing the length or size of anything when we do a rotation. We are just moving something (usually in some sort of circular/arc fashion) from one place to another. Again, when you rotate your arm, it doesn't change size does it? If it does, go see a doctor!
Properties
A nice thing about rotation matrices is:
$$
R^B_A = (R^A_B)^T = (R^A_B)^{-1}
$$
This is because they are orthonormal (each row and column is a unit vector, and the dot product of any two different rows or columns is 0). This means a matrix inverse and a matrix transpose produce the same result, which is good since a matrix inverse is CPU intensive compared to a matrix transpose (shown below).
$$
R = \begin{bmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9
\end{bmatrix} \
R^T = \begin{bmatrix}
1 & 4 & 7 \
2 & 5 & 8 \
3 & 6 & 9
\end{bmatrix} = inv(R) = R^{-1}
$$
Notice a transpose just turns matrix columns into matrix rows ... no math!
Note: the magnitude of the rows and columns of a real rotation matrix are 1. The example above is sloppy in that fact. I was lazy and just wanted to remind you how transpose worked.
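A quick numerical sanity check of this property (a sketch using the 45° rotation from the other cells):
Rx45 = np.array([[1, 0, 0], [0, cos(pi/4), -sin(pi/4)], [0, sin(pi/4), cos(pi/4)]])
print(np.allclose(Rx45.T, np.linalg.inv(Rx45)))  # True: transpose equals inverse for a rotation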
End of explanation
# rotate 45 deg about x-axis
# add a translation in now
Rab = np.array([[1,0,0], [0,cos(pi/4), -sin(pi/4)], [0,sin(pi/4),cos(pi/4)]])
Pb = np.array([0,0,1])
Pab = np.array([3,0,0]) # {b} is 3 units infront of {a}
Pa = Pab + Rab.dot(Pb)
print('Original position', Pb)
print('\nRotation:\n', Rab)
print('\nTranslation:\n', Pab)
print('\nNew position:', Pa)
Explanation: Position and Orientation
You should already be familiar with vectors from other courses. We are only going to deal with 2D (in a plane: x, y) or 3D (x, y, z) vectors for our position. We are going to have to reference the position of an object relative to multiple reference frames. This section is going to lay the basic mathematical foundation.
<img src="dh_pics/frame1.png" width="400px">
Now the position/orientation, or pose, of {B} relative to {A} is:
$$
P^A = R^A_B P^B
$$
Now ultimately, we want to know the location of a point in {B} relative to an inertial frame (say the base of our robot arm) in {A}.
<img src="dh_pics/frame2.png" width="400px">
<img src="dh_pics/frame3.png" width="400px">
What the final equation means is the position of $P^B$ in frame A is equal to the offset between {A} and {B} (i.e., $P^A_B$) plus the change in orientation between {A} and {B} (i.e., $R^A_B P^B$).
End of explanation
# Now let's do the combined homogenious matrix and see if we get the same answer
# rotate 45 deg about x-axis
# add a translation in now
Tab = np.array([[1,0,0,3], [0,cos(pi/4), -sin(pi/4),0], [0,sin(pi/4),cos(pi/4),0],[0,0,0,1]])
Pb = np.array([0,0,1,1])
Pa = Tab.dot(Pb)
print('Original position', Pb)
print('\nRotation and translation:\n', Tab)
print('\nNew position:', Pa)
Explanation: Homogeneous Transforms
This is typically not how we do the math. Instead, robotics combines these operations together in a form that becomes easier to program. A compact representation of the translation and rotation is known as the Homogeneous Transformation. This allows us to combine the rotation ($R^A_B$) and translation ($P^A_B$) of the general transform in a single matrix form.
$$
T^A_B = \begin{bmatrix}
& R^A_B & & P^A_B \
0 & 0 & 0 & 1
\end{bmatrix} \
\begin{bmatrix}
P^A \
1
\end{bmatrix} =
T^A_B
\begin{bmatrix}
P^B \
1
\end{bmatrix}
$$
This compact notation allows us to use numpy or matlab to write a series of equations as matrices and do standard matrix operations on them. Now, as we attach frames to serial manipulator links, we will be able to combine these matrices together to calculate where the end effector is.
<img src="dh_pics/frame4.png" width="600px">
End of explanation
HTML('<iframe src="https://player.vimeo.com/video/238147402" width="640" height="360" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe><p><a href="https://vimeo.com/238147402">Denavit–Hartenberg parameters</a> from <a href="https://vimeo.com/user59907133">kevin</a> on <a href="https://vimeo.com">Vimeo</a>.</p>')
Explanation: Denavit-Hartenberg (DH)
Now that we have a basic understanding of translation and rotation, we can look at a process (for serial manipulators) to automate it. We will use the DH method to develop the symbolic equations and use python sympy to simplify them so we can program the equations.
Process (Craig, section 3.4 & 3.6, pg 67)
The process laid out in Craig's book defines some parameters for each link in a robot arm:
| | | |
|------------|:--------------|:------------------------------------------------------|
| $a_i$ | link length | distance from $z_i$ to $z_{i+1}$ measured along $x_i$ |
| $d_i$ | offset | distance from $x_{i-1}$ to $x_i$ along $z_i$ |
| $\alpha_i$ | twist | angle from $z_i$ to $z_{i+1}$ measured about $x_i$ |
| $\theta_i$ | rotation | angle from $x_{i−1}$ to $x_i$ measured about $z_i$ |
Note: on a quiz or GR I will give you these definitions, I don't even memorize them. But know the process below
Summary of steps:
Identify the joint axes and imagine (or draw) infinite lines along them. For steps 2 through 5 below, consider two of these neighboring lines (at axes i and i+1).
Identify the common perpendicular between them, or their point of intersection. At the point of intersection, or at the point where the common perpendicular meets the $i^{th}$ axis, assign the link-frame origin.
Assign the $Z_i$ axis pointing along the $i^{th}$ joint axis.
Assign the $X_i$ axis pointing along the common perpendicular, or, if the axes intersect, assign $X_i$ to be normal to the plane containing the two axes.
Assign the y axis to complete a right-hand coordinate system ... honestly, don't draw the y-axis
Assign {0} to match {1} when the first joint variable is zero. For {N}, choose an origin location and $X_N$ direction freely, but generally so as to cause as many linkage parameters as possible to become zero.
Now, once you have the parameters, you can enter them into the following matrix to get the relationship between frame i and frame i+1. Note that these are not Euler angles, but rather:
A translation along z by d
Rotation about z by $\theta$
translation along x by a
Rotation about x by $\alpha$
This sequence turns into the following homogeneous transform:
$$
\begin{eqnarray}
T^{i-1}_i = R_x(\alpha_{i-1}) D_x(a_{i-1}) R_z(\theta_i) D_z(d_i) \
\
T^{i-1}_i = \begin{bmatrix}
\cos(\theta_i) & -\sin(\theta_i) & 0 & a_{i-1} \
\sin(\theta_i)\cos(\alpha_{i-1}) & \cos(\theta_i)\cos(\alpha_{i-1}) & -\sin(\alpha_{i-1}) & -\sin(\alpha_{i-1})d_i \
\sin(\theta_i)\sin(\alpha_{i-1}) & \cos(\theta_i)\sin(\alpha_{i-1}) & \cos(\alpha_{i-1}) & \cos(\alpha_{i-1})d_i \
0 & 0 & 0 & 1
\end{bmatrix}
\end{eqnarray}
$$
You will create one matrix for each link in your serial manipulator. Then you can multiply these matrices together to get the transform from base frame {0} to end effector frame {3} by: $T^0_3 = T^0_1 T^1_2 T^2_3$
Also, the general format of every homogeneous matrix is:
$$
T_{4x4} =
\begin{bmatrix}
R_{3x3} & t_{3x1} \
\begin{bmatrix} 0 & 0 & 0 \end{bmatrix} & 1
\end{bmatrix}
$$
where $R$ is your rotation matrix, $t$ is your translation, and $T$ is your homogeneous matrix. Note that later, when we do computer vision, you will see the homogeneous matrix referred to as $H$. Basically no one has agreed on what variables are called across different fields of study. So be flexible and try to recognize things for what they are ... you will see these again.
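A small numpy sketch of that layout (illustrative values only):
R3 = np.eye(3)                   # any valid 3x3 rotation
t = np.array([3.0, 0.0, 0.0])    # translation
T = np.eye(4)
T[:3, :3] = R3                   # upper-left rotation block
T[:3, 3] = t                     # upper-right translation column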
End of explanation
# Let's grab some libraries to help us manipulate symbolic equations
import sympy
from sympy import symbols, sin, cos, pi, simplify
def makeT(a, alpha, d, theta):
# create a modified DH homogeneous matrix
# this is the same matrix as above
return np.array([
[ cos(theta), -sin(theta), 0, a],
[sin(theta)*cos(alpha), cos(theta)*cos(alpha), -sin(alpha), -d*sin(alpha)],
[sin(theta)*sin(alpha), cos(theta)*sin(alpha), cos(alpha), d*cos(alpha)],
[ 0, 0, 0, 1]
])
def simplifyT(tt):
This goes through each element of a matrix and tries to simplify it.
ret = tt.copy()
for i, row in enumerate(tt):
for j, col in enumerate(row):
ret[i,j] = simplify(col)
return ret
def subs(tt, m):
This allows you to simplify the trigonometric mess that kinematics can
create and also substitute in some inputs in the process.
Yes, this is basically the same as above. I could combine these into one
function, but I wanted to be clearer about what I am doing.
ret = tt.copy()
for i, row in enumerate(tt):
for j, col in enumerate(row):
try:
ret[i,j] = col.subs(m)
except:
ret[i,j] = simplify(col)
return ret
# make thetas (t) and link lengths (a) symbolics
t1, t2 = symbols('t1 t2')
a1, a2 = symbols('a1 a2')
# let's create our matrices
T1 = makeT(0, 0, 0, t1)
T2 = makeT(a1, 0, 0, t2)
T3 = makeT(a2, 0, 0, 0)
# T13 = T1 * T2 * T3
T13 = T1.dot(T2.dot(T3))
print('T1 = ', T1)
print('T2 = ', T2)
print('T3 = ', T3)
print('\nSo the combined homogeneous matrix is:\n')
print('T13 = ', T13)
Explanation: Simple 2D Example
Following the process for assigning frames to a manipulator, you get the following:
Looking at the above frames, they are related by:
| Link | $a_{i-1}$ | $\alpha_{i-1}$ | $d_i$ | $\theta_i$ |
|------|-----------|----------------|-------|------------|
| 1 | 0 | 0 | 0 | $\theta_1$ |
| 2 | $a_1$ | 0 | 0 | $\theta_2$ |
| 3 | $a_2$ | 0 | 0 | 0 |
Now using Craig eqn 3.6, we can substitute these values in and get the relationship between the inertial frame and the end effector. However note, $\theta_i$ are variable parameters. Typically we would simplify these equations down leaving only the $\theta_i$ parameters. Let's use the python symbolic toolbox to generate the equations of motion.
End of explanation
ans = simplifyT(T13)
print(ans)
print('-'*25)
print('position x: {}'.format(ans[0,3]))
print('position y: {}'.format(ans[1,3]))
Explanation: So that looks a little messy with all of the sines and cosines ... let's use the python symbolic capabilities in sympy to help us reduce this a little and figure out where the end-effector is relative to the base (e.g. inertial frame).
End of explanation
# what if I wanted to substitute in an angle?
# just give it an array of tuples
ans = subs(T13, [(t1, 0)]) # here it is only t1, but I could do: [(t1, angle), (t2, angle)]
print(ans)
print('-'*25)
print('position x: {}'.format(ans[0,3]))
print('position y: {}'.format(ans[1,3]))
Explanation: Later, we will derive the same 2 link manipulator a different way and come up with the same equations for position of the end-effector in the x, y plane. For simple manipulators, DH is overkill, however, for most real manipulators (remember the DARPA videos), it is useful to understand what is going on if you do robotics.
End of explanation |
11,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Preprocessing for Machine Learning
Learning Objectives
* Understand the different approaches for data preprocessing in developing ML models
* Use Dataflow to perform data preprocessing steps
Introduction
In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.
Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.
One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations
Step1: Next, set the environment variables related to your GCP Project.
Step6: Create data preprocessing job with Cloud Dataflow
The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case, we'd have to remember to repeat that preprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform, which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.
It's worth noting that while we could read from BQ directly from TensorFlow, it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the GCP Console to view the status of the job. It will take several minutes for the preprocessing job to launch.
Define our query and pipeline functions
To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.
Step7: Then, we'll write the csv we create to a Cloud Storage bucket. So, we'll look to see that the location is empty, and if not clear out its contents so that it is.
Step9: Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma separated string for each record
Step11: Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our to_csv function above to make a comma separated string, then write to a file in Google Cloud Storage.
Exercise 1
In the code below, complete the pipeline to accomplish the tasks stated above. Have a look at the Apache Beam documentation to remind yourself how to read data from BigQuery and apply map functions. Then write the comma separated string to a file in Cloud Storage.
Step12: Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.
As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.
Run Beam pipeline locally
We'll start by testing our pipeline locally. This takes upto 5 minutes. You will see a message "Done" when it has finished.
Step13: Run Beam pipeline on Cloud Dataflow¶
Again, we'll clear out our bucket to GCS to ensure a fresh run.
Step14: The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of Cloud Console. Note, you can change the first arugment to "None" to process the full dataset.
Step15: Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files.
Step16: Develop a model with new inputs
We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.
Step17: To begin let's copy the model.py and task.py we developed in the previous notebooks here.
Step18: Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.
Step19: We can also see the engineered features that are created by the add_engineered_features function here.
Step20: We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.
Step21: We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.
Step22: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
Step23: To test out prediciton with out model, we create a temporary json file containing the expected feature values.
Step24: Train on the Cloud
This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.
Step25: Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.
Step26: As before, we can use the saved_model_cli to examine the exported signature.
Step27: And check out model's prediction with a local predict job on our test file. | Python Code:
#Ensure that we have the correct version of Apache Beam installed
!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0
import tensorflow as tf
import apache_beam as beam
import shutil
import os
print(tf.__version__)
Explanation: Data Preprocessing for Machine Learning
Learning Objectives
* Understand the different approaches for data preprocessing in developing ML models
* Use Dataflow to perform data preprocessing steps
Introduction
In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.
Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.
One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations:
1. Instance-level transformations
2. Full-pass transformations
3. Time-windowed aggregations
Cloud Dataflow can perform each of these types of operations and is particularly useful when performing computationally expensive operations as it is an autoscaling service for batch and streaming data processing pipelines. We'll say a few words about each of these below. For more information, have a look at this article about data preprocessing for machine learning from Google Cloud.
1. Instance-level transformations
These are transformations which take place during training and prediction, looking only at values from a single data point. For example, they might include clipping the value of a feature, polynomially expanding a feature, multiplying two features, or comparing two features to create a Boolean flag.
It is necessary to apply the same transformations at training time and at prediction time. Failure to do this results in training/serving skew and will negatively affect the performance of the model.
2. Full-pass transformations
These transformations occur during training, but occur as instance-level operations during prediction. That is, during training you must analyze the entirety of the training data to compute quantities such as maximum, minimum, mean or variance while at prediction time you need only use those values to rescale or normalize a single data point.
A good example to keep in mind is standard scaling (z-score normalization) of features for training. You need to compute the mean and standard deviation of that feature across the whole training data set, thus it is called a full-pass transformation. At prediction time you use those previously computed values to appropriately normalize the new data point. Failure to do so results in training/serving skew (a short code sketch of this case follows these three descriptions).
3. Time-windowed aggregations
These types of transformations occur during training and at prediction time. They involve creating a feature by summarizing real-time values by aggregating over some temporal window clause. For example, if we wanted our model to estimate the taxi trip time based on the traffic metrics for the route in the last 5 minutes, in the last 10 minutes or the last 30 minutes, we would want to create a time window to aggregate these values.
At prediction time these aggregations have to be computed in real-time from a data stream.
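To make the full-pass case above concrete, here is a small illustrative sketch (plain NumPy, separate from the Dataflow pipeline built later in this lab):
import numpy as np
train_feature = np.array([12.0, 7.5, 9.0, 15.5])   # analyzed once over the whole training set
mean, std = train_feature.mean(), train_feature.std()
def zscore(x):
    # applied as an instance-level operation at prediction time,
    # reusing the training statistics so training and serving match
    return (x - mean) / std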
Set environment variables and load necessary libraries
Apache Beam only works in Python 2 at the moment, so switch to the Python 2 kernel in the upper right hand side. Then execute the following cells to install the necessary libraries if they have not been installed already.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.13" # TF version for CMLE to use
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: Next, set the environment variables related to your GCP Project.
End of explanation
def create_query(phase, sample_size):
basequery =
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
EXTRACT(HOUR from pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1
if phase == "TRAIN":
subsample =
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)
elif phase == "VALID":
subsample =
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)
elif phase == "TEST":
subsample =
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)
query = basequery + subsample
return query.replace("EVERY_N", sample_size)
Explanation: Create data preprocessing job with Cloud Dataflow
The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case, we'd have to remember to repeat that preprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform, which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.
It's worth noting that while we could read from BQ directly from TensorFlow, it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the GCP Console to view the status of the job. It will take several minutes for the preprocessing job to launch.
Define our query and pipeline functions
To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.
End of explanation
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: Then, we'll write the csv we create to a Cloud Storage bucket. So, we'll look to see that the location is empty, and if not clear out its contents so that it is.
End of explanation
def to_csv(rowdict):
Arguments:
-rowdict: Dictionary. The beam bigquery reader returns a PCollection in
which each row is represented as a python dictionary
Returns:
-rowstring: a comma separated string representation of the record
days = ["null", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
CSV_COLUMNS = "fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat".split(',')
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
Explanation: Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma separated string for each record
End of explanation
import datetime
def preprocess(EVERY_N, RUNNER):
Arguments:
-EVERY_N: Integer. Sample one out of every N rows from the full dataset.
Larger values will yield smaller sample
-RUNNER: "DirectRunner" or "DataflowRunner". Specify whether to run the pipeline
locally or on Google Cloud respectively.
Side-effects:
-Creates and executes dataflow pipeline.
See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
job_name = "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/taxifare/ch4/taxi_preproc/".format(BUCKET)
#dictionary of pipeline options
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S"),
"project": PROJECT,
"runner": RUNNER
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags = [], **options)
#instantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ["TRAIN", "VALID", "TEST"]:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, "{}.csv".format(phase))
(
p | "read_{}".format(phase) >> # TODO: Your code goes here
| "tocsv_{}".format(phase) >> # TODO: Your code goes here
| "write_{}".format(phase) >> # TODO: Your code goes here
)
print("Done")
Explanation: Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our to_csv function above to make a comma separated string, then write to a file in Google Cloud Storage.
Exercise 1
In the code below, complete the pipeline to accomplish the tasks stated above. Have a look at the Apache Beam documentation to remind yourself how to read data from BigQuery and apply map functions. Then write the comma separated string to a file in Cloud Storage.
End of explanation
preprocess("50*10000", "DirectRunner")
Explanation: Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.
As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.
Run Beam pipeline locally
We'll start by testing our pipeline locally. This takes up to 5 minutes. You will see a message "Done" when it has finished.
End of explanation
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi
Explanation: Run Beam pipeline on Cloud Dataflow¶
Again, we'll clear out our bucket to GCS to ensure a fresh run.
End of explanation
preprocess("50*100", "DataflowRunner")
Explanation: The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of Cloud Console. Note, you can change the first argument to "None" to process the full dataset.
End of explanation
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" | head
Explanation: Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files.
End of explanation
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/VALID.csv-00000-of-*" > sample/valid.csv
Explanation: Develop a model with new inputs
We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.
End of explanation
%%bash
MODELDIR=./taxifaremodel
test -d $MODELDIR || mkdir $MODELDIR
cp -r ../../03_model_performance/taxifaremodel/* $MODELDIR
Explanation: To begin let's copy the model.py and task.py we developed in the previous notebooks here.
End of explanation
%%bash
grep -A 15 "feature_cols =" taxifaremodel/model.py
Explanation: Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.
End of explanation
%%bash
grep -A 5 "add_engineered_features(" taxifaremodel/model.py
Explanation: We can also see the engineered features that are created by the add_engineered_features function here.
End of explanation
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m taxifaremodel.task \
--train_data_path=${PWD}/sample/train.csv \
--eval_data_path=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
Explanation: We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.
End of explanation
%%bash
ls -R taxi_trained/export
Explanation: We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.
End of explanation
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
%%writefile /tmp/test.json
{"dayofweek": 0, "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ml-engine local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
Explanation: To test out prediction with our model, we create a temporary JSON file containing the expected feature values.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=taxifaremodel.task \
--package-path=${PWD}/taxifaremodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
--eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*" \
--train_steps=5000 \
--output_dir=$OUTDIR
Explanation: Train on the Cloud
This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
Explanation: Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.
End of explanation
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
Explanation: As before, we can use the saved_model_cli to examine the exported signature.
End of explanation
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ml-engine local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
Explanation: And check out our model's prediction with a local predict job on our test file.
End of explanation |
11,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering on the MovieLens Dataset
Learning objectives
1. Explore the data using BigQuery.
2. Use the model to make recommendations for a user.
3. Use the model to recommend an item to a group of users.
Introduction
This notebook is based on part of Chapter 9 of BigQuery
Step1: Exploring the data
Two tables should now be available in <a href="https
Step2: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
Step3: On examining the first few movies using the following query, we can see that the genres column is a formatted string
Step4: We can parse the genres into an array and rewrite the table as follows
Step5: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors are a low-dimensional representation of a user_id, and the item factors similarly represent an item_id.
Step6: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
Step7: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 1
Step8: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 2
Step9: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook. | Python Code:
import os
import tensorflow as tf
PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["TFVERSION"] = '2.6'
%%bash
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings gs://cloud-training/recommender-systems/movielens/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw gs://cloud-training/recommender-systems/movielens/movies.csv
Explanation: Collaborative filtering on the MovieLens Dataset
Learning objectives
1. Explore the data using BigQuery.
2. Use the model to make recommendations for a user.
3. Use the model to recommend an item to a group of users.
Introduction
This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.
MovieLens dataset
To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.
Download the data and load it as a BigQuery table using:
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.ratings
LIMIT 10
Explanation: Exploring the data
Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>.
Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.
End of explanation
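To make the "sparse table" framing concrete, here is a toy illustration with made-up ratings (not MovieLens data) of how the three-column table maps to a user-by-movie matrix with many missing entries:
import pandas as pd
# Toy illustration only -- made-up ratings, not MovieLens data.
toy = pd.DataFrame({"userId": [1, 1, 2], "movieId": [10, 20, 10], "rating": [4.0, 3.5, 5.0]})
print(toy.pivot(index="userId", columns="movieId", values="rating"))  # NaN marks unrated movies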
%%bigquery --project $PROJECT
SELECT
COUNT(DISTINCT userId) numUsers,
COUNT(DISTINCT movieId) numMovies,
COUNT(*) totalRatings
FROM movielens.ratings
Explanation: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies_raw
WHERE movieId < 5
Explanation: On examining the first few movies using the following query, we can see that the genres column is a formatted string:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies
WHERE movieId < 5
Explanation: We can parse the genres into an array and rewrite the table as follows:
End of explanation
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender_16 \
movielens.recommender
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)
Explanation: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors are a low-dimensional representation of a user_id, and the item factors similarly represent an item_id.
End of explanation
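The intuition, in plain numpy with toy numbers (not factors learned by the BigQuery ML model): a predicted rating is simply the dot product of a user's factor vector and a movie's factor vector.
import numpy as np
# Toy numbers for intuition only -- not factors learned by the BigQuery ML model.
user_factors = np.array([0.3, -1.2, 0.8])  # hypothetical embedding for one user
item_factors = np.array([1.1, -0.4, 0.5])  # hypothetical embedding for one movie
print(float(user_factors @ item_factors))  # the predicted rating for this user-movie pair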
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
End of explanation
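The %%bigquery magic is used throughout this notebook; for reference, an equivalent (illustrative) call from plain Python with the BigQuery client library would look roughly like this, reusing the query above:
from google.cloud import bigquery
# Illustrative alternative to the %%bigquery magic -- reuses the comedy-recommendation query above.
client = bigquery.Client(project=PROJECT)
sql = """
SELECT * FROM ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
  SELECT movieId, title, 903 AS userId
  FROM movielens.movies, UNNEST(genres) g
  WHERE g = 'Comedy'))
ORDER BY predicted_rating DESC
LIMIT 5
"""
df = client.query(sql).to_dataframe()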
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH seen AS (
SELECT ARRAY_AGG(movieId) AS movies
FROM movielens.ratings
WHERE userId = 903
)
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g, seen
WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 1: Make a prediction for user 903 that does not include already seen movies.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH allUsers AS (
SELECT DISTINCT userId
FROM movielens.ratings
)
SELECT
96481 AS movieId,
(SELECT title FROM movielens.movies WHERE movieId=96481) title,
userId
FROM
allUsers
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 2: Find the top five users who will likely enjoy American Mullet (2001)
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)
LIMIT 10
Explanation: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.
End of explanation |
11,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize the model
https
Step1: Show convolutional filters
Step2: Show activations with quiver
Install quiver | Python Code:
from keras.models import load_model,Model
import dogs_vs_cats as dvc
import numpy as np
modelname = "cnn_model_trained.h5"
cnn_model = load_model(modelname)
# Load some data
from keras.applications.imagenet_utils import preprocess_input
all_files = dvc.image_files()
all_files = np.array(all_files)
files_ten = all_files[np.random.choice(len(all_files),10)]
ten_img_features,ten_img_labels = dvc.load_image_set(files_ten,(3,50,50))
ten_img_features = preprocess_input(ten_img_features)
# Test that the network performs as well as before
results = cnn_model.evaluate(ten_img_features,ten_img_labels)
print(" ".join(["%s: %.4f"%(metric_name,valor) for metric_name,valor in zip(cnn_model.metrics_names,results)]))
# https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer
model_maxpooling1 = Model(input=cnn_model.input,
output=cnn_model.get_layer("maxpooling2d_1").output)
print(model_maxpooling1.input_shape,
model_maxpooling1.output_shape)
feat_max_pooling = model_maxpooling1.predict(ten_img_features)
feat_max_pooling.shape
from IPython.display import Image, display
img_show = 3
display(Image(files_ten[img_show]))
import matplotlib.pyplot as plt
%matplotlib notebook
fig, axs = plt.subplots(8,4,figsize=(10,15),sharex=True,sharey=True)
axs = axs.flatten()
feat_show = feat_max_pooling[img_show]
for ax,feats in zip(axs,feat_show):
ax.imshow(feats,cmap="gray")
ax.axis("off")
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Visualize the model
https://github.com/keplr-io/quiver
https://transcranial.github.io/keras-js
Show output (activations) of specific layer
End of explanation
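Note that the Model(input=..., output=...) keywords above belong to older Keras releases; in recent Keras versions the same intermediate-layer model is built with the plural keyword arguments, e.g.:
# Equivalent construction for newer Keras versions (plural keyword arguments):
# model_maxpooling1 = Model(inputs=cnn_model.input,
#                           outputs=cnn_model.get_layer("maxpooling2d_1").output)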
W,b = cnn_model.get_layer("convolution2d_1").get_weights()
W.shape,b.shape
W_show = (W-W.min())/(W.max()-W.min())
%matplotlib notebook
fig, axs = plt.subplots(8,4,figsize=(5,10),sharex=True,sharey=True)
axs = axs.flatten()
for ax,w in zip(axs,W_show):
ax.imshow(w)
from keras.applications.vgg16 import VGG16
import numpy as np
# https://keras.io/applications/#vgg16
vgg16_model = VGG16(weights='imagenet')
W,b = vgg16_model.get_layer("block1_conv1").get_weights()
W.shape,b.shape
%matplotlib notebook
W_show = (W-W.min())/(W.max()-W.min())
fig, axs = plt.subplots(8,8,figsize=(10,10),sharex=True,sharey=True)
axs = axs.flatten()
for ax,w in zip(axs,W_show):
ax.imshow(w)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Show convolutional filters
End of explanation
import os
import shutil
# copy some images to another dir to avoid saturating quiver:
dirname = "trains_subset"
if not os.path.exists(dirname):
os.mkdir(dirname)
all_files = dvc.image_files()
for img in all_files[np.random.choice(len(all_files),15)]:
shutil.copy(img,
os.path.join(dirname,os.path.basename(img)))
import quiver_engine.server as server
server.launch(cnn_model,classes=["dog","cat"],input_folder=dirname)
!ls
Explanation: Show activations with quiver
Install quiver:
pip install --no-deps git+git://github.com/jakebian/quiver.git
End of explanation |
11,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 17
Step1: Dictionaries are equivalent.
Step2: A dictionary cannot look up a key that does not exist.
Step3: You can check if a variable is in a dictionary.
Step4: You can use multiple commands to query dictionaries. Some of the basic ones are
Step5: You can use the values in for loops.
Step6: To check quickly if a key exists in a Dictionary, a useful command is
Step7: This is useful to avoid errors when keys aren't populated
Step8: Another useful command is
dictionary.setdefault('key', 'value')
which sets a value for a key only if that key is not already in the dictionary (a one-line shortcut for the 'if key not in dictionary' check).
Step9: Character Count Program
Step10: Without .setdefault() we would get a KeyError, because we try to read the count for a character that has not been added as a key yet
Step11: This method would work for a string of any conceivable length. For example, the entire text of Romeo and Juliet.
Step12: Now hand that to the Character Count Program to see if it works, and use pprint to do a 'pretty print'.
eggs = {
'name': 'Zophie',
'species': 'cat',
'age': 8
}
ham = {
'species': 'cat',
'name': 'Zophie',
'age': 8
}
print(eggs)
print(ham)
Explanation: Lesson 17:
The Dictionary Data Type
Switched to the Jupyter Notebook for REPL convenience.
Dictionaries use key-value pairs to store values in variables:
variable = {key: value}
End of explanation
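Individual values are read back using their keys, for example:
print(eggs['name'])  # 'Zophie'
print(eggs['age'])   # 8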
eggs == ham
Explanation: Dictionaries with the same key-value pairs are equivalent, regardless of the order in which the pairs were defined.
End of explanation
eggs['color']
Explanation: A dictionary cannot look up a key that does not exist; trying to do so raises a KeyError.
End of explanation
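The failed lookup above raises a KeyError; one way to handle a possibly missing key explicitly is a try/except block:
try:
    print(eggs['color'])
except KeyError:
    print('No color recorded for this cat.')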
'name' in eggs
'color' not in eggs
Explanation: You can check if a variable is in a dictionary.
End of explanation
list(eggs.keys())
list(eggs.values())
list(eggs.items())
Explanation: You can use multiple commands to query dictionaries. Some of the basic ones are:
list(dictionary.keys()) # lists keys
list(dictionary.values()) # lists values
list(dictionary.items()) # lists keys and values as tuples
Use list() to get actual list values, otherwise you get dictionary 'list-like' data types.
End of explanation
for v in eggs.values():
print(v)
for k, v in eggs.items():
print(k,v)
for i in eggs.items():
print(i)
Explanation: You can use the values in for loops.
End of explanation
eggs.get('age','0')
eggs.get('color','')
picnicItems = {'apples': 5, 'cups': 2}
Explanation: To quickly look up a key that might not exist, a useful method is:
dictionary.get('key', 'value to return if the key is not there'), which returns the fallback value instead of raising a KeyError.
End of explanation
print('I am bringing ' + str(picnicItems['napkins']) + ' napkins to the picnic.')
print('I am bringing ' + str(picnicItems.get('napkins', 0)) + ' napkins to the picnic.')
Explanation: This is useful to avoid errors when keys aren't populated:
End of explanation
eggs = {
'name': 'Zophie',
'species': 'cat',
'age': 8
}
print(eggs)
if 'color' not in eggs:
eggs['color'] = 'black'
print(eggs)
eggs = {
'name': 'Zophie',
'species': 'cat',
'age': 8
}
print(eggs)
eggs.setdefault('color','black')
print(eggs)
Explanation: Another useful method is
dictionary.setdefault('key', 'value')
which sets a value for a key only if that key is not already present (a one-line shortcut for the 'if key not in dictionary' check shown above).
End of explanation
message = 'It was a bright cold day in April, and the clocks were striking thirteen.'
count = {}
for character in message.upper(): # Use the upper() method so upper- and lowercase letters are not counted separately
    count.setdefault(character, 0) # Start the counter at 0 so there's no KeyError on the next line
    count[character] = count[character] + 1 # Increment the count every time this character appears
print(count)
Explanation: Character Count Program
End of explanation
count = {}
for character in message.upper(): # Use the upper() method so upper- and lowercase letters are not counted separately
    #count.setdefault(character, 0) # Start the counter at 0 so there's no KeyError on the next line
    count[character] = count[character] + 1 # Increment the count every time this character appears
print(count)
Explanation: Without .setdefault() we would get a KeyError, because count[character] is read for a character that has not been added as a key yet:
End of explanation
import requests # get the Requests library
link = "https://automatetheboringstuff.com/files/rj.txt" # store the link where the file is at
rj = requests.get(link) # use Requests to get the link
print(rj.text) # Print out the text from Requests, and check that its stored
Explanation: This method would work for a string of any conceivable length. For example, the entire text of Romeo and Juliet.
End of explanation
import pprint
count = {}
for character in rj.text.upper(): # Use the upper() method so upper- and lowercase letters are not counted separately
    count.setdefault(character, 0) # Start the counter at 0 so there's no KeyError on the next line
    count[character] = count[character] + 1 # Increment the count every time this character appears
pprint.pprint(count)
Explanation: Now hand that to the Character Count Program to see if it works, and use pprint to do a 'pretty print'.
End of explanation |
11,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
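For example, to convert scaled 'cnt' values back into real ridership counts later, the saved mean and standard deviation are applied in reverse:
# Undo the standardization for 'cnt' using the saved scaling factors
mean, std = scaled_features['cnt']
print(data['cnt'].head() * std + mean)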
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0 / (1.0 + np.exp(-x)) #
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error # f'(h) == 1 for output unit
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
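As a quick, standalone illustration of the two activation pieces described above (this is not part of the graded NeuralNetwork class):
import numpy as np
# Illustration only -- not part of the graded NeuralNetwork class.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sigmoid_prime = lambda x: sigmoid(x) * (1 - sigmoid(x))  # derivative used in the hidden-layer error term
print(sigmoid(0.0), sigmoid_prime(0.0))  # 0.5 0.25
# The output activation is f(x) = x, whose derivative is 1, so the output error term needs no extra factor.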
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 800
learning_rate = 0.8
hidden_nodes = 16
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
11,625 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two csr_matrix, c1, c2. | Problem:
from scipy import sparse
c1 = sparse.csr_matrix([[0, 0, 1, 0], [2, 0, 0, 0], [0, 0, 0, 0]])
c2 = sparse.csr_matrix([[0, 3, 4, 0], [0, 0, 0, 5], [6, 7, 0, 8]])
Feature = sparse.hstack((c1, c2)).tocsr() |
11,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Current Cases by Category
This notebook explores a methodology to estimate current mild, severe and critical patients. Both mild and critical categories appear to be correlated to independently reported categories from Italy's ministry of health.
Most of the reporting of COVID-19 data, including what is reported by the ECDC, focuses on the daily counts of new cases and deaths. While this is useful for tracking the general development of the disease, it provides little information about the capacity required by a health care system to cope with the case load. Here we explore a methodology to estimate total active cases (cases between onset and recovery / death) broken down by category.
Breakdown of cases
Early data from China classified the cases into mild, severe and critical with 80.9%, 13.8% and 4.7% of occurrences respectively (source). While this might be useful for categorising cases, it does not appear to match some outcome-based data like hospitalization rate where, as of March 26, Italy reported 43.2%, Spain 56.80% and New York 15% of all confirmed cases.
Such a wide range in hospitalization rates is potentially due to different criteria being used across health care systems as well as hitting potential system capacity limits. Therefore, the estimations performed here cannot be used as a proxy for hospitalization rates unless a country-dependent correcting factor is applied. However, using an estimate of 5% of all cases appears to be a good predictor for ICU admission rates, and 15% of all cases seem to correlate to a rate of confirmed cases that only experience mild symptoms.
Step1: Discharge time for severe vs critical cases
Unfortunately, early data from Chinese sources only reported a median stay of 12 days and a mean stay of 12.8 days for all hospitalizations without specifying which of them required ICU resources.
Since we know the ratio of severe vs critical cases, we only need to guess the discharge time of severe cases since there will only be one way to satisfy the constraint of overall hospitalization median and mean. Here, we plot the estimated discharge time for critical cases (y axis) given a discharge time for severe cases (x axis)
Step2: Because the data is reported daily, we can only pick an estimated whole number for both variables. The only possible value for hospital discharge days that would result in a median discharge time of 12 days is, unsurprisingly, 12. Then, the estimated ICU discharge days is 15 days.
Step3: Recovery time for mild cases
In order to estimate the current number of mild cases, there is one more number that we must guess
Step4: Estimating new cases breakdown
Using Italy's data up to March 26, assuming the proportion of each category remains constant for every new case, daily breakdowns can be estimated by multiplying the number of new cases by the ratios of the respective categories
Step5: Estimating current cases breakdown
Now that we have an estimate for the category breakdown of new cases as well as for the discharge time per category, we can estimate the number of current cases per category by adding up each category over a rolling window
Step6: Comparing with Italy's home care, hospitalizations and ICU counts
Italy categorises the cases into home care, hospitalizations and ICU admission, which appear to map well into mild, severe and critical categories. From Italy's ministry of health, this is the breakdown as of March 26 of cases compared to our model estimates | Python Code:
# Since reported numbers are approximate, they are rounded for the sake of simplicity
severe_ratio = .15
critical_ratio = .05
mild_ratio = 1 - severe_ratio - critical_ratio
Explanation: Estimating Current Cases by Category
This notebook explores a methodology to estimate current mild, severe and critical patients. Both mild and critical categories appear to be correlated to independently reported categories from Italy's ministry of health.
Most of the reporting of COVID-19 data, including what is reported by the ECDC, focuses on the daily counts of new cases and deaths. While this is useful for tracking the general development of the disease, it provides little information about the capacity required by a health care system to cope with the case load. Here we explore a methodology to estimate total active cases (cases between onset and recovery / death) broken down by category.
Breakdown of cases
Early data from China classified the cases into mild, severe and critical with 80.9%, 13.8% and 4.7% of occurrences respectively (source). While this might be useful for categorising cases, it does not appear to match some outcome-based data like hospitalization rate where, as of March 26, Italy reported 43.2%, Spain 56.80% and New York 15% of all confirmed cases.
Such a wide range in hospitalization rates is potentially due to different criteria being used across health care systems as well as hitting potential system capacity limits. Therefore, the estimations performed here cannot be used as a proxy for hospitalization rates unless a country-dependent correcting factor is applied. However, using an estimate of 5% of all cases appears to be a good predictor for ICU admission rates, and 15% of all cases seem to correlate to a rate of confirmed cases that only experience mild symptoms.
End of explanation
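As an illustration of that country-dependent correcting factor, Italy's reported 43.2% hospitalization rate versus this model's 20% severe-plus-critical share implies a factor of roughly 2.2:
# Illustration of the country-dependent correcting factor mentioned above.
italy_hospitalization_rate = 0.432  # reported as of March 26
estimated_hospitalized_share = severe_ratio + critical_ratio  # 0.20 in this model
print(round(italy_hospitalization_rate / estimated_hospitalized_share, 2))  # ~2.16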
import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
# Data from early Chinese reports
mean_discharge_time = 12.8
severe_ratio_norm = severe_ratio / (severe_ratio + critical_ratio)
critical_ratio_norm = critical_ratio / (severe_ratio + critical_ratio)
def compute_icu_discharge_time(severe_discharge_time, mean_discharge_time: float = 12.8):
''' Using mean discharge time from https://www.cnn.com/2020/03/20/health/covid-19-recovery-rates-intl/index.html '''
return (mean_discharge_time - severe_discharge_time * severe_ratio_norm) / critical_ratio_norm
X = np.linspace(10, 15, 100)
y = np.array([compute_icu_discharge_time(x) for x in X])
X_name = 'Hospitalization time for severe cases (days)'
y_name = 'Hospitalization time for critical cases (days)'
df = pd.DataFrame([(x, y_) for x, y_ in zip(X, y)], columns=[X_name, y_name]).set_index(X_name)
df.plot(figsize=(16, 8), grid=True, ylim=(0, max(y)));
Explanation: Discharge time for severe vs critical cases
Unfortunately, early data from Chinese sources only reported a median stay of 12 days and a mean stay of 12.8 days for all hospitalizations without specifying which of them required ICU resources.
Since we know the ratio of severe vs critical cases, we only need to guess the discharge time of severe cases since there will only be one way to satisfy the constraint of overall hospitalization median and mean. Here, we plot the estimated discharge time for critical cases (y axis) given a discharge time for severe cases (x axis):
End of explanation
severe_recovery_days = 12
critical_recovery_days = 15
Explanation: Because the data is reported daily, we can only pick an estimated whole number for both variables. The only possible value for hospital discharge days that would result in a median discharge time of 12 days is, unsurprisingly, 12. Then, the estimated ICU discharge days is 15 days.
End of explanation
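A quick sanity check of the chosen whole numbers against the reported mean stay:
# Weighted mean stay implied by the chosen whole-number discharge times.
weighted_mean_stay = 12 * severe_ratio_norm + 15 * critical_ratio_norm
print(round(weighted_mean_stay, 2))  # ~12.75, close to the reported 12.8 days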
mild_recovery_days = 7
Explanation: Recovery time for mild cases
In order to estimate the current number of mild cases, there is one more number that we must guess: how many days it takes for recovery, on average, for all cases that are not severe or critical. Reported recovery times from a COVID-19 infection range from 2 weeks (source) to "a week to 10 days" (source). After experimenting with several choices, using a median recovery time of 7 days appears to match empirical data from multiple official reports.
End of explanation
import pandas as pd
# Load country-level data for Italy
data = pd.read_csv('https://storage.googleapis.com/covid19-open-data/v2/IT/main.csv').set_index('date')
# Estimate daily counts per category assuming ratio is constant
data = data[data.index <= '2020-03-27']
data['new_mild'] = data['new_confirmed'] * mild_ratio
data['new_severe'] = data['new_confirmed'] * severe_ratio
data['new_critical'] = data['new_confirmed'] * critical_ratio
data = data[['new_confirmed', 'new_deceased', 'new_mild', 'new_severe', 'new_critical']]
data.tail()
Explanation: Estimating new cases breakdown
Using Italy's data up to March 26, assuming the proportion of each category remains constant for every new case, daily breakdowns can be estimated by multiplying the number of new cases by the ratios of the respective categories:
End of explanation
data['current_mild'] = data['new_mild'].rolling(round(mild_recovery_days)).sum()
data['current_severe'] = data['new_severe'].rolling(round(severe_recovery_days)).sum()
data['current_critical'] = data['new_critical'].rolling(round(critical_recovery_days)).sum()
data.tail()
Explanation: Estimating current cases breakdown
Now that we have an estimate for the category breakdown of new cases as well as for the discharge time per category, we can estimate the number of current cases per category by adding up each category over a rolling window:
End of explanation
estimated = data.iloc[-1]
reported = pd.DataFrame.from_records([
{'Category': 'current_mild', 'Count': 30920},
{'Category': 'current_severe', 'Count': 23112},
{'Category': 'current_critical', 'Count': 3489},
]).set_index('Category')
pd.DataFrame.from_records([
{
'Category': col,
'Estimated': '{0:.02f}'.format(estimated[col]),
'Reported': reported.loc[col, 'Count'],
'Difference': '{0:.02f}%'.format(100.0 * (estimated[col] - reported.loc[col, 'Count']) / reported.loc[col, 'Count']),
}
for col in reported.index.tolist()
]).set_index('Category')
Explanation: Comparing with Italy's home care, hospitalizations and ICU counts
Italy categorises the cases into home care, hospitalizations and ICU admission, which appear to map well into mild, severe and critical categories. From Italy's ministry of health, this is the breakdown as of March 26 of cases compared to our model estimates:
End of explanation |
11,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still | Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
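For example, here is a quick numeric check of formula (1) that you can try yourself (a small illustration, not part of the graded code), using $J(\theta) = \theta^2$ so the true derivative is $2\theta$:
theta, eps = 3.0, 1e-7
gradapprox = ((theta + eps)**2 - (theta - eps)**2) / (2 * eps)
print(gradapprox)   # very close to 6.0, the true derivative at theta = 3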
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)# Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
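If np.linalg.norm is new to you, this tiny check (an illustration only, not part of the graded function) shows what it computes:
import numpy as np
print(np.linalg.norm(np.array([3.0, 4.0])))   # Euclidean length of the vector: 5.0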
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
Checks if backward_propagation_n correctly computes the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
X -- input datapoint, of shape (input size, number of examples)
Y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary( thetaplus )) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary( thetaminus )) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)# Step 2'
difference = np.divide(numerator, denominator) # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
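If you are curious what such a flattening looks like, a hand-rolled sketch might be the following (an illustration only -- the assignment's dictionary_to_vector() is the function actually used, and its exact ordering may differ):
import numpy as np
def flatten_parameters(parameters, keys=("W1", "b1", "W2", "b2", "W3", "b3")):
    # reshape each array into a column vector and stack them into one long vector
    return np.concatenate([parameters[k].reshape(-1, 1) for k in keys], axis=0)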
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using forward_propagation_n(X, Y, vector_to_dictionary($\theta^{+}$)).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation |
11,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing Thanksgiving Dinner
This notebook analyzes Thanksgiving dinner in the US. The dataset contains 1058 responses to an online survey about what Americans eat for Thanksgiving dinner, along with some demographic questions, like gender, income, and location. Using this dataset we can discover regional and income-based patterns in what Americans eat for Thanksgiving dinner.
Step1: We need to filter out the people who didn't celebrate Thanksgiving.
Step2: What main dishes do people eat at Thanksgiving?
Step3: How many people ate apple, pumpkin or pecan pie at Thanksgiving?
Step4: 876 people ate at least one of those pies (the combined isnull check was False for them), and 182 people didn't.
Step5: 82% of people ate pies.
What is the mean age of people celebrating Thanksgiving?
Step6: We used the bottom of each age interval, so our mean would be an underestimate of the mean age.
What is the average income of people celebrating Thanksgiving?
Step7: Using the bottom of each range of income would likely underestimate the mean. Also, we don't know where within each range the typical respondent falls.
Is Travel Distance related to Income?
Let's determine how distance traveled for Thanksgiving dinner relates to income level. Our hypothesis is that people earning less money could be younger, and would travel to their parents' houses for Thanksgiving. People earning more are more likely to have Thanksgiving at their house as a result.
Step8: A greater proportion of people making higher income had Thanksgiving in their homes.
Are people who meet up with friends for Thanksgiving younger?
Step9: People who have met up with friends are generally younger (ie. 34 vs. 41 for people who haven't).
What is the most commonly eaten dessert?
Step10: The most common dessert is ice cream followed by cookies.
Are there regional patterns in dinner menus? | Python Code:
import pandas as pd
data = pd.read_csv("thanksgiving.csv", encoding = 'Latin-1')
data.head()
data.columns
data['Do you celebrate Thanksgiving?'].value_counts()
Explanation: Analyzing Thanksgiving Dinner
This notebook analyzes Thanksgiving dinner in the US. The dataset contains 1058 responses to an online survey about what Americans eat for Thanksgiving dinner, along with some demographic questions, like gender, income, and location. Using this dataset we can discover regional and income-based patterns in what Americans eat for Thanksgiving dinner.
End of explanation
filter_yes = data['Do you celebrate Thanksgiving?'] == "Yes"
data = data.loc[filter_yes]
data
Explanation: We need to filter out the people who didn't celebrate Thanksgiving.
End of explanation
data['What is typically the main dish at your Thanksgiving dinner?'].value_counts()
filter_tofurkey = data['What is typically the main dish at your Thanksgiving dinner?'] == 'Tofurkey'
tofurkey = data.loc[filter_tofurkey]
tofurkey['Do you typically have gravy?'].value_counts()
Explanation: What main dishes do people eat at Thanksgiving?
End of explanation
apple_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Apple'].isnull()
pumpkin_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pumpkin'].isnull()
pecan_isnull = data['Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pecan'].isnull()
ate_pies = apple_isnull & pumpkin_isnull & pecan_isnull
ate_pies.value_counts()
Explanation: How many people ate apple, pumpkin or pecan pie at Thanksgiving?
End of explanation
proportion = 876/(876+182)
proportion
Explanation: 876 people ate at least one of those pies (the combined isnull check was False for them), and 182 people didn't.
End of explanation
import numpy as np
def convert_age(value):
if pd.isnull(value):
return None
str2 = value.split(' ')[0]
str2 = str2.replace('+',' ')
return int(str2)
data['int_age'] = data['Age'].apply(convert_age)
np.mean(data['int_age'])
Explanation: 82% of people ate pies.
What is the mean age of people celebrating Thanksgiving?
End of explanation
import numpy as np
def convert_income(value):
if pd.isnull(value):
return None
if value == 'Prefer not to answer':
return None
str2 = value.split(' ')[0]
str2 = str2.replace('$', '')
str2 = str2.replace(',', '')
return int(str2)
data['int_income'] = data['How much total combined money did all members of your HOUSEHOLD earn last year?'].apply(convert_income)
data_income = data['int_income'].dropna()
data_income.describe()
Explanation: We used the bottom of each age interval, so our mean would be an underestimate of the mean age.
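A midpoint-based estimate would reduce that bias. Here is a sketch (assuming the survey encodes ages as ranges such as "18 - 29", with "60+" for the open-ended bucket):
def convert_age_midpoint(value):
    if pd.isnull(value):
        return None
    nums = [int(part) for part in value.replace('+', '').split('-')]
    return sum(nums) / len(nums)
data['mid_age'] = data['Age'].apply(convert_age_midpoint)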
What is the average income of people celebrating Thanksgiving?
End of explanation
less_than_50k = data['int_income'] < 50000
filter_less_50k = data[less_than_50k]
filter_less_50k['How far will you travel for Thanksgiving?'].value_counts()
over_150k = data['int_income'] > 150000
filter_over_150k = data[over_150k]
filter_over_150k['How far will you travel for Thanksgiving?'].value_counts()
proportion_home_50k = 106/(106 + 92 + 64 + 16)
proportion_home_150k = 49/(49 + 25 + 16 + 12)
print(proportion_home_50k)
print(proportion_home_150k)
Explanation: Using the bottom of each range of income would likely underestimate the mean. Also, we don't know where within each range the typical respondent falls.
Is Travel Distance related to Income?
Let's determine how distance traveled for Thanksgiving dinner relates to income level. Our hypothesis is that people earning less money could be younger, and would travel to their parents' houses for Thanksgiving. People earning more are more likely to have Thanksgiving at their house as a result.
End of explanation
thanksgiving_meet_friends = 'Have you ever tried to meet up with hometown friends on Thanksgiving night?'
thanksgiving_friendsgiving = 'Have you ever attended a "Friendsgiving?"'
data.pivot_table(index = thanksgiving_meet_friends, columns = thanksgiving_friendsgiving, values = "int_age")
Explanation: A greater proportion of people making higher income had Thanksgiving in their homes.
Are people who meet up with friends for Thanksgiving younger?
End of explanation
import re
desserts = 'Which of these desserts do you typically have at Thanksgiving dinner?'
column_headings = data.columns
desserts_index = [i for i,x in enumerate(column_headings) if re.match(desserts,x)]
counts = data[desserts_index].count()
counts.sort_values(ascending = False)
Explanation: People who have met up with friends are generally younger (ie. 34 vs. 41 for people who haven't).
What is the most commonly eaten dessert?
End of explanation
region_counts = data['US Region'].value_counts().sort_index()
print(region_counts)
data["US Region"] = data["US Region"].astype("category")
data["US Region"].cat.set_categories(['East North Central', 'East South Central', 'Middle Atlantic',
'Mountain','New England','Pacific','South Atlantic',
'West North Central','West South Central'],inplace=True)
data["What is typically the main dish at your Thanksgiving dinner?"] = data["What is typically the main dish at your Thanksgiving dinner?"].astype("category")
data["What is typically the main dish at your Thanksgiving dinner?"].cat.set_categories(['Chicken', 'I don\'t know','Other (please specify)', 'Roast beef', 'Tofurkey', 'Turducken', 'Turkey', 'Ham/Pork'],inplace=True)
main_dish = pd.pivot_table(data,index = [ "What is typically the main dish at your Thanksgiving dinner?"], columns =['US Region'], values=["RespondentID"], aggfunc=lambda x: len(x.unique()), fill_value = 0, margins = True )
main_dish_normalized = main_dish.div( main_dish.iloc[-1,:], axis=1 )
main_dish_normalized
Explanation: The most common dessert is ice cream followed by cookies.
Are there regional patterns in dinner menus?
End of explanation |
11,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook will perform analysis of functional connectivity on simulated data.
Step1: Now let's add on an activation signal to both voxels
Step2: How can we address this problem? A general solution is to first run a general linear model to remove the task effect and then compute the correlation on the residuals.
Step3: What happens if we get the hemodynamic model wrong? Let's use the temporal derivative model to generate an HRF that is lagged compared to the canonical.
Step4: Let's see if using a more flexible basis set, like an FIR model, will allow us to get rid of the task-induced correlation. | Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../utils')
from mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from make_data import make_continuous_data
data=make_continuous_data(N=200)
print('correlation without activation:',numpy.corrcoef(data.T)[0,1])
plt.plot(range(data.shape[0]),data[:,0],color='blue')
plt.plot(range(data.shape[0]),data[:,1],color='red')
Explanation: This notebook will perform analysis of functional connectivity on simulated data.
End of explanation
design_ts,design=create_design_singlecondition(blockiness=1.0,offset=30,blocklength=20,deslength=data.shape[0])
regressor,_=compute_regressor(design,'spm',numpy.arange(0,len(design_ts)))
regressor*=50.
data_act=data+numpy.hstack((regressor,regressor))
plt.plot(range(data.shape[0]),data_act[:,0],color='blue')
plt.plot(range(data.shape[0]),data_act[:,1],color='red')
print ('correlation with activation:',numpy.corrcoef(data_act.T)[0,1])
Explanation: Now let's add on an activation signal to both voxels
End of explanation
X=numpy.vstack((regressor.T,numpy.ones(data.shape[0]))).T
beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_act)
y_est=X.dot(beta_hat)
resid=data_act - y_est
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
Explanation: How can we address this problem? A general solution is to first run a general linear model to remove the task effect and then compute the correlation on the residuals.
End of explanation
regressor_td,_=compute_regressor(design,'spm_time',numpy.arange(0,len(design_ts)))
regressor_lagged=regressor_td.dot(numpy.array([1,0.5]))*50
plt.plot(regressor_lagged)
plt.plot(regressor)
data_lagged=data+numpy.vstack((regressor_lagged,regressor_lagged)).T
beta_hat_lag=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_lagged)
plt.subplot(211)
y_est_lag=X.dot(beta_hat_lag)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_lag
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
plt.subplot(212)
plt.plot(resid)
Explanation: What happens if we get the hemodynamic model wrong? Let's use the temporal derivative model to generate an HRF that is lagged compared to the canonical.
End of explanation
regressor_fir,_=compute_regressor(design,'fir',numpy.arange(0,len(design_ts)),fir_delays=range(28))
regressor_fir.shape
X_fir=numpy.vstack((regressor_fir.T,numpy.ones(data.shape[0]))).T
beta_hat_fir=numpy.linalg.inv(X_fir.T.dot(X_fir)).dot(X_fir.T).dot(data_lagged)
plt.subplot(211)
y_est_fir=X_fir.dot(beta_hat_fir)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_fir
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
plt.subplot(212)
plt.plot(resid)
Explanation: Let's see if using a more flexible basis set, like an FIR model, will allow us to get rid of the task-induced correlation.
End of explanation |
11,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
Step1: Split this string
Step2: Given the variables
Step3: Given this nested list, use indexing to grab the word "hello"
Step4: Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
Step5: What is the main difference between a tuple and a list?
Step6: Create a function that grabs the email website domain from a string in the form
Step7: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
Step8: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
Step9: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example
Step10: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results
Step11: Great job! | Python Code:
7**4
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
End of explanation
s = "Hi there Sam!"
s.split()
Explanation: Split this string:
s = "Hi there Sam!"
into a list.
End of explanation
planet = "Earth"
diameter = 12742
print("The diameter of {} is {} kilometers.".format(planet,diameter))
Explanation: Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers.
End of explanation
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2]
Explanation: Given this nested list, use indexing to grab the word "hello"
End of explanation
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
Explanation: Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
End of explanation
# Tuple is immutable
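# A quick demonstration of that difference:
my_list = [1, 2, 3]
my_list[0] = 99       # lists are mutable, so this works
my_tuple = (1, 2, 3)
# my_tuple[0] = 99    # tuples are immutable: uncommenting this raises a TypeError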
na = "[email protected]"
na.split("@")[1]
Explanation: What is the main difference between a tuple and a list?
End of explanation
def domainGet(name):
return name.split("@")[1]
domainGet('[email protected]')
Explanation: Create a function that grabs the email website domain from a string in the form:
[email protected]
So for example, passing "[email protected]" would return: domain.com
End of explanation
def findDog(sentence):
    # lower-case the sentence so 'Dog' and 'DOG' are also found
    x = sentence.lower().split()
    for item in x:
        if item == "dog":
            return True
    return False
findDog('Is there a dog here?')
Explanation: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
End of explanation
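# countDog is not defined in the notebook as captured; a minimal
# implementation consistent with the exercise statement would be:
def countDog(sentence):
    return sentence.lower().split().count("dog")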
countDog('This dog runs faster than the other dog dude!')
Explanation: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
End of explanation
seq = ['soup','dog','salad','cat','great']
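# One possible answer to the exercise described below:
list(filter(lambda word: word[0] == 's', seq))   # ['soup', 'salad']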
Explanation: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad']
End of explanation
def caught_speeding(speed, is_birthday):
    # on your birthday you get an extra 5 mph of allowance
    if is_birthday:
        speed = speed - 5
    if speed <= 60:
        return "No ticket"
    elif speed >= 61 and speed <= 80:
        return "Small ticket"
    else:
        return "Big ticket"
caught_speeding(81,False)
caught_speeding(81,True)
lst = ["7:00","7:30"]
Explanation: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases.
End of explanation
lst
type(lst)
type(lst[1])
Explanation: Great job!
End of explanation |
11,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sending Secret Messages with Python
This notebook will teach you how to send secret messages to your friends using a computer language called "Python." Python is used by thousands of programmers around the world to create websites and video games, to do science and math, even this web page is written using Python!
Before we learn any Python, we need to learn just a bit about secret codes. One way to make your message secret is using a "substitution cipher." This means that we replace every letter in our message with a different letter. For example, all of the c's in our message might become j's. The message will look like garbled nonsense to anyone who doesn't know which letter has been substituted. By the time you're done with this page, you'll be able to make this message
this is a super secret message
Look like this
pkbm bm t mexuu mueuup nummtcu
Only someone who knows the password will be able to turn it back into the original message. (By the way, did you notice above that everywhere there's an 's' in the original message, the coded message has an 'm'?)
To use a substitution cipher, the first thing to do is to decide which letter will replace which. We'll create a table like this one
Step1: You can also change the code in the boxes before you run them. Try replacing "Levi" with your name in the box above. Now, while the cursor is still in the box, push Ctrl+Enter to see Python echo it back. From here on out, make a habit of executing each box in this notebook with Ctrl-Enter.
Now that you know a little bit about Jupyter notebooks, we're ready to learn some Python!
Python Dictionaries
To get Python to do this work for us, the first thing to do is to decide on a table and create a 'dictionary' in Python that keeps track of which letters get replaced by which new letters. A Python dictionary is a lot like the two-column table we've been talking about. We create an empty table like this
Step2: Don't go on until you click into the box above and push Ctrl+Enter. You should see {} appear below the box. One more thing about Jupyter notebooks. Each time you execute a box, Python remembers what you've done. Here, we created a dictionary called "table." When we refer to "table" in the next box, Python will remember that "table" is an empty dictionary. The code in the box below adds one entry to your dictionary with the letter 'a' in the left column and 'n' in the right column. Try it out by pushing Ctrl-Enter in the box below.
Step3: Don't be afraid to change the code in the boxes. You'll learn a lot by playing around, so don't be shy. Make some changes to the code in the boxes, then push Ctrl-Enter to see what you changed. You might get an error from Python. That's OK. See if you can figure out how to fix it. As you play around, remember that the notebook is remembering your changes. If you want to clear the notebook's memory and start again, look for the menu option above called "Kernel." Select "Restart" from that sub menu.
Quick tip
Step4: The first line, newletter = table['a'], tells Python to read the entry we gave it for 'a' and call it by a new name, newletter. In this case, table['a'] is 'n'. After that, everytime we refer to newletter Python will remember that we told it newletter is 'n'. So, when we ask Python to print newletter, it remembers and prints 'n'. newletter is what programmers call a 'variable'--a name that refers, in your program, to some other value.
A few more things about our Python tables (also called 'dictionaries'). The values on the left hand side of the table are called the "keys." The values on the right hand side are called the "values." Let's add a few rows to our table, then ask Python some questions. Push Ctrl-Enter in the boxes below and see what Python answers.
Step5: The colon (
Step6: Python didn't like that and sent us back a KeyError. To avoid these errors, we can first check to see whether that table entry exists.
Step7: We just asked Python, "Is d one of the keys in our table?" Python said, "No."
Now, let's create a table entry for 'd' and ask again.
Step8: A key can only appear in the left column one time. If I try to assign one letter on the left column to two different letters on the right, the dictionary will store the last value.
Step9: The old table entry for 'd' is lost and replaced by the new value. Note that more than one key can correspond to a single value. When we assigned 'd' to be changed to 'n', both 'a' and 'd' were being translated to 'n'. That's perfectly OK.
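Here is a compact sketch of that behaviour that you can paste into a box and run (the letters are chosen just for illustration):
table = {}
table['a'] = 'n'
table['d'] = 'x'
table['d'] = 'n'      # reassigning 'd' keeps only the last value
print(table)          # {'a': 'n', 'd': 'n'} -- two different keys can share one value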
Python Lists
Way to go! Hopefully, you're feeling comfortable with dictionaries. Make sure to execute the code in the boxes as you go by clicking into the box and pushing Ctrl-Enter.
Next, we'll take a look at lists. A python list is like a dictionary where the keys are numbers, specifically integers (i.e. 0,1,2,3,4,5...). We can create a list like this
Step10: The list works a lot like a dictionary with numbers in the left column. We can ask for specific letters in the list by referring to their position in the list. Try this
Step11: Wait, but isn't 'e' the third letter in the list? It's important to remember that computers start counting from zero.
Step12: In a Python list, position 0 is the very start of the list. Now, there are 4 letters in the list. What do you think will happen if we ask for the letter in position 4?
Step13: What do you think this error means? (Hint
Step14: Our string acts just like a list
Step15: The third letter in our string is 'e'. We ask Python for letter number 2 since Python starts counting from zero.
Even the spaces in our string are considered letters.
Step16: The output above just looks like two quote marks, but there's actually a space between them. The 4th "letter" in our string is actually a space.
Take a minute to write some code yourself involving strings. You can edit the code in the box above and push Ctrl-Enter to execute it. Can you figure out which letter is 'q'? Or 'b'? What if you pick a number that's too large, like 100?
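If you want a starting point, here is a tiny sketch to play with (it uses the same sentence as above):
word = "the quick brown fox jumped over the lazy dogs"
print(word[0])      # 't' -- counting starts from zero
print(word[3])      # ' ' -- the space counts as a letter too
print(word[4])      # 'q'
# print(word[100])  # uncommenting this raises an IndexError because the string is shorter than that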
Creating a cipher table from a sentence
Let's create a table we can use to encode and decode our messages. I want to change all of my a's into t's. How can we create a Python dictionary to do that?
Step17: Next, I'll add a row for changing all of the b's.
Step18: In fact, what I'd like to do is to take an easy to remember phrase and use it to determine the right hand side of my table. I'd like to put 'abcdefghijklmnopqrstuvwxyz' down the left hand column and 'the quick brown fox jumped over the lazy dogs' down the right hand side. So 'a' changes to 't'. 'b' changes to 'h'. 'c' changes to 'e' and so on.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td>
<tr><td>a</td><td>t</td>
<tr><td>b</td><td>h</td>
<tr><td>c</td><td>e</td>
<tr><td>d</td><td>q</td>
<tr><td>e</td><td>u</td>
<tr><td>f</td><td>i</td>
<tr><td>g</td><td>c</td>
<tr><td>h</td><td>k</td>
<tr><td>i</td><td>b</td>
<tr><td>...</td><td>...</td>
</table></html>
How can I do this in Python? We can write a loop to create a dictionary from two strings. Let's build the same string as before.
Step19: Just like before, we can treat it like a list of letters.
Step20: Now, let's create another string like this.
Step21: We're going to use these two strings to create a table like this
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td>
<tr><td>a</td><td>t</td>
<tr><td>b</td><td>h</td>
<tr><td>c</td><td>e</td>
<tr><td>d</td><td>q</td>
<tr><td>e</td><td>u</td>
<tr><td>f</td><td>i</td>
<tr><td>g</td><td>c</td>
<tr><td>h</td><td>k</td>
<tr><td>i</td><td>b</td>
<tr><td>j</td><td>r</td>
<tr><td>k</td><td>o</td>
<tr><td>l</td><td>w</td>
<tr><td>m</td><td>n</td>
<tr><td>n</td><td>f</td>
<tr><td>o</td><td>o</td>
<tr><td>p</td><td>x</td>
<tr><td>q</td><td>j</td>
<tr><td>r</td><td>u</td>
<tr><td>s</td><td>m</td>
<tr><td>t</td><td>p</td>
<tr><td>u</td><td>e</td>
<tr><td>v</td><td>d</td>
<tr><td>w</td><td>o</td>
<tr><td>x</td><td>v</td>
<tr><td>y</td><td>e</td>
<tr><td>z</td><td>r</td>
</table></html>
To create this list in Python, we could do this
Step22: But what if someone discovered our passphrase ("the quick brown fox...")? We'd have to change each of the individual letters. Instead, we can tell Python to read the letters from the strings, everyletter and sentence, that we created above. Take a look at this. We're using some fancier Python tricks that I'll explain in a minute.
Step23: This requires some explanation. The first line, I think you'll remember, creates an empty table or dictionary. Then, when we write
table[ everyletter[0]] = sentence[0]
Python will look inside the square brackets, [], following "table[" to see which letter to put in the left hand column. Instead of a letter, Python sees everyletter[0]. To Python, this means the first letter in the string called "everyletter"--a. So, Python puts 'a' in the left hand column.
Next, it looks for what to put in the right column after the equal sign. There, Python sees
sentence[0]
which it knows means the first letter in the string "sentence" or 't'. So, Python creates a row in the table with 'a' in the left column and 't' in the right column. When we start encoding messages, we'll turn all the a's into t's.
So, Python will act as if we wrote
table['a'] = 't'
adding one entry to our translation table.
The line
table[ everyletter[1]] = sentence[1]
creates another new row with the second letter in "everyletter" and the second letter in "sentence." Can you figure out what will be in the left and right columns of this new row?
This is tricky. When learning new programming ideas, it's important to play around a little. Play around with adding rows to your table. What would happen if I wrote
Step24: What changed? Why? This is another good chance to play around by writing your own code.
Loops in Python
Instead of writing 26 lines like table[ everyletter[14]] = sentence[14], I'd like to write it just once, then have the computer repeat it 26 times increasing the number each time. For this, we use a "for loop." Try this
Step25: Don't worry if this isn't clear yet. We'll take a good look at this. The first line creates an empty dictionary. The next line is the loop
for num in range(26)
Step26: Here, we gave Python a list of three names. For each name on the list, Python executes the commands that follow, replacing "name" with one of the names in the list. This is the first time we've seen + in Python, but it's an easy one. "Hi, " + name, just means that we add each name in the list to the end of "Hi, " and send that to print.
Try editing the loop above. Add your name (maybe it's Homer?). Make a list of your favorite sports teams and make Python cheer for them.
One tricky rule in Python involves indenting. It's possible to put lots of commands inside one for loop, but Python needs to be able to know which commands are in the loop and which are not. Each line that we want to be part of the loop, we indent, but we don't indent the for the loop statement itself. When Python sees a line that's not indented, it knows you've finished writing the loop. Take a look at this. Change the indenting and see what happens. What if you don't indent the "I'll stop you"? What if you do indent "You'll never get away with this"? Try it!
Step27: The spaces before the print statements tell us whether they're part of the loop to be executed once for each villain, or just a lone command to be executed only once.
What about this?
Step28: Python didn't like that. The line that says print("I'll stop you!") was not indented. So, Python thought you were done with that loop. When it saw that print("You'll never get away with this.") was indented, it didn't know what loop it was part of. So, it sent an error message. Errors are part of programming. They can be intimidating, but it can help you to sort out the problem if you read the error message. Here, Python told us "unexpected indent" and referred to print("You'll never get away with this."). It clearly didn't expect that line to be indented.
Using Loops to make a table
Now let's use our loops to create a code table
Step29: The "range" function just generates a list of numbers beginning with zero. The code below creates a list from 0 to 4 (5 total numbers), then prints each one.
Step30: So, to make our table, we make a list of numbers from 0 to 25, then for each number, Python repeats the command
table[ everyletter[num] ] = sentence[num]
but it replaces 'num' with each of the numbers in the list. So it's exactly the same as writing
table[ everyletter[0] ] = sentence[0]
table[ everyletter[1] ] = sentence[1]
table[ everyletter[2] ] = sentence[2]
table[ everyletter[3] ] = sentence[3]
table[ everyletter[4] ] = sentence[4]
table[ everyletter[5] ] = sentence[5]
...
(skip a few)
...
table[ everyletter[25] ] = sentence[25]
except we only had to write those two lines. Much easier! (P.S. Why just 25? What happened to 26?)
We can combine everything we've done so far into one script. Read it through carefully and see if you can describe what each line does.
Step31: BREAK TIME!
Congratulations! You're about halfway to the end. Take a minute to stand up, stretch your legs and savor your new knowledge of Python dictionaries, lists and loops. Then, push Ctrl-Enter in the box below to watch the Minions. When you're done, we'll use the table we just created to encode our message.
Step32: Using our table to encode
Welcome back! We're almost ready to encode our message using our table. Start with this for loop. Before you hit Ctrl-Enter, see if you can predict how Python will respond. Then run it and see if you were right.
Step33: That's not very secret yet. This time, instead of printing out each letter of our message, let's look in the table, swap the letter, then print it out. Do the same as before with this code. Can you predict what Python will do?
Step34: Oops! It seemed to be doing fine, but we crashed on the 5th letter. Python called it a "KeyError". What do you think that means?
We'll need to do some "debugging." Try modifying your loop like this. This way, we'll see each letter before and after it's encoded. We won't leave this in out final program, but it will help us to find the problem.
Step35: It's the space! Python is complaining because we asked for the table entry with a space, ' ', in the left column. But there is no such entry. Remember when we called the left hand column the "keys"? A KeyError means we asked for a key that isn't in the table. It even told us the incorrect key was ' '--the space! There are several ways to solve this. What would you do? Take a minute and try it.
One way to solve this is just to add a dictionary entry for ' '.
Step36: Now, there is an entry for ' '. It just gets replaced by ' ' again. Now our original program works.
Step37: We did it! This is just what we'd expect if we encoded this message by hand.
We can do a few things to improve it. First, we can create a new string rather than printing our encoded message with one letter on each line. We'll first create a string that has no letters in it. As we encode each letter, we'll add it to the string. Then we'll print it at the end.
Step38: Let's look closer at this one, too. At the beginning,
coded_message
had no letters at all. But at each step of the loop, instead of printing
table[letter]
we add it to the end of
coded_message
by writing
coded_message = coded_message + table[letter]
To Python, the equal sign means making a change. Whatever is to the left of the = is the thing we're replacing. To programmers, this 'thing' is called a 'variable'. A variable is a name that represents something else (a number, a word, a dictionary) in your program. In this case, Python knows we want to put something new into coded_message. And what is that something? It's the original coded_message plus table[letter].
Let's consider one more thing. The line print (coded_message) is not indented. Why? What would happen if we indented that? Try it!
We can put this all together to have a single program that creates our table and encrypts our message
Step39: This is pretty slick. All we have to do is replace "this is a super secret message" with our own and Python does the rest of the work. Now, it's time to play around. Change the script above to encrypt a message of your own. One word of warning. We're not ready to use capital letters or punctuation yet. Use just lower case letters.
You can also try replacing "thequickbrownfox.." with your own sentence. What if your sentence is all x's ("xxxxxxxxxxxxx")?
Here are some other things to try
Step40: Getting Input From the User
It's a lot of work to modify your program for every new message you want to encode. Let's give the message another name like "uncoded_message"
Step41: Uh-oh! Can you find and fix the problem above?
We need to assign uncoded_message to be something. We can add a line like this
Step42: Important note for instructors
Step43: Now, let's add this to our main program. One quick tip. For now, when our program asks for input, you might want to use all lower case letters--no capitals and no punctuation. You're welcome to try anything you want, but don't be frustrated by the error messages. We'll discuss the reason for those a little later.
Step44: In the above code, we made a few other changes. Can you see them? We added some lines beginning with #. When we begin a line with #, we're telling Python that the line is just for other humans to read and Python ignores it. So, we can put whatever we want there and it won't affect our program. It's a good idea to put comments in your program so someone else knows what you're doing.
Decoding your message
Our encoded messages aren't much good if we can't decode them! To decode a message encoded with a substitution cipher, we switch the columns of the table. If during encoding, we replaced all the a's with t's, then to decode we should change the t's back into a's. But our table generated with the "quick brown fox" sentence creates a problem. Here it is below.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td><td><br></td><td>Old letter<br></td><td>New letter</td>
<tr><td>A</td><td>T</td><td></td><td>N</td><td>F</td>
<tr><td>B</td><td>H</td><td></td><td>O</td><td>O</td>
<tr><td>C</td><td>E</td><td></td><td>P</td><td>X</td>
<tr><td>D</td><td>Q</td><td></td><td>Q</td><td>J</td>
<tr><td>E</td><td>U</td><td></td><td>R</td><td>U</td>
<tr><td>F</td><td>I</td><td></td><td>S</td><td>M</td>
<tr><td>G</td><td>C</td><td></td><td>T</td><td>P</td>
<tr><td>H</td><td>K</td><td></td><td>U</td><td>E</td>
<tr><td>I</td><td>B</td><td></td><td>V</td><td>D</td>
<tr><td>J</td><td>R</td><td></td><td>W</td><td>O</td>
<tr><td>K</td><td>O</td><td></td><td>X</td><td>V</td>
<tr><td>L</td><td>W</td><td></td><td>Y</td><td>E</td>
<tr><td>M</td><td>N</td><td></td><td>Z</td><td>R</td>
</table></html>
In this table, 'c', 'u' and 'y' are all being changed into 'e'. So, when my encoded message contains an 'e', how will I know whether it began as a c, u or y?
We can solve this by making sure each letter appears only once in 'sentence.' We could, for example, assign sentence to be 'thequickbrownfxjmpdvlazygs'. This is easy to do and gives a well-behaved table, but with just a little thought, we can get Python to remove the duplicate letters for us.
### 'if' statements
An 'if' statement will do the trick here. 'If' statements allow us to provide two sets of instructions and letting Python decide which set to run based on a test that we determine. In our program, we're going to tell Python to consider each letter of our sentence individually. If the letter is not already part of our table, then use it. If not, ignore it. But let's start with an easier one first. Run the box below with Ctrl-Enter.
Step45: That looks easy enough.
Step46: Aha. No message that time. Notice that the indenting rules are the same for "if" statements as for "for" statements. As long as you indent, Python will treat each new statement as part of the "if" clause and only execute it if the "if" condition is true. What if I'm not Luke?
Next, let's let the user decide her own name.
Step47: You should get a message only if you tell Python your name is Luke. There's one new idea here. I used "==" instead of "is" here. In Python, these two mean almost the same thing. Usually you can get away with using either, but it's a good idea to always use == when comparing two strings. This is particularly true when one string is returned by input.
We could do this test for a list of names. Can you modify the code above to repeat this for a list of names, say
["Leia", "Luke", "Anakin"]
and only greet Luke? It's a little bit tricky. If Python gives you an error message, be sure to read it. It often gives you a clue about how to fix the problem. If you give up, scroll down to see my answer.
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
Step48: We combined the for loop that we already know with the if statement we're just learning about. We made a list and for every name in the list we first checked if that name is "Luke", then sent a message only to Luke.
The indentation was tricky here. Look at the line that starts with print. We indented that one a long way! That's because we wanted that line to be part of the "if" code. But the "if" statement was already indented because it's in the for loop. So we indented it twice. If we don't indent enough, we'll get an error from Python. Try it. One rule about Python is that after a line that ends with a colon (like if and for statements) we always have to indent one more level.
Poor Leia. It's a shame to leave her out. (Spoiler alert
Step49: The other way is to explicitly leave Anakin out. Python also understands "not", but in a funny way. Watch.
Step50: Python first tests
name is "Anakin"
then the "not" tells Python to reverse that decision and execute the code only if the name is not Anakin.
Removing our duplicates with "if"
Now, let's use an if loop to remove the duplicates from our sentence. It looks a lot like our greeting loop above.
Step51: That looks almost right. It would be perfect if not for that lousy space. To get rid of that, we can use "and" to test two different things. With "or" we only needed one of the two tests to be true ("either Luke OR Leia") but with "and", both statements must be true. Try this.
Step52: Perfect. Let's incorporate this into our code. In the code below, I've left a tricky "bug". A "bug" is what programmers call a mistake in the code that causes the program not to work right.
Because of our bug, we seem to be getting the very same output. Try using "cuy" as your message. Why are c and y both still being changed to 'e'? To get really confused, try making your message "jdatddr". What's going on there? You might want to add some "print" statements to help you see what's going on.
Step53: Ouch! Errors again. Unfortunately, lots of computer programming is finding problems ("bugs") in your code. Try to solve the problem above. It might help to add some "print" statements.
If you couldn't find the problem above (which is OK. It's a hard bug to see), I'll tell you that we've gone to the trouble of creating a "passphrase" with no spaces or duplicates, but then we still used "sentence" to create our table. In place of the line
table[ everyletter[num]] = sentence[num]
write
table[ everyletter[num]] = passphrase[num]
See if you get better answers.
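Putting the fix together, the corrected table-building might look roughly like this (a sketch that reuses the variable names from earlier; your own version may differ):
table = {}
passphrase = ''
for letter in sentence:
    if letter not in passphrase and letter != ' ':
        passphrase = passphrase + letter
for num in range(26):
    table[ everyletter[num] ] = passphrase[num]
table[' '] = ' '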
Decoding
At last, we're ready to do some decoding. Suppose your friend encoded a message with the program above and sent it to you. The message you got was "nuuv nu bf vku jtpo tv dby vkbpvg". How would you decode it?
Well, if we can build the same table, we can use it in reverse. In our new table m's become n's during encoding. So, during decoding, I change all the n's back to m's. All the e's became u's so I change all u's back to e's. The t's were changed to v's, so I change them back to t's and so on. Already I know that the first two words of the message are "meet me". Here's the full table so you can decode the rest of the message. Once you're done, we'll see how to get Python to do it.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td><td><br></td><td>Old letter<br></td><td>New letter</td>
<tr><td>A</td><td>T</td><td></td><td>N</td><td>F</td>
<tr><td>B</td><td>H</td><td></td><td>O</td><td>X</td>
<tr><td>C</td><td>E</td><td></td><td>P</td><td>J</td>
<tr><td>D</td><td>Q</td><td></td><td>Q</td><td>M</td>
<tr><td>E</td><td>U</td><td></td><td>R</td><td>P</td>
<tr><td>F</td><td>I</td><td></td><td>S</td><td>D</td>
<tr><td>G</td><td>C</td><td></td><td>T</td><td>V</td>
<tr><td>H</td><td>K</td><td></td><td>U</td><td>L</td>
<tr><td>I</td><td>B</td><td></td><td>V</td><td>A</td>
<tr><td>J</td><td>R</td><td></td><td>W</td><td>Z</td>
<tr><td>K</td><td>O</td><td></td><td>X</td><td>Y</td>
<tr><td>L</td><td>W</td><td></td><td>Y</td><td>G</td>
<tr><td>M</td><td>N</td><td></td><td>Z</td><td>S</td>
</table></html>
The hardest part of decoding was having to search through the "New letter" column for each letter. Other than that, it was exactly like encoding except that we're finding letters in the "New letter" column, then replacing them with the letter in the "Old letter" column. So, we can use Python to decode by just switching the columns. In the language of Python dictionaries, we need to switch the "keys" and "values." Remember that, for a dictionary, the "keys" were always on the left side of the equal sign and inside the square brackets. The values were to the right of the equal sign. So, instead of building our table like this
for num in range(26):
Step54: (Did you forget to switch "everyletter" and "passphrase"?)
Finishing up
As our final task, we're going to give the user the option to encode or decode with the same script. Let's use "input" to ask the user whether she's encoding or decoding. If she's decoding, let's leave the table just like it is above. If she's encoding, let's rebuild the table like we did originally. Try it yourself on the code below. My answer is beneath that. | Python Code:
print ("Hello my name is Levi.")
Explanation: Sending Secret Messages with Python
This notebook will teach you how to send secret messages to your friends using a computer language called "Python." Python is used by thousands of programmers around the world to create websites and video games, and to do science and math. Even this web page is written using Python!
Before we learn any Python, we need to learn just a bit about secret codes. One way to make your message secret is using a "substitution cipher." This means that we replace every letter in our message with a different letter. For example, all of the c's in our message might become j's. The message will look like garbled nonsense to anyone who doesn't know which letter has been substituted. By the time you're done with this page, you'll be able to make this message
this is a super secret message
Look like this
pkbm bm t mexuu mueuup nummtcu
Only someone who knows the password will be able to turn it back into the original message. (By the way, did you notice, above that everywhere there's an 's' in the original message, the coded message has an 'm'?)
To use a substitution cipher, the first thing to do is to decide which letter will replace which. We'll create a table like this one:
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td>
<tr><td>A</td><td>N</td>
<tr><td>B</td><td>F</td>
<tr><td>C</td><td>R</td>
<tr><td>D</td><td>L</td>
<tr><td>E</td><td>A</td>
<tr><td>F</td><td>T</td>
<tr><td>G</td><td>Z</td>
<tr><td>H</td><td>B</td>
<tr><td>I</td><td>Q</td>
<tr><td>...</td><td>...</td>
</table></html>
I've only shown part of the table. We'll see the rest later.
The values in your table are not important except for a few rules:
1. Every letter in the alphabet must be in the left column.
2. Every letter must be in the right column.
3. Both you and the person you're sending to must know this table.
To encode using your table is easy. For each letter in your message, instead of writing that letter, find it in the left column of your table, then write down the letter in the same row, but in the right column. For example, if your message is
ABBA HIGH DEF,
using the table above, you would encode it as
NFFN BQZB LAT.
Take a minute to make sure I got it right.
## How to use this notebook
This lesson is written as a Jupyter notebook. Notebooks are a convenient way to add Python to a website. Wherever you see a box with "In [ ]:" next to it, you can click inside the box, then hold down the Ctrl button and press the Enter button at the same time to make Python run the code inside. Try it with the box below.
End of explanation
table={}
print (table)
Explanation: You can also change the code in the boxes before you run them. Try replacing "Levi" with your name in the box above. Now, while the cursor is still in the box, push Ctrl+Enter to see Python echo it back. From here on out, make a habit of executing each box in this notebook with Ctrl-Enter.
Now that you know a little bit about Jupyter notebooks, we're ready to learn some Python!
Python Dictionaries
To get Python to do this work for us, the first thing to do is to decide on a table and create a 'dictionary' in Python that keeps track of which letters get replaced by which new letters. A Python dictionary is a lot like the two-column table we've been talking about. We create an empty table like this:
End of explanation
table={}
table['a'] = 'n'
print (table)
Explanation: Don't go on until you click into the box above and push Ctrl+Enter. You should see {} appear below the box. One more thing about Jupyter notebooks. Each time you execute a box, Python remembers what you've done. Here, we created a dictionary called "table." When we refer to "table" in the next box, Python will remember that "table" is an empty dictionary. The code in the box below adds one entry to your dictionary with the letter 'a' in the left column and 'n' in the right column. Try it out by pushing Ctrl-Enter in the box below.
End of explanation
newletter = table['a']
print(newletter)
Explanation: Don't be afraid to change the code in the boxes. You'll learn a lot by playing around, so don't be shy. Make some changes to the code in the boxes, then push Ctrl-Enter to see what you changed. You might get an error from Python. That's OK. See if you can figure out how to fix it. As you play around, remember that the notebook is remembering your changes. If you want to clear the notebook's memory and start again, look for the menu option above called "Kernel." Select "Restart" from that sub menu.
Quick tip: In the box above, make sure to put the quotation marks around your letters.
Now that our table has some values in it, we can read from it like this:
End of explanation
table['b'] = 'f'
table['c'] = 'r'
print(table)
table.keys()
table.values()
print(table['c'])
print(table)
Explanation: The first line, newletter = table['a'], tells Python to read the entry we gave it for 'a' and call it by a new name, newletter. In this case, table['a'] is 'n'. After that, every time we refer to newletter, Python will remember that we told it newletter is 'n'. So, when we ask Python to print newletter, it remembers and prints 'n'. newletter is what programmers call a 'variable'--a name that refers, in your program, to some other value.
A few more things about our Python tables (also called 'dictionaries'). The values on the left hand side of the table are called the "keys." The right hand are called the "values." Let's add a few rows to our table, then ask Python some questions. Push Ctrl-Enter in the boxes below and see what Python answers.
End of explanation
print(table['d'])
Explanation: The colon (:) separates a "key" from its "value." 'a':'n' means that all the a's become n's during encoding.
What if we ask for a table entry that hasn't been set yet?
End of explanation
'd' in table.keys( )
Explanation: Python didn't like that and sent us back a KeyError. To avoid these errors, we can first check to see whether that table entry exists.
End of explanation
table['d'] = 'n'
'd' in table.keys()
Explanation: We just asked Python, "Is d one of the keys in our table?" Python said, "No."
Now, let's create a table entry for 'd' and ask again.
End of explanation
table['d'] = 'l'
print(table['d'])
Explanation: A key can only appear in the left column one time. If I try to assign one letter on the left column to two different letters on the right, the dictionary will store the last value.
End of explanation
myList=['t','h','e','q']
print(myList)
Explanation: The old table entry for 'd' is lost and replaced by the new value. Note that more than one key can correspond to a single value. When we assigned 'd' to be changed to 'n', both 'a' and 'd' were being translated to 'n'. That's perfectly OK.
Python Lists
Way to go! Hopefully, you're feeling comfortable with dictionaries. Make sure to execute the code in the boxes as you go by clicking into the box and pushing Ctrl-Enter.
Next, we'll take a look at lists. A python list is like a dictionary where the keys are numbers, specifically integers (i.e. 0,1,2,3,4,5...). We can create a list like this:
End of explanation
myList[2]
Explanation: The list works a lot like a dictionary with numbers in the left column. We can ask for specific letters in the list by referring to their position in the list. Try this:
End of explanation
myList[0]
Explanation: Wait, but isn't 'e' the third letter in the list? It's important to remember that computers start counting from zero.
End of explanation
myList[4]
Explanation: In a Python list, position 0 is the very start of the list. Now, there are 4 letters in the list. What do you think will happen if we ask for the letter in position 4?
End of explanation
myString = "the quick brown fox jumped over the lazy dogs"
myString
Explanation: What do you think this error means? (Hint: "index" is the number inside the square brackets.)
The first element in our list is number 0. The last is number 3.
A quick note: if Python starts responding to you with errors like 'myList' is not defined it usually means you missed executing one of the code boxes. Make sure you push Ctrl-Enter in each box with a In [ ] next to it.
Python strings are lists
A bunch of text all together in Python is called a "string." Here's a string
End of explanation
myString[2]
Explanation: Our string acts just like a list
End of explanation
myString[3]
Explanation: The third letter in our string is 'e'. We ask Python for letter number 2 since Python starts counting from zero.
Even the spaces in our string are considered letters.
End of explanation
table={}
table['a']='t'
print(table)
Explanation: The output above just looks like two quote marks, but there's actually a space between them. The 4th "letter" in our string is actually a space.
Take a minute to write some code yourself involving strings. You can edit the code in the box above and push Ctrl-Enter to execute it. Can you figure out which number gives you the 'q'? Or the 'b'? What if you pick a number that's too large, like 100?
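By the way, if you get stuck, Python can find a letter's position for you. This is just an extra hint, not something we need later in the lesson:
print(myString.index('q'))
Run that in a box and Python will answer with the number that gets you the 'q'.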
Creating a cipher table from a sentence
Let's create a table we can use to encode and decode our messages. I want to change all of my a's into t's. How can we create a Python dictionary to do that?
End of explanation
table['b']='h'
print(table)
Explanation: Next, I'll add a row for changing all of the b's.
End of explanation
sentence = "the quick brown fox jumped over the lazy dogs"
Explanation: In fact, what I'd like to do is to take an easy to remember phrase and use it to determine the right hand side of my table. I'd like to put 'abcdefghijklmnopqrstuvwxyz' down the left hand column and 'the quick brown fox jumped over the lazy dogs' down the right hand side. So 'a' changes to 't'. 'b' changes to 'h'. 'c' changes to 'e' and so on.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td>
<tr><td>a</td><td>t</td>
<tr><td>b</td><td>h</td>
<tr><td>c</td><td>e</td>
<tr><td>d</td><td>q</td>
<tr><td>e</td><td>u</td>
<tr><td>f</td><td>i</td>
<tr><td>g</td><td>c</td>
<tr><td>h</td><td>k</td>
<tr><td>i</td><td>b</td>
<tr><td>...</td><td>...</td>
</table></html>
How can I do this in Python? We can write a loop to create a dictionary from two strings. Let's build the same string as before.
End of explanation
sentence[10]
Explanation: Just like before, we can treat it like a list of letters.
End of explanation
everyletter = "abcdefghijklmnopqrstuvwxyz"
Explanation: Now, let's create another string like this.
End of explanation
table={}
table['a']='t'
table['b']='h'
table['c']='e'
table['d']='q'
table['e']='u'
table['f']='i'
table['g']='c'
#And on and on...
table
Explanation: We're going to use these two strings to create a table like this
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td>
<tr><td>a</td><td>t</td>
<tr><td>b</td><td>h</td>
<tr><td>c</td><td>e</td>
<tr><td>d</td><td>q</td>
<tr><td>e</td><td>u</td>
<tr><td>f</td><td>i</td>
<tr><td>g</td><td>c</td>
<tr><td>h</td><td>k</td>
<tr><td>i</td><td>b</td>
<tr><td>j</td><td>r</td>
<tr><td>k</td><td>o</td>
<tr><td>l</td><td>w</td>
<tr><td>m</td><td>n</td>
<tr><td>n</td><td>f</td>
<tr><td>o</td><td>o</td>
<tr><td>p</td><td>x</td>
<tr><td>q</td><td>j</td>
<tr><td>r</td><td>u</td>
<tr><td>s</td><td>m</td>
<tr><td>t</td><td>p</td>
<tr><td>u</td><td>e</td>
<tr><td>v</td><td>d</td>
<tr><td>w</td><td>o</td>
<tr><td>x</td><td>v</td>
<tr><td>y</td><td>e</td>
<tr><td>z</td><td>r</td>
</table></html>
To create this list in Python, we could do this:
End of explanation
table={}
table[ everyletter[0]] = sentence[0]
table[ everyletter[1]] = sentence[1]
table[ everyletter[2]] = sentence[2]
table[ everyletter[3]] = sentence[3]
table[ everyletter[4]] = sentence[4]
#And on and on...
table
Explanation: But what if someone discovered our passphrase ("the quick brown fox...")? We'd have to change each of the individual letters. Instead, we can tell Python to read the letters from the strings, everyletter and sentence, that we created above. Take a look at this. We're using some fancier Python tricks that I'll explain in a minute.
End of explanation
table[ everyletter[1]] = sentence[0]
print(table)
Explanation: This requires some explanation. The first line, I think you'll remember, creates an empty table or dictionary. Then, when we write
table[ everyletter[0]] = sentence[0]
Python will look inside the square brackets, [], following "table[" to see which letter to put in the left hand column. Instead of a letter, Python sees everyletter[0]. To Python, this means the first letter in the string called "everyletter"--a. So, Python puts 'a' in the left hand column.
Next, it looks for what to put in the right column after the equal sign. There, Python sees
sentence[0]
which it knows means the first letter in the string "sentence" or 't'. So, Python creates a row in the table with 'a' in the left column and 't' in the right column. When we started encoding message, we'll turn all the a's into t's.
So, Python will act as if we wrote
table['a'] = 't'
adding one entry to our translation table.
The line
table[ everyletter[1]] = sentence[1]
creates another new row with the second letter in "everyletter" and the second letter in "sentence." Can you figure out what will be in the left and right columns of this new row?
This is tricky. When learning new programming ideas, it's important to play around a little. Play around with adding rows to your table. What would happen if I wrote
End of explanation
table={}
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
Explanation: What changed? Why? This is another good chance to play around by writing your own code.
Loops in Python
Instead of writing 26 lines like table[ everyletter[14]] = sentence[14], I'd like to write it just once, then have the computer repeat it 26 times increasing the number each time. For this, we use a "for loop." Try this
End of explanation
for name in ["Bart", "Lisa", "Maggie"]:
print ("Hi, " + name)
Explanation: Don't worry if this isn't clear yet. We'll take a good look at this. The first line creates an empty dictionary. The next line is the loop
for num in range(26):
In ordinary English, it tells Python to create a list of 26 numbers starting from zero. Then Python executes the next command (table[ everyletter[num]] = sentence[num]) one time for each number in the list. Each time it replaces 'num' with one of the numbers.
Of course, you don't have to use 'num.' You can use anything you want. You could use your name
for maria in range(26):
With this new version, it will replace 'maria' with each of the numbers.
As a side note, you can use any name you want, but in writing computer programs, it's best to be as clear as you can about what all the parts are doing. You never know when you'll have to read your own program to fix problems or add something new. Also, when you write larger programs, you're usually working with a team of programmers. It might be someone else reading and fixing your code. Make sure your program is clear enough for someone else to read and understand.
Let's do a simpler loop first
End of explanation
for villain in ["Kylo Ren", "The Joker", "Magneto", "Megamind"]:
print ("Oh no! It's " + villain)
print ("I'll stop you!")
print ("You'll never get away with this.")
Explanation: Here, we gave Python a list of three names. For each name on the list, Python executes the commands that follow, replacing "name" with one of the names in the list. This is the first time we've seen + in Python, but it's an easy one. "Hi, " + name, just means that we add each name in the list to the end of "Hi, " and send that to print.
Try editing the loop above. Add your name (maybe it's Homer?). Make a list of your favorite sports teams and make Python cheer for them.
One tricky rule in Python involves indenting. It's possible to put lots of commands inside one for loop, but Python needs to be able to know which commands are in the loop and which are not. Each line that we want to be part of the loop, we indent, but we don't indent the for loop statement itself. When Python sees a line that's not indented, it knows you've finished writing the loop. Take a look at this. Change the indenting and see what happens. What if you don't indent the "I'll stop you"? What if you do indent "You'll never get away with this"? Try it!
End of explanation
for villain in ["Kylo Ren", "The Joker", "Magneto", "Megamind"]:
print ("Oh no! It's " + villain)
print ("I'll stop you!")
print ("You'll never get away with this.")
Explanation: The spaces before the print statements tell us whether they're part of the loop to be executed once for each villain, or just a lone command to be executed only once.
What about this?
End of explanation
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
Explanation: Python didn't like that. The line that says print("I'll stop you!") was not indented. So, Python thought you were done with that loop. When it saw that print("You'll never get away with this.") was indented, it didn't know what loop it was part of. So, it sent an error message. Errors are part of programming. They can be intimidating, but it can help you to sort out the problem if you read the error message. Here, Python told us "unexpected indent" and referred to print("You'll never get away with this."). It clearly didn't expect that line to be indented.
Using Loops to make a table
Now let's use our loops to create a code table
End of explanation
for i in range(5):
print(i)
Explanation: The "range" function just generates a list of numbers beginning with zero. The code below creates a list from 0 to 4 (5 total numbers), then prints each one.
End of explanation
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table
Explanation: So, to make our table, we make a list of numbers from 0 to 25, then for each number, Python repeats the command
table[ everyletter[num] ] = sentence[num]
but it replaces 'num' with each of the numbers in the list. So it's exactly the same as writing
table[ everyletter[0] ] = sentence[0]
table[ everyletter[1] ] = sentence[1]
table[ everyletter[2] ] = sentence[2]
table[ everyletter[3] ] = sentence[3]
table[ everyletter[4] ] = sentence[4]
table[ everyletter[5] ] = sentence[5]
...
(skip a few)
...
table[ everyletter[25] ] = sentence[25]
except we only had to write those two lines. Much easier! (P.S. Why just 25? What happened to 26?)
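If you want to peek at the answer, here is a tiny extra experiment (not part of our main program) you can run in a box:
print(list(range(26)))
You'll see that the numbers start at 0 and stop at 25, which is why 26 never shows up.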
We can combine everything we've done so far into one script. Read it through carefully and see if you can describe what each line does.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("rl6-zJharrs")
Explanation: BREAK TIME!
Congratulations! You're about halfway to the end. Take a minute to stand up, stretch your legs and savor your new knowledge of Python dictionaries, lists and loops. Then, push Ctrl-Enter in the box below to watch the Minions. When you're done, we'll use the table we just created to encode our message.
End of explanation
for letter in "this is a super secret message":
print (letter)
Explanation: Using our table to encode
Welcome back! We're almost ready to encode our message using our table. Start with this for loop. Before you hit Ctrl-Enter, see if you can predict how Python will respond. Then run it and see if you were right.
End of explanation
for letter in "this is a super secret message":
print (table[letter])
Explanation: That's not very secret yet. This time, instead of printing out each letter of our message, let's look in the table, swap the letter, then print it out. Do the same as before with this code. Can you predict what Python will do?
End of explanation
for letter in "this is a super secret message":
print ("old: " + letter)
print ("new: " + table[letter])
Explanation: Oops! It seemed to be doing fine, but we crashed on the 5th letter. Python called it a "KeyError". What do you think that means?
We'll need to do some "debugging." Try modifying your loop like this. This way, we'll see each letter before and after it's encoded. We won't leave this in out final program, but it will help us to find the problem.
End of explanation
table[' '] = ' '
Explanation: It's the space! Python is complaining because we asked for the table entry with a space, ' ', in the left column. But there is no such entry. Remember when we called the left hand column the "keys"? A KeyError means we asked for a key that isn't in the table. It even told us the incorrect key was ' '--the space! There are several ways to solve this. What would you do? Take a minute and try it.
One way to solve this is just to add a dictionary entry for ' '.
End of explanation
for letter in "this is a super secret message":
print (table[letter])
table[' '] = ' '
Explanation: Now, there is an entry for ' '. It just gets replaced by ' ' again. Now our original program works.
End of explanation
coded_message = ""
for letter in "this is a super secret message":
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: We did it! This is just what we'd expect if we encoded this message by hand.
We can do a few things to improve it. First, we can create a new string rather than printing our encoded message with one letter on each line. We'll first create a string that has no letters in it. As we encode each letter, we'll add it to the string. Then we'll print it at the end.
End of explanation
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
coded_message = ""
for letter in "this is a super secret message":
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Let's look closer at this one, too. At the beginning,
coded_message
had no letters at all. But at each step of the loop, instead of printing
table[letter]
we add it to the end of
coded_message
by writing
coded_message = coded_message + table[letter]
To Python, the equal sign means making a change. Whatever is to the left of the = is the thing we're replacing. To programmers, this 'thing' is called a 'variable'. A variable is a name that represents something else (a number, a word, a dictionary) in your program. In this case, Python knows we want to put something new into coded_message. And what is that something? It's the original coded_message plus table[letter].
Let's consider one more thing. The line print (coded_message) is not indented. Why? What would happen if we indented that? Try it!
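One more aside, just a preview and not something we'll use in this lesson: experienced Python programmers often build a string like this in a single line with the join trick:
coded_message = "".join(table[letter] for letter in "this is a super secret message")
print (coded_message)
It does exactly the same job as our loop, so feel free to ignore it for now.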
We can put this all together to have a single program that creates our table and encrypts our message
End of explanation
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
print (table)
table[' '] = ' '
coded_message = ""
for letter in "my teacher is a handsome genius":
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: This is pretty slick. All we have to do is replace "this is a super secret message" with our own and Python does the rest of the work. Now, it's time to play around. Change the script above to encrypt a message of your own. One word of warning. We're not ready to use capital letters or punctuation yet. Use just lower case letters.
You can also try replacing "thequickbrownfox.." with your own sentence. What if your sentence is all x's ("xxxxxxxxxxxxx")?
Here are some other things to try:
- What happens if you encode "abcdefghijklmnopqrstuvwxyz" as your secret message? Where did the "lazy dogs" go?
- Encode your message twice to make it harder to crack. You can do this by putting your encoded output in as your secret message, but there's an easier way. What if you replace the line
coded_message = coded_message + table[letter]
with
coded_message = coded_message + table[table[letter]]
What do you expect it to do? See if you can predict the result for a short message like your name.
- Can you decode your message? What happens if you switch the alphabet and the sentence? Why?
Here's another copy of the same code so you'll still have a clean copy to refer to as you play around.
End of explanation
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Getting Input From the User
It's a lot of work to modify your program for every new message you want to encode. Let's give the message another name like "uncoded_message"
End of explanation
name = input("What is your name? ")
print ("Well, hello, " + name)
Explanation: Uh-oh! Can you find and fix the problem above?
We need to assign uncoded_messageto be something. We can add a line like this:
uncoded_message = "this is a super secret message"
Add it to the program above and see how it works. Make sure to add it <i>before</i> the loop. Python runs your commands starting from the top. Before it can use the variable uncoded_message it needs you to tell it what uncoded_message is.
Instead of having the secret message in the program, there is a way to ask the user to provide the message. The command for this is "input." Try this.
End of explanation
uncoded_message = input("What message should I encode?")
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Important note for instructors: If students are getting an error above, you may be using Python version 2.7. If you can switch to Python 3, it will work better. Otherwise, you can get around these errors by putting your answers to "input" in quotation marks. For example, when it asks, What is your name? type "Marcus" with the quotation marks.
Let's use the same thing for getting a secret message. Modify the code below to use input to create uncoded_message.
End of explanation
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "thequickbrownfoxjumpedoverthelazydogs"
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Now, let's add this to our main program. One quick tip. For now, when our program asks for input, you might want to use all lower case letters--no capitals and no punctuation. You're welcome to try anything you want, but don't be frustrated by the error messages. We'll discuss the reason for those a little later.
End of explanation
name = "Luke"
if name is "Luke":
print ("May the force be with you, " + name)
Explanation: In the above code, we made a few other changes. Can you see them? We added some lines beginning with #. When we begin a line with #, we're telling Python that the line is just for other humans to read and Python ignores it. So, we can put whatever we want there and it won't affect our program. It's a good idea to put comments in your program so someone else knows what you're doing.
Decoding your message
Our encoded messages aren't much good if we can't decode them! To decode a message encoded with a substitution cipher, we switch the columns of the table. If during encoding, we replaced all the a's with t's, then to decode we should change the t's back into a's. But our table generated with the "quick brown fox" sentence creates a problem. Here it is below.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td><td><br></td><td>Old letter<br></td><td>New letter</td>
<tr><td>A</td><td>T</td><td></td><td>N</td><td>F</td>
<tr><td>B</td><td>H</td><td></td><td>O</td><td>O</td>
<tr><td>C</td><td>E</td><td></td><td>P</td><td>X</td>
<tr><td>D</td><td>Q</td><td></td><td>Q</td><td>J</td>
<tr><td>E</td><td>U</td><td></td><td>R</td><td>U</td>
<tr><td>F</td><td>I</td><td></td><td>S</td><td>M</td>
<tr><td>G</td><td>C</td><td></td><td>T</td><td>P</td>
<tr><td>H</td><td>K</td><td></td><td>U</td><td>E</td>
<tr><td>I</td><td>B</td><td></td><td>V</td><td>D</td>
<tr><td>J</td><td>R</td><td></td><td>W</td><td>O</td>
<tr><td>K</td><td>O</td><td></td><td>X</td><td>V</td>
<tr><td>L</td><td>W</td><td></td><td>Y</td><td>E</td>
<tr><td>M</td><td>N</td><td></td><td>Z</td><td>R</td>
</table></html>
In this table, 'c', 'u' and 'y' are all being changed into 'e'. So, when my encoded message contains an 'e', how will I know whether if began as a c, u or y?
We can solve this by making sure each letter appears only once in 'sentence.' We could, for example, assign sentence to be 'thequickbrownfxjmpdvlazygs'. This is easy to do and gives a well-behaved table, but with just a little thought, we can get Python to remove the duplicate letters for us.
### 'if' statements
An 'if' statement will do the trick here. 'If' statements allow us to give Python a test, and Python only runs a set of instructions when that test turns out to be true. In our program, we're going to tell Python to consider each letter of our sentence individually. If the letter is not already part of our passphrase, we keep it. If it is, we skip it. But let's start with an easier one first. Run the box below with Ctrl-Enter.
End of explanation
name = "Anakin"
if name is "Luke":
print ("May the force be with you, " + name)
Explanation: That looks easy enough.
End of explanation
name = input("What is your name? ")
if name == "Luke":
print ("May the force be with you, " + name)
Explanation: Aha. No message that time. Notice that the indenting rules are the same for "if" statements as for "for" statements. As long as you indent, Python will treat each new statement as part of the "if" clause and only execute it if the "if" condition is true. What if I'm not Luke?
Next, let's let the user decide her own name.
End of explanation
for name in ["Luke", "Leia", "Anakin"]:
if name is "Luke":
print ("May the force be with you, " + name)
Explanation: You should get a message only if you tell Python your name is Luke. There's one new idea here. I used "==" instead of "is" here. In Python, "is" asks whether two things are the very same object, while "==" asks whether they have the same value. When you compare two strings, and especially when one of them comes from input, always use ==.
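If you'd like to see the difference for yourself, here is a tiny extra experiment (the names are made up and nothing later depends on it):
first_half = "Lu"
full_name = first_half + "ke"
print(full_name == "Luke")
This prints True because == compares the letters themselves, even though full_name was glued together from two pieces. "is" asks a different question, whether two things are the very same object in Python's memory, which is not what we want when checking what someone typed.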
We could do this test for a list of names. Can you modify the code above to repeat this for a list of names, say
["Leia", "Luke", "Anakin"]
and only greet Luke? It's a little bit tricky. If Python gives you an error message, be sure to read it. It often gives you a clue about how to fix the problem. If you give up, scroll down to see my answer.
<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>
End of explanation
for name in ["Luke", "Leia", "Anakin"]:
if name is "Luke" or name is "Leia":
print ("May the force be with you, " + name)
Explanation: We combined the for loop that we already know with the if statement we're just learning about. We made a list and for every name in the list we first checked if that name is "Luke", then sent a message only to Luke.
The indentation was tricky here. Look at the line that starts with print. We indented that one a long way! That's because we wanted that line to be part of the "if" code. But the "if" statement was already indented because it's in the for loop. So we indented it twice. If we don't indent enough, we'll get an error from Python. Try it. One rule about Python is that after a line that ends with a colon (like if and for statements) we always have to indent one more level.
Poor Leia. It's a shame to leave her out. (Spoiler alert: she's Luke's sister). We can include her in one of two ways. Python understands the word "or", so we can give Python two names.
End of explanation
for name in ["Luke", "Leia", "Anakin"]:
if not name is "Anakin":
print ("May the force be with you, " + name)
Explanation: The other way is to explicitly leave Anakin out. Python also understands "not", but in a funny way. Watch.
End of explanation
sentence = "the quick brown fox jumped over the lazy dogs"
passphrase = ""
for letter in sentence:
if not letter in passphrase:
passphrase = passphrase + letter
print (passphrase)
Explanation: Python first tests
name is "Anakin"
then the "not" tells Python to reverse that decision and execute the code only if the name is not Anakin.
Removing our duplicates with "if"
Now, let's use a loop with an if statement to remove the duplicates from our sentence. It looks a lot like our greeting loop above.
End of explanation
sentence = "the quick brown fox jumped over the lazy dogs"
passphrase = ""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
Explanation: That looks almost right. It would be perfect if not for that lousy space. To get rid of that, we can use "and" to test two different things. With "or" we only needed one of the two tests to be true ("either Luke OR Leia") but with "and", both statements must be true. Try this.
End of explanation
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove the duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ everyletter[num]] = sentence[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Perfect. Let's incorporate this into our code. In the code below, I've left a tricky "bug". A "bug" is what programmers call a mistake in the code that causes the program not to work right.
Because of our bug, we seem to be getting the very same output. Try using "cuy" as your message. Why are c and y both still being changed to 'e'? To get really confused, try making your message "jdatddr". What's going on there? You might want to add some "print" statements to help you see what's going on.
End of explanation
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ everyletter[num]] = passphrase[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: Ouch! Errors again. Unfortunately, lots of computer programming is finding problems ("bugs") in your code. Try to solve the problem above. It might help to add some "print" statements.
If you couldn't find the problem above (which is OK. It's a hard bug to see), I'll tell you that we've gone to the trouble of creating a "passphrase" with no spaces or duplicates, but then we still used "sentence" to create our table. In place of the line
table[ everyletter[num]] = sentence[num]
write
table[ everyletter[num]] = passphrase[num]
See if you get better answers.
Decoding
At last, we're ready to do some decoding. Suppose your friend encoded a message with the program above and sent it to you. The message you got was "nuuv nu bf vku jtpo tv dby vkbpvg". How would you decode it?
Well, if we can build the same table, we can use it in reverse. In our new table m's become n's during encoding. So, during decoding, I change all the n's back to m's. All the e's became u's, so I change all u's back to e's. The t's were changed to v's, so I change them back to t's and so on. Already I know that the first two words of the message are "meet me". Here's the full table so you can decode the rest of the message. Once you're done, we'll see how to get Python to do it.
<html><table>
<tr><td>Old letter<br></td><td>New letter<br></td><td><br></td><td>Old letter<br></td><td>New letter</td>
<tr><td>A</td><td>T</td><td></td><td>N</td><td>F</td>
<tr><td>B</td><td>H</td><td></td><td>O</td><td>X</td>
<tr><td>C</td><td>E</td><td></td><td>P</td><td>J</td>
<tr><td>D</td><td>Q</td><td></td><td>Q</td><td>M</td>
<tr><td>E</td><td>U</td><td></td><td>R</td><td>P</td>
<tr><td>F</td><td>I</td><td></td><td>S</td><td>D</td>
<tr><td>G</td><td>C</td><td></td><td>T</td><td>V</td>
<tr><td>H</td><td>K</td><td></td><td>U</td><td>L</td>
<tr><td>I</td><td>B</td><td></td><td>V</td><td>A</td>
<tr><td>J</td><td>R</td><td></td><td>W</td><td>Z</td>
<tr><td>K</td><td>O</td><td></td><td>X</td><td>Y</td>
<tr><td>L</td><td>W</td><td></td><td>Y</td><td>G</td>
<tr><td>M</td><td>N</td><td></td><td>Z</td><td>S</td>
</table></html>
The hardest part of decoding was having to search through the "New letter" column for each letter. Other than that, it was exactly like encoding except that we're finding letters in the "New letter" column, then replacing them with the letter in the "Old letter" column. So, we can use Python to decode by just switching the columns. In the language of Python dictionaries, we need to switch the "keys" and "values." Remember that, for a dictionary, the "keys" were always on the left side of the equal sign and inside the square brackets. The values were to the right of the equal sign. So, instead of building our table like this
for num in range(26):
table[ everyletter[num]] = passphrase[num]
we swap "everyletter" and "passphrase"
for num in range(26):
table[ passphrase[num]] = everyletter[num]
It's just that easy and we've got a decoder. Make the change yourself here and check it with the message above ("nuuv nu bf vku jtpo tv dby vkbpvg"). What's the decoded message?
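By the way, once you're comfortable with dictionaries, there is a shortcut (an extra trick, not used in this lesson) that builds the decoding table straight from the encoding table by flipping every key and value:
decode_table = {}
for old_letter in table:
    decode_table[ table[old_letter] ] = old_letter
Looping over a dictionary gives you its keys, so this runs once for each letter in the left column.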
End of explanation
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicate letters
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build the table
for num in range(26):
table[ passphrase[num]] = everyletter[num]
table[' '] = ' '
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
#First, create our substitution table
table = {}
everyletter = "abcdefghijklmnopqrstuvwxyz"
sentence = "the quick brown fox jumped over the lazy dogs"
#Remove duplicates
passphrase=""
for letter in sentence:
if letter not in passphrase and letter is not ' ':
passphrase = passphrase + letter
print (passphrase)
#Build a table for decoding
for num in range(26):
table[ passphrase[num]] = everyletter[num]
table[' '] = ' '
#**** This is the new part. If we're encoding, we rebuild the table
# But we switch everyletter and passphrase
task = input("Are you encoding or decoding?")
if task == "encoding":
#Build a table for encoding instead
print ("Remaking table for encoding...")
for num in range(26):
table[ everyletter[num]] = passphrase[num]
print (table)
#Get a message from the user
uncoded_message = input("Type your message here, then press enter: ")
#Encode and print the message
coded_message = ""
for letter in uncoded_message:
coded_message = coded_message + table[letter]
print (coded_message)
Explanation: (Did you forget to switch "everyletter" and "passphrase"?)
Finishing up
As our final task, we're going to give the user the option to encode or decode with the same script. Let's use "input" to ask the user whether she's encoding or decoding. If she's decoding, let's leave the table just like it is above. If she's encoding, let's rebuild the table like we did originally. Try it yourself on the code below. My answer is beneath that.
End of explanation |
11,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EuroPython program grid
Step1: Load the data
Step2: Clean up the data
Here I pick from talk_sessions only the talks with the type that I need for scheduling.
I also remove from all these talks the fields that I don't need for scheduling, to keep the printed output clean and short.
Step3: Declare the week schedule
Declare certain structures to be able to declare and define the conference schedule.
The information here will be used in the dict_query submodule to filter the talks.
Step4: Build the schedule conditions table
Step5: Group tags and count talks-per-tag
Step8: Filtering functions
Here I declare the functions used to filter talks using the dict_query-type queries defined above.
Step9: Distribute the talks along the schedule
Step10: Remaining talks
Print the remaining talks that have been left out of the schedule (by accident?).
Step12: Print the schedule
Declare functions needed to access the talks in the filled schedule in an orderly way and to print the tables nicely in this notebook.
Step13: Schedule
Step14: Snippets | Python Code:
%%javascript
IPython.OutputArea.auto_scroll_threshold = 99999;
//increase max size of output area
import json
import datetime as dt
from random import choice, randrange, shuffle
from copy import deepcopy
from collections import OrderedDict, defaultdict
from itertools import product
from functools import partial
from operator import itemgetter
from eptools.dict_query import build_query, run_query
from IPython.display import display, HTML
show = lambda s: display(HTML(s))
Explanation: EuroPython program grid
End of explanation
talk_sessions = json.load(open('accepted_talks.json'))
list(talk_sessions.keys())
Explanation: Load the data
End of explanation
#all talks
all_talks = []
for s in talk_sessions.values():
all_talks.extend(list(s.values()))
#the talks worth for scheduling
grid_talks = []
sessions = talk_sessions.copy()
general_grid_sessions = ['talk', 'training']
for session_name in general_grid_sessions:
grid_talks.extend(sessions[session_name].values())
fields2pop = ['abstract_extra',
'abstract_long',
'abstract_short',
'twitters',
'emails',
'status',
'url',
'companies',
'have_tickets',
]
for talk in grid_talks:
for f in fields2pop:
talk.pop(f)
Explanation: Clean up the data
Here I pick from talk_sessions only the talks with the type that I need for scheduling.
I also remove from all these talks the fields that I don't need for scheduling, to keep the printed output clean and short.
End of explanation
tags_field = 'tag_categories'
weekday_names = {0: 'Monday, July 18th',
1: 'Tuesday, July 19th',
2: 'Wednesday, July 20th',
3: 'Thursday, July 21st',
4: 'Friday, July 22nd'
}
room_names = {0: 'A1',
1: 'A3',
2: 'A2',
3: 'Ba1',
4: 'Ba2',
5: 'E' ,
6: 'A4',
}
# this is not being used yet
durations = {'announcements': 15,
'keynote': 45,
'lts': 60,
'lunch': 60,
'am_coffee': 30,
'pm_coffee': 30,
}
# track schedule types, by talk conditions
track_schedule1 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 60)), ),
(('duration', 45), ),
(('duration', 45), ),
(('duration', 45), ),
                   (('duration', 45), ),
#(('admin_type', 'Lightning talk'), ),
]
track_schedule2 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 30)), ),
(('duration', (60, 45)), ),
(('duration', (60, 45)), ),
(('duration', 45), ),
]
track_schedule3 = [(('duration', 45), ),
(('duration', 45), ),
(('duration', (45, 30)), ),
(('duration', (45, 60)), ),
                   (('duration', (45, 30)), ),
(('duration', (60, 30)), ),
(('duration', (45, 30)), ),
]
#tutorials
track_schedule4 = [(('type', 'has Training'), ),
(('type', 'has Training'), ), ]
# these are for reference, but not being taken into account (yet)
frstday_schedule1 = [(('admin_type', 'Opening session')),
(('admin_type', 'Keynote')),
] + track_schedule1
lastday_schedule1 = track_schedule1 + [(('admin_type', 'Closing session')),]
# I removed time from here.
#daily_timegrid = lambda schedule: OrderedDict([(datetime.time(*slot[0]), slot[1]) for slot in schedule])
room1_schedule = track_schedule1 # A1, the google room
room2_schedule = track_schedule2 # A3, pythonanywhere room
room3_schedule = track_schedule3 # A2
room4_schedule = track_schedule3 # Barria1
room5_schedule = track_schedule3 # Barria2
room6_schedule = track_schedule4 # Room E
room7_schedule = track_schedule4 # Room A4
daily_schedule = OrderedDict([(0, room1_schedule),
(1, room2_schedule),
(2, room3_schedule),
(3, room4_schedule),
(4, room5_schedule),
(5, room6_schedule),
(6, room7_schedule)])
# week conditions
default_condition = (('language', 'English'), ('type', 'has Talk'),)
# [day][room] -> talk conditions
dayroom_conditions = {0: {},
1: {4: (('language', 'Spanish'), ), },
2: {3: (('language', 'Basque' ), ), },
3: {},
4: {},
}
Explanation: Declare the week schedule
Declare certain structures to be able to declare and define the conference schedule.
The information here will be used in the dict_query submodule to filter the talks.
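As a small illustrative sketch (the names here are hypothetical, not part of the original notebook), a single slot condition is just a tuple of (field, value) pairs, and the dict_query helpers imported above turn it into a query that can then be tested against one talk dictionary:
slot_condition = (('language', 'English'), ('duration', 45))
query = build_query(slot_condition)
# run_query(some_talk, query) is used as a truth test, as in pick_talk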
End of explanation
# the whole schedule conditions table
def join_conds(condset1, condset2):
d = dict(condset1)
if condset2:
d.update(dict(condset2))
return tuple(d.items())
week_conditions = defaultdict(dict)
for day, room in product(weekday_names, room_names):
track_schedule = daily_schedule[room]
dayroom_conds = dayroom_conditions[day].get(room, default_condition)
week_conditions[day][room] = []
for slot_conds in track_schedule:
week_conditions[day][room].append(join_conds(dayroom_conds, slot_conds))
week_conditions[0][5]
Explanation: Build the schedule conditions table
End of explanation
tags2pop = ['>>> Suggested Track', 'Python', '']
tags = defaultdict(int)
for talk in all_talks:
for t in talk[tags_field]:
if t in tags2pop:
continue
tags[t] += 1
tags_sorted = sorted(tags.items(), key=itemgetter(1), reverse=True)
tags_sorted
Explanation: Group tags and count talks-per-tag
End of explanation
def pick_talk(talks, conditions, trialno=1):
if not talks:
raise IndexError('The list of talks is empty!')
query = build_query(conditions)
for tidx, talk in enumerate(talks):
if run_query(talk, query):
return talks.pop(tidx)
# if no talk fills the query requirements
if trialno == 1:
nuconds = dict(conditions)
del nuconds[tags_field]
nuconds = tuple(nuconds.items())
print('2ND TRY: Looking only for {}.'.format(nuconds))
return pick_talk(talks, nuconds, trialno=2)
if trialno == 2:
oldconds = dict(conditions)
nuconds = {}
if 'duration' in oldconds:
nuconds['duration'] = oldconds['duration']
if 'type' in oldconds:
nuconds['type'] = oldconds['type']
nuconds = tuple(nuconds.items())
print('3RD TRY: Looking only for {}.'.format(nuconds))
return pick_talk(talks, nuconds, trialno=3)
else:
print('FAILED looking for {}.'.format(conditions))
return {}
# collections splitting utilities
import random
def chunks(l, n):
Yield successive `n`-sized chunks from `l`.
for i in range(0, len(l), n):
yield l[i:i+n]
def split(xs, n):
Yield `n` chunks of the sequence `xs`.
ys = list(xs)
random.shuffle(ys)
size = len(ys) // n
leftovers = ys[size*n:]
for c in range(n):
if leftovers:
extra = [ leftovers.pop() ]
else:
extra = []
yield ys[c*size:(c+1)*size] + extra
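# Quick sanity check (extra illustration, not used elsewhere): `split` should yield
# `n` shuffled chunks whose sizes differ by at most one and together cover the input.
print([len(chunk) for chunk in split(range(10), 3)])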
Explanation: Filtering functions
Here I declare the functions used to filter talks using the dict_query-type queries defined above.
End of explanation
from eptools.dict_query import or_condition
talks = grid_talks.copy()
shuffle(talks)
def condition_set(slot_conditions, default_conditions, topic_conditions):
conds = join_conds(default_conditions, slot_conditions)
if 'admin_type' not in dict(conds):
conds = join_conds(conds, topic_conditions)
return conds
# random pick talks
week_slots = defaultdict(dict)
for day in weekday_names:
shuffle(tags_sorted)
tags_chunks = list(split([t[0] for t in tags_sorted], len(room_names)))
rooms_topics = {room: or_condition(tags_field, 'has', tags)
for room, tags in zip(room_names.keys(), tags_chunks)}
for room in room_names:
slots_conds = week_conditions[day][room]
room_topics = rooms_topics[room]
week_slots[day][room] = []
#print(len(talks))
for slot_cond in slots_conds:
conds = condition_set(slot_cond, default_condition, room_topics)
try:
week_slots[day][room].append(pick_talk(talks, conds))
except IndexError:
                print('No talks left for {}.'.format(conds))
except:
raise
Explanation: Distribute the talks along the schedule
End of explanation
q = build_query((('type', 'has Training'),))
run_query(talks[0], q)
if talks:
show('<h1>Not scheduled talks</h1>')
for talk in talks:
print(talk)
Explanation: Remaining talks
Print the remaining talks that have been left out of the schedule (by accident?).
End of explanation
class ListTable(list):
Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook.
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
def tabulate(time_list, header=''):
table = ListTable()
table.append(header)
for slot in time_list:
table.append([slot] + time_list[slot])
return table
def get_room_schedule(weekly_schedule, room_num, field='title'):
    # weekly_schedule is the week_slots structure: [day][room] -> list of talk dicts
    n_days = len(weekday_names)
    n_slots = len(daily_schedule[room_num])
    daily_slots = []
    for slot in range(n_slots):
        talks = [weekly_schedule[d][room_num][slot].get(field, '-') for d in range(n_days)]
        daily_slots.append((slot, talks))
    room_schedule = OrderedDict(daily_slots)
    return room_schedule
from itertools import zip_longest
def get_day_schedule(weekly_schedule, day_num, field='title'):
day_schedule = weekly_schedule[day_num]
nslots = max([len(slots) for room, slots in day_schedule.items()])
room_slots = []
for room, talk_slots in day_schedule.items():
room_talks = [talk.get(field, '-') for slot, talk in enumerate(talk_slots)]
room_slots.append(room_talks)
schedule = OrderedDict(list(enumerate(list(map(list, zip_longest(*room_slots))))))
return schedule
Explanation: Print the schedule
Declare functions needed to access the talks in the filled schedule in an orderly way and to print the tables nicely in this notebook.
End of explanation
sched_field = 'title'
for day, _ in enumerate(weekday_names):
show('<h3>{}</h3>'.format(weekday_names[day]))
show(tabulate(get_day_schedule(week_slots, day),
header=['Slot'] + list(room_names.values()))._repr_html_())
Explanation: Schedule
End of explanation
get_room_schedule(week_slots, 0)  # room 0 is 'A1'
## schedules by room
# tabulate(get_room_schedule(weekly_schedule, 'A1'), header=['A1'] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, 'A2'), header=['A2'] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, 'A3'), header=['A3'] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, 'Ba1'), header=['Barria 1'] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, 'Ba2'), header=['Barria 2'] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, 'E'), header=[room_names[6]] + weekday_names)
# tabulate(get_room_schedule(weekly_schedule, room_names[7]]), header=[room_names[7]]] + weekday_names)
def find_talk(talk_title):
return [talk for talk in all_talks if talk_title in talk['title']]
find_talk("So, what's all the fuss about Docker?")
Explanation: Snippets
End of explanation |
11,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gate Zoo
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Cirq comes with many gates that are standard across quantum computing. This notebook serves as a reference sheet for these gates.
Single Qubit Gates
Gate Constants
Cirq defines constants which are gate instances for particularly important single qubit gates.
Step2: Traditional Pauli Rotation Gates
Cirq defines traditional single qubit rotations that are rotations in radians about different Pauli directions.
Step3: Pauli PowGates
If you think of the cirq.Z gate as phasing the state $|1\rangle$ by $-1$, then you might think that the square root of this gate phases the state $|1\rangle$ by $i=\sqrt{-1}$. The XPowGate, YPowGate and ZPowGates all act in this manner, phasing the state corresponding to their $-1$ eigenvalue by a prescribed amount. This ends up being the same as the Rx, Ry, and Rz up to a global phase.
Step4: More Single Qubit Gates
Many quantum computing implementations use qubits whose energy eigenstates are the computational basis states. In these cases it is often useful to move cirq.ZPowGate's through other single qubit gates, "phasing" the other gates. For these scenarios, the following phased gates are useful.
Step5: Two Qubit Gates
Gate Constants
Cirq defines convenient constants for common two qubit gates.
Step6: Parity Gates
If $P$ is a non-identity Pauli matrix, then it has eigenvalues $\pm 1$. $P \otimes P$ similarly has eigenvalues $\pm 1$ which are the product of the eigenvalues of the single $P$ eigenvalues. In this sense, $P \otimes P$ has an eigenvalue which encodes the parity of the eigenvalues of the two qubits. If you think of $P \otimes P$ as phasing its $-1$ eigenvectors by $-1$, then you could consider $(P \otimes P)^{\frac{1}{2}}$ as the gate that phases the $-1$ eigenvectors by $\sqrt{-1} =i$. The Parity gates are exactly these gates for the three different non-identity Paulis.
Step7: There are also constants that one can use to define the parity gates via exponentiating them.
Step8: Fermionic Gates
If we think of $|1\rangle$ as an excitation, then the gates that preserve the number of excitations are the fermionic gates. There are two implementations, with differing phase conventions.
Step9: Two qubit PowGates
Just as cirq.XPowGate represents a powering of cirq.X, our two qubit gate constants also have corresponding "Pow" versions.
Step10: Three Qubit Gates
Gate Constants
Cirq provides constants for common three qubit gates.
Step11: Three Qubit Pow Gates
Corresponding to some of the above gate constants are the corresponding PowGates.
Step12: N Qubit Gates
Do Nothing Gates
Sometimes you just want a gate to represent doing nothing.
Step13: Measurement Gates
Measurement gates are gates that represent a measurement and can operate on any number of qubits.
Step14: Matrix Gates
If one has a specific unitary matrix in mind, then one can construct it using matrix gates, or, if the unitary is diagonal, the diagonal gates.
Step15: Pauli String Gates
Pauli strings are expressions like "XXZ" representing the Pauli operator X operating on the first two qubits, and Z on the last qubit, along with a numeric (or symbolic) coefficient. When the coefficient is a unit complex number, then this is a valid unitary gate. Similarly one can construct gates which phase the $\pm 1$ eigenvalues of such a Pauli string.
Step16: Algorithm Based Gates
It is useful to define composite gates which correspond to algorithmic primitives, e.g. one can think of the quantum Fourier transform as a single unitary gate.
Step17: Classical Permutation Gates
Sometimes you want to represent shuffling of qubits. | Python Code:
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet --pre cirq
print("installed cirq.")
import IPython.display as ipd
import cirq
import inspect
def display_gates(*gates):
for gate_name in gates:
ipd.display(ipd.Markdown("---"))
gate = getattr(cirq, gate_name)
ipd.display(ipd.Markdown(f"#### cirq.{gate_name}"))
ipd.display(ipd.Markdown(inspect.cleandoc(gate.__doc__ or "")))
else:
ipd.display(ipd.Markdown("---"))
Explanation: Gate Zoo
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/gatezoo.ipynbb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/gatezoo.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Setup
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre
End of explanation
display_gates("X", "Y", "Z", "H", "S", "T")
Explanation: Cirq comes with many gates that are standard across quantum computing. This notebook serves as a reference sheet for these gates.
Single Qubit Gates
Gate Constants
Cirq defines constants which are gate instances for particular important single qubit gates.
End of explanation
display_gates("Rx", "Ry", "Rz")
Explanation: Traditional Pauli Rotation Gates
Cirq defines the traditional single qubit rotations, parameterized by an angle in radians, about the different Pauli directions.
End of explanation
display_gates("XPowGate", "YPowGate", "ZPowGate")
Explanation: Pauli PowGates
If you think of the cirq.Z gate as phasing the state $|1\rangle$ by $-1$, then you might think that the square root of this gate phases the state $|1\rangle$ by $i=\sqrt{-1}$. The XPowGate, YPowGate and ZPowGates all act in this manner, phasing the state corresponding to their $-1$ eigenvalue by a prescribed amount. This ends up being the same as the Rx, Ry, and Rz up to a global phase.
End of explanation
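# Added illustration (not part of the original notebook): numerically check the claim
# above that cirq.Z**0.5 phases the |1> state by i, i.e. that it matches cirq.S.
import numpy as np
print(cirq.unitary(cirq.Z**0.5))
print(np.allclose(cirq.unitary(cirq.Z**0.5), cirq.unitary(cirq.S)))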
display_gates("PhasedXPowGate", "PhasedXZGate", "HPowGate")
Explanation: More Single Qubit Gates
Many quantum computing implementations use qubits whose energy eigenstates are the computational basis states. In these cases it is often useful to move cirq.ZPowGates through other single qubit gates, "phasing" the other gates. For these scenarios, the following phased gates are useful.
End of explanation
display_gates("CX", "CZ", "SWAP", "ISWAP", "SQRT_ISWAP", "SQRT_ISWAP_INV")
Explanation: Two Qubit Gates
Gate Constants
Cirq defines convenient constants for common two qubit gates.
End of explanation
display_gates("XXPowGate", "YYPowGate", "ZZPowGate")
Explanation: Parity Gates
If $P$ is a non-identity Pauli matrix, then it has eigenvalues $\pm 1$. $P \otimes P$ similarly has eigenvalues $\pm 1$ which are the product of the eigenvalues of the single $P$ eigenvalues. In this sense, $P \otimes P$ has an eigenvalue which encodes the parity of the eigenvalues of the two qubits. If you think of $P \otimes P$ as phasing its $-1$ eigenvectors by $-1$, then you could consider $(P \otimes P)^{\frac{1}{2}}$ as the gate that phases the $-1$ eigenvectors by $\sqrt{-1} =i$. The Parity gates are exactly these gates for the three different non-identity Paulis.
End of explanation
display_gates("XX", "YY", "ZZ")
Explanation: There are also constants that one can use to define the parity gates via exponentiating them.
End of explanation
display_gates("FSimGate", "PhasedFSimGate")
Explanation: Fermionic Gates
If we think of $|1\rangle$ as an excitation, then the gates that preserve the number of excitations are the fermionic gates. There are two implementations, with differing phase conventions.
End of explanation
display_gates("SwapPowGate", "ISwapPowGate", "CZPowGate", "CXPowGate", "PhasedISwapPowGate")
Explanation: Two qubit PowGates
Just as cirq.XPowGate represents a powering of cirq.X, our two qubit gate constants also have corresponding "Pow" versions.
End of explanation
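# Added illustration (not part of the original notebook): the two qubit constants relate
# to their Pow versions by exponentiation, e.g. cirq.SQRT_ISWAP equals cirq.ISWAP**0.5.
import numpy as np
print(np.allclose(cirq.unitary(cirq.ISWAP**0.5), cirq.unitary(cirq.SQRT_ISWAP)))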
display_gates("CCX", "CCZ", "CSWAP")
Explanation: Three Qubit Gates
Gate Constants
Cirq provides constants for common three qubit gates.
End of explanation
display_gates("CCXPowGate", "CCZPowGate")
Explanation: Three Qubit Pow Gates
Corresponding to some of the above gate constants are the corresponding PowGates.
End of explanation
display_gates("IdentityGate", "WaitGate")
Explanation: N Qubit Gates
Do Nothing Gates
Sometimes you just want a gate to represent doing nothing.
End of explanation
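# Added illustration (not part of the original notebook): the identity gate on n qubits
# is simply the 2**n x 2**n identity matrix.
import numpy as np
print(np.allclose(cirq.unitary(cirq.IdentityGate(3)), np.eye(8)))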
display_gates("MeasurementGate")
Explanation: Measurement Gates
Measurement gates are gates that represent a measurement and can operate on any number of qubits.
End of explanation
display_gates("MatrixGate", "DiagonalGate", "TwoQubitDiagonalGate", "ThreeQubitDiagonalGate")
Explanation: Matrix Gates
If one has a specific unitary matrix in mind, then one can construct it using matrix gates, or, if the unitary is diagonal, the diagonal gates.
End of explanation
display_gates("DensePauliString", "MutableDensePauliString", "PauliStringPhasorGate")
Explanation: Pauli String Gates
Pauli strings are expressions like "XXZ" representing the Pauli operator X operating on the first two qubits, and Z on the last qubit, along with a numeric (or symbolic) coefficient. When the coefficient is a unit complex number, then this is a valid unitary gate. Similarly one can construct gates which phases the $\pm 1$ eigenvalues of such a Pauli string.
End of explanation
display_gates("BooleanHamiltonianGate", "QuantumFourierTransformGate", "PhaseGradientGate")
Explanation: Algorithm Based Gates
It is useful to define composite gates which correspond to algorithmic primitives, i.e. one can think of the fourier transform as a single unitary gate.
End of explanation
display_gates("QubitPermutationGate")
Explanation: Classical Permutation Gates
Sometimes you want to represent shuffling of qubits.
End of explanation |
11,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
quant-econ Solutions
Step1: Setup
To recall, we consider the following problem
Step2: Here we want to solve a finite state version of the continuous state model above.
We discretize the state space into a grid of size grid_size=1500,
from $10^{-6}$ to grid_max=2.
The grid size in the lecture is 150,
where the value functions are approximated by linear interpolation,
while we choose a finer grid since we fill the gaps with discrete points.
Step3: We choose the action to be the amount of capital to save for the next period
(the state is the capital stock at the beginning of the period).
Thus the state indices and the action indices are both 0, ..., grid_size-1.
Action (indexed by) a is feasible at state (indexed by) s if and only if
grid[a] < f(grid[s])
(zero consumption is not allowed because of the log utility).
Thus the Bellman equation is
Step4: Reward vector R (of length L)
Step5: (Degenerate) transition probability matrix Q (of shape (L, grid_size)),
where we choose the scipy.sparse.lil_matrix
format,
while any format will do
(internally it will be converted to the csr format)
Step6: (If you are familiar with the data structure of
scipy.sparse.csr_matrix,
the following is the most efficient way to create the Q matrix in the current case.)
Step7: Discrete growth model
Step8: Notes
Here we intensively vectorized the operations on arrays to simplify the code.
As noted,
however, vectorization is memory consumptive,
and it can be prohibitively so for grids with large size.
Solving the model
Solve the dynamic optimization problem
Step9: Note that sigma contains the indices of the optimal capital stocks
to save for the next period.
The following translates sigma to the corresponding consumption vector.
Step10: Let us compare the solution of the discrete model with that of the original continuous model.
Step11: The outcomes appear very close to those of the continuous version.
Except for the "boundary" point, the value functions are very close
Step12: The optimal consumption functions are close as well
Step13: In fact, the optimal consumption obtained in the discrete version is not really monotone,
but the decrements are quite small
Step14: The value function is monotone
Step15: Comparison of the solution methods
Let us solve the problem by the other two methods.
Value iteration
Step16: Modified policy iteration
Step17: Speed comparison
Step18: As is often the case, policy iteration and modified policy iteration are much faster
than value iteration.
Replication of the figures
Using DiscreteDP we replicate the figures shown in the lecture.
Convergence of value iteration
Let us first visualize the convergence of the value iteration algorithm as in the lecture,
where we use ddp.bellman_operator implemented as a method of DiscreteDP.
Step19: We next plot the consumption policies along the value iteration.
Step20: Dynamics of the capital stock
Finally, let us work on Exercise 2,
where we plot the trajectories of the capital stock for three different discount factors,
$0.9$, $0.94$, and $0.98$, with initial condition $k_0 = 0.1$. | Python Code:
%matplotlib inline
from __future__ import division, print_function
import numpy as np
import scipy.sparse as sparse
import matplotlib.pyplot as plt
from quantecon import compute_fixed_point
from quantecon.markov import DiscreteDP
Explanation: quant-econ Solutions: Discrete Dynamic Programming
Solutions for http://quant-econ.net/py/discrete_dp.html
Prepared by Daisuke Oyama, Faculty of Economics, University of Tokyo
The exercise is to replicate numerically the analytical solution for the benchmark model in this lecture of quant-econ, using the DiscreteDP class.
End of explanation
alpha = 0.65
f = lambda k: k**alpha
u = np.log
beta = 0.95
Explanation: Setup
To recall, we consider the following problem:
$$
\begin{aligned}
&\max_{\{c_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t) \\
&\ \text{ s.t. }\ k_{t+1} = f(k_t) - c_t,
\quad \text{$k_0$: given},
\end{aligned}
$$
where
$k_t$ and $c_t$ are the capital stock and consumption at time $t$, respectively,
$u$ is the utility function,
$f$ is the production function, and
$\beta \in (0, 1)$ is the discount factor.
As in the lecture,
we let $f(k) = k^{\alpha}$ with $\alpha = 0.65$, $u(c) = \log c$, and $\beta = 0.95$.
End of explanation
grid_max = 2
grid_size = 1500
grid = np.linspace(1e-6, grid_max, grid_size)
print(grid)
Explanation: Here we want to solve a finite state version of the continuous state model above.
We discretize the state space into a grid of size grid_size=1500,
from $10^{-6}$ to grid_max=2.
The grid size in the lecture is 150,
where the value functions are approximated by linear interpolation,
while we choose a finer grid since we fill the gaps with discrete points.
End of explanation
# Consumption matrix, with nonpositive consumption included
C = f(grid).reshape(grid_size, 1) - grid.reshape(1, grid_size)
# State-action indices
s_indices, a_indices = np.where(C > 0)
# Number of state-action pairs
L = len(s_indices)
print(L)
print(s_indices)
print(a_indices)
Explanation: We choose the action to be the amount of capital to save for the next period
(the state is the capital stock at the beginning of the period).
Thus the state indices and the action indices are both 0, ..., grid_size-1.
Action (indexed by) a is feasible at state (indexed by) s if and only if
grid[a] < f(grid[s])
(zero consumption is not allowed because of the log utility).
Thus the Bellman equation is:
$$
v(k) = \max_{0 < k' < f(k)} u(f(k) - k') + \beta v(k'),
$$
where $k'$ is the capital stock in the next period.
The transition probability array Q will be highly sparse
(in fact it is degenerate as the model is deterministic),
so we formulate the problem with state-action pairs, to represent Q in
scipy sparse matrix format.
We first construct indices for state-action pairs:
End of explanation
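# Added sanity check (not in the original solution): every selected state-action pair
# must leave strictly positive consumption, i.e. grid[a] < f(grid[s]).
assert np.all(grid[a_indices] < f(grid[s_indices]))
print("Number of feasible state-action pairs:", L)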
R = u(C[s_indices, a_indices])
Explanation: Reward vector R (of length L):
End of explanation
Q = sparse.lil_matrix((L, grid_size))
Q[np.arange(L), a_indices] = 1
Explanation: (Degenerate) transition probability matrix Q (of shape (L, grid_size)),
where we choose the scipy.sparse.lil_matrix
format,
while any format will do
(internally it will be converted to the csr format):
End of explanation
# data = np.ones(L)
# indptr = np.arange(L+1)
# Q = sparse.csr_matrix((data, a_indices, indptr), shape=(L, grid_size))
Explanation: (If you are familiar with the data structure of
scipy.sparse.csr_matrix,
the following is the most efficient way to create the Q matrix in the current case.)
End of explanation
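# Added sanity check (not in the original solution): the commented-out CSR construction
# above yields exactly the same matrix as the lil_matrix approach.
data_check = np.ones(L)
indptr_check = np.arange(L+1)
Q_csr = sparse.csr_matrix((data_check, a_indices, indptr_check), shape=(L, grid_size))
print((Q_csr != sparse.csr_matrix(Q)).nnz == 0)  # True if the two encodings agree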
ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)
Explanation: Discrete growth model:
End of explanation
res = ddp.solve(method='policy_iteration')
v, sigma, num_iter = res.v, res.sigma, res.num_iter
num_iter
Explanation: Notes
Here we intensively vectorized the operations on arrays to simplify the code.
As noted,
however, vectorization is memory consumptive,
and it can be prohibitively so for grids with large size.
Solving the model
Solve the dynamic optimization problem:
End of explanation
# Optimal consumption in the discrete version
c = f(grid) - grid[sigma]
# Exact solution of the continuous version
ab = alpha * beta
c1 = (np.log(1 - ab) + np.log(ab) * ab / (1 - ab)) / (1 - beta)
c2 = alpha / (1 - ab)
def v_star(k):
return c1 + c2 * np.log(k)
def c_star(k):
return (1 - ab) * k**alpha
Explanation: Note that sigma contains the indices of the optimal capital stocks
to save for the next period.
The following translates sigma to the corresponding consumption vector.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
ax[0].set_ylim(-40, -32)
ax[0].set_xlim(grid[0], grid[-1])
ax[1].set_xlim(grid[0], grid[-1])
lb0 = 'discrete value function'
ax[0].plot(grid, v, lw=2, alpha=0.6, label=lb0)
lb0 = 'continuous value function'
ax[0].plot(grid, v_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb0)
ax[0].legend(loc='upper left')
lb1 = 'discrete optimal consumption'
ax[1].plot(grid, c, 'b-', lw=2, alpha=0.6, label=lb1)
lb1 = 'continuous optimal consumption'
ax[1].plot(grid, c_star(grid), 'k-', lw=1.5, alpha=0.8, label=lb1)
ax[1].legend(loc='upper left')
plt.show()
Explanation: Let us compare the solution of the discrete model with that of the original continuous model.
End of explanation
np.abs(v - v_star(grid)).max()
np.abs(v - v_star(grid))[1:].max()
Explanation: The outcomes appear very close to those of the continuous version.
Except for the "boundary" point, the value functions are very close:
End of explanation
np.abs(c - c_star(grid)).max()
Explanation: The optimal consumption functions are close as well:
End of explanation
diff = np.diff(c)
(diff >= 0).all()
dec_ind = np.where(diff < 0)[0]
len(dec_ind)
np.abs(diff[dec_ind]).max()
Explanation: In fact, the optimal consumption obtained in the discrete version is not really monotone,
but the decrements are quite small:
End of explanation
(np.diff(v) > 0).all()
Explanation: The value function is monotone:
End of explanation
ddp.epsilon = 1e-4
ddp.max_iter = 500
res1 = ddp.solve(method='value_iteration')
res1.num_iter
np.array_equal(sigma, res1.sigma)
Explanation: Comparison of the solution methods
Let us solve the problem by the other two methods.
Value iteration
End of explanation
res2 = ddp.solve(method='modified_policy_iteration')
res2.num_iter
np.array_equal(sigma, res2.sigma)
Explanation: Modified policy iteration
End of explanation
%timeit ddp.solve(method='value_iteration')
%timeit ddp.solve(method='policy_iteration')
%timeit ddp.solve(method='modified_policy_iteration')
Explanation: Speed comparison
End of explanation
w = 5 * np.log(grid) - 25 # Initial condition
n = 35
fig, ax = plt.subplots(figsize=(8,5))
ax.set_ylim(-40, -20)
ax.set_xlim(np.min(grid), np.max(grid))
lb = 'initial condition'
ax.plot(grid, w, color=plt.cm.jet(0), lw=2, alpha=0.6, label=lb)
for i in range(n):
w = ddp.bellman_operator(w)
ax.plot(grid, w, color=plt.cm.jet(i / n), lw=2, alpha=0.6)
lb = 'true value function'
ax.plot(grid, v_star(grid), 'k-', lw=2, alpha=0.8, label=lb)
ax.legend(loc='upper left')
plt.show()
Explanation: As is often the case, policy iteration and modified policy iteration are much faster
than value iteration.
Replication of the figures
Using DiscreteDP we replicate the figures shown in the lecture.
Convergence of value iteration
Let us first visualize the convergence of the value iteration algorithm as in the lecture,
where we use ddp.bellman_operator implemented as a method of DiscreteDP.
End of explanation
w = 5 * u(grid) - 25 # Initial condition
fig, ax = plt.subplots(3, 1, figsize=(8, 10))
true_c = c_star(grid)
for i, n in enumerate((2, 4, 6)):
ax[i].set_ylim(0, 1)
ax[i].set_xlim(0, 2)
ax[i].set_yticks((0, 1))
ax[i].set_xticks((0, 2))
w = 5 * u(grid) - 25 # Initial condition
compute_fixed_point(ddp.bellman_operator, w, max_iter=n, print_skip=1)
sigma = ddp.compute_greedy(w) # Policy indices
c_policy = f(grid) - grid[sigma]
ax[i].plot(grid, c_policy, 'b-', lw=2, alpha=0.8,
label='approximate optimal consumption policy')
ax[i].plot(grid, true_c, 'k-', lw=2, alpha=0.8,
label='true optimal consumption policy')
ax[i].legend(loc='upper left')
ax[i].set_title('{} value function iterations'.format(n))
Explanation: We next plot the consumption policies along the value iteration.
End of explanation
discount_factors = (0.9, 0.94, 0.98)
k_init = 0.1
# Search for the index corresponding to k_init
k_init_ind = np.searchsorted(grid, k_init)
sample_size = 25
fig, ax = plt.subplots(figsize=(8,5))
ax.set_xlabel("time")
ax.set_ylabel("capital")
ax.set_ylim(0.10, 0.30)
# Create a new instance, not to modify the one used above
ddp0 = DiscreteDP(R, Q, beta, s_indices, a_indices)
for beta in discount_factors:
ddp0.beta = beta
res0 = ddp0.solve()
k_path_ind = res0.mc.simulate(init=k_init_ind, ts_length=sample_size)
k_path = grid[k_path_ind]
ax.plot(k_path, 'o-', lw=2, alpha=0.75, label=r'$\beta = {}$'.format(beta))
ax.legend(loc='lower right')
plt.show()
Explanation: Dynamics of the capital stock
Finally, let us work on Exercise 2,
where we plot the trajectories of the capital stock for three different discount factors,
$0.9$, $0.94$, and $0.98$, with initial condition $k_0 = 0.1$.
End of explanation |
11,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4
Step1: Part 1a. Simple Line Fitting
Step2: Try a range of slopes and intercepts, and calculate $\chi^2$ values for each set.
Step3: What is chi2? What happens if you print it? What does the minimum mean? Where is the minimum? What happens if you plot it...
chi2 is an array of floating point values. It is instantiated as an array of zeroes; it is then filled using parameter values taken from the elements of the m and b arrays, which are used to calculate the $\,\chi^2$ value resulting from each choice of the two parameters.
The minimum is the value of $\,\chi^2$ which is most likely to fit our data. The m and b that result are thus the parameter values of the line which is our best fit.
Step4: Using the tools above and similar, find the slope and intercept that minimize $\chi^2$. Answer each of the following questions
Step5: Consult the internet to "reverse engineer" what is going on in the example here. Carefully explain all the outputs of the cell above.
The first array consists of the fit parameters that were calculated from x and y. Since the function was fed a polynomial degree argument of 1, there are two parameters returned.
Because cov was set to true, the other output is the covariance matrix for the parameters. The values on the main diagonal are the variance estimates for each parameter.
$$\left| \begin{array}{cc}
0.0004177 & -0.00167676 \\
-0.00167676 & 0.00822924 \end{array} \right|$$
Part 2a. Fitting Other Curves
Now repeat the exercise by fitting a Gaussian distribution to the data below, perhaps representing results from one of our previous labs.
Step6: $\chi^2$ is minimized at $\sigma=2.42$ and $\bar{x}=99.92$
The uncertainty in the mean is given by
Step7: Part 2b
Step8: Consult the internet to "reverse engineer" what is going on in the example here. Put comments in the code (using #) to explain what is happening in most of the lines. You should have enough comments such that you could use this to fit your own data with your own function.
Carefully explain what the printed outputs mean. Hint | Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Lab 4: Curve Fitting
Jacob Skinner
End of explanation
x=np.array([1.1,2.2,3.1,4.0,5.0,5.8,6.9])
y=np.array([2.0,3.0,4.0,5.0,6.0,7.0,8.0])
dely=np.array([0.1,0.1,0.1,0.1,0.1,0.1,0.1])
plt.plot(x,y,'ro')
plt.show()
Explanation: Part 1a. Simple Line Fitting
End of explanation
m=np.arange(-1,4.0,0.1)
b=np.arange(-5.0,5.0,0.1)
chi2=np.zeros([m.size,b.size])
for i in range(m.size):
for j in range(b.size):
chi2[i,j]=sum(((y-m[i]*x-b[j])**2)/dely**2)
Explanation: Try a range of slopes and intercepts, and calculate $\chi^2$ values for each set.
End of explanation
np.unravel_index(np.argmin(chi2), (m.size,b.size)) # What does this line do?
print("(21, 56)","\nChi^2 =",chi2[21][56], "\nm[21] =",m[21], "\nb[56] =",b[56])
plt.plot(x,y,'ro')
xNew = np.linspace(0,8, num=9)
plt.plot(xNew,xNew*m[21]+b[56])
plt.show()
#here we see that what we've expected to be the best fit doesn't look obviously wrong at a glance; encouraging.
plt.plot(b,chi2[21,:])
plt.ylim(0,100)
plt.xlim(-0.1,1.1)
plt.xlabel("b")
plt.ylabel("Chi^2")
plt.show()
#this plot shows the changing value of chi2 if m is held constant at 21, and b is varied.
#contour code courtesy of your demo
plt.imshow(chi2,vmin=0,vmax=2500,cmap='rainbow',aspect='auto')
plt.colorbar()
plt.contour(chi2,levels=np.arange(0,1500,200))
plt.xlim(46,66)
plt.xlabel("index of b")
plt.ylim(15,27)
plt.ylabel("index of m")
plt.grid(True)
plt.show()
#we have a visual view of how chi2 varies in the parameter space
#the pixellation does make it a bit bad.
Explanation: What is chi2? What happens if you print it? What does the minimum mean? Where is the minimum? What happens if you plot it...
chi2 is an array of floating point values. It is instantiated as an array of zeroes; it is then filled using parameter values taken from the elements of the m and b arrays, which are used to calculate the $\,\chi^2$ value resulting from each choice of the two parameters.
The minimum is the value of $\,\chi^2$ which is most likely to fit our data. The m and b that result are thus the parameter values of the line which is our best fit.
End of explanation
np.polyfit(x, y, 1, cov=True)
Explanation: Using the tools above and similar, find the slope and intercept that minimize $\chi^2$. Answer each of the following questions:
What are the $m$ and $b$ values that minimize $\chi^2$?
What uncertainties do you estimate for $m$ and $b$? Explain your reasoning.
Are the values of $m$ and $b$ related? Carefully explain what this means.
Explore! All "what if" explorations will receive additional credit.
When $\,\chi^2$ is minimized, $m=1.1$ and $b=0.6$. For these parameter values, $\,\chi^2=8.3$
To find the Uncertainty values in m and b I turn to chapter 8 of our textbook. The section on least squares fitting lays out some helpful formulae for determining the uncertainties in our least squares fit parameters.
$$\sigma_m=\sigma_y\sqrt{\frac{N}{\Delta}}$$
<center>and</center>
$$\sigma_b=\sigma_y\sqrt{\frac{\sum{x^2}}{\Delta}}$$
<center>where</center>
$$\sigma_y=\sqrt{\frac{1}{N-2}\sum_{i=1}^{N} {(y_i-b-mx_i)^2}}$$
<center>and</center>
$$\Delta=N\sum{x^2}-(\sum{x})^2$$
So when we make the substitutions, the equations become
$$\sigma_m=\sqrt{N}\sqrt{\frac{1}{N-2}\sum_{i=1}^{N} {(y_i-b-mx_i)^2}}\frac{1}{\sqrt{N\sum{x^2}-(\sum{x})^2}}$$
<center>and</center>
$$\sigma_b=\sqrt{\sum{x^2}}\sqrt{\frac{1}{N-2}\sum_{i=1}^{N} {(y_i-b-mx_i)^2}}\frac{1}{\sqrt{N\sum{x^2}-(\sum{x})^2}}$$
Now to perform the calculation.
$$\frac{1}{\sqrt{\Delta}}=0.07543$$
$$\sqrt{\frac{1}{N-2}\sum_{i=1}^{N} {(y_i-b-mx_i)^2}}=0.12892$$
$$\sigma_m=\sqrt{7}\bullet0.12892\bullet0.07543=0.026$$
$$\sigma_b=\sqrt{137.91}\bullet0.12892\bullet0.07543=0.114$$
The values of m and b are indeed related. There is a negative covariance visible in the contour plot of the parameter space. i.e. We can keep $\,\chi^2$ low if we raise one parameter and lower the other, or vice versa. Raising or lowering each parameter together is the fastest way to raise $\,\chi^2$.
It may be possible to demonstrate covariance by pointing to the "trough" that the long axis of the contour shows. The line it makes in the $mb$ plane is the line of minimized values for $\,\chi^2$, this line corresponds to the vertices of the quadratic curves that appear in the $m-\chi^2$ and $b-\chi^2$ planes.
If I remember my multivariable calculus correctly, $\,\chi^2$ is a function of $m$ and $b$.
What we see is that there exists a line of coordinates, $m(b)$, for which:
$$\frac{\partial\chi^2}{\partial m}=\frac{\partial\chi^2}{\partial b}=0$$
The existence of this line is direct evidence of the covariance of m and b.
Part 1b. Using Built-in Functions and Interpreting Results
End of explanation
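# Added numerical check (not in the original lab): evaluate the textbook formulas for
# sigma_m and sigma_b above and compare them with the square roots of the diagonal of
# the covariance matrix returned by np.polyfit.
N = x.size
m_fit, b_fit = np.polyfit(x, y, 1)
Delta = N * np.sum(x**2) - np.sum(x)**2
sigma_y = np.sqrt(np.sum((y - b_fit - m_fit * x)**2) / (N - 2))
print("sigma_m =", sigma_y * np.sqrt(N / Delta))
print("sigma_b =", sigma_y * np.sqrt(np.sum(x**2) / Delta))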
x=np.array([95.,96.,97.,98.,99.,100.,101.,102.,103.])
yraw=np.array([1.,2.,3.,4.,5.,6.,5.,4.,3.])
y=yraw/(np.sum(yraw))
# x=np.random.normal(loc=100.,scale=10.,size=1000)
# hist, bin_edges = np.histogram(x, density=True)
# bin_centers = (bin_edges[:-1] + bin_edges[1:])/2
plt.plot(x,y)
plt.show()
def gauss(x, *p):
mu, sigma = p
return (1/(sigma*2.506))*np.exp(-(x-mu)**2/(2.*sigma**2))
mean=np.arange(98.,102.,0.01)
sig=np.arange(1.0,6.0,0.01)
chi2=np.zeros([mean.size,sig.size])
for i in range(mean.size):
for j in range(sig.size):
chi2[i,j]=sum(((y-(np.exp(-((x-mean[i])**2)/(2.0*(sig[j])**2)))/(sig[j]*np.sqrt(2.0*np.pi)))**2)/1.0**2)
meanfit,sigfit=np.unravel_index(np.argmin(chi2), (mean.size,sig.size))
print(mean[meanfit])
print(sig[sigfit])
print(chi2[meanfit][sigfit])
Explanation: Consult the internet to "reverse engineer" what is going on in the example here. Carefully explain all the outputs of the cell above.
The first array consists of the fit parameters that were calculated from x and y. Since the function was fed a polynomial degree argument of 1, there are two parameters returned.
Because cov was set to true, the other output is the covariance matrix for the parameters. The values on the main diagonal are the variance estimates for each parameter.
$$\left| \begin{array}{cc}
0.0004177 & -0.00167676 \\
-0.00167676 & 0.00822924 \end{array} \right|$$
Part 2a. Fitting Other Curves
Now repeat the exercise by fitting a Gaussian distribution to the data below, perhaps representing results from one of our previous labs.
End of explanation
plt.plot(x,y)
plt.plot(x,(np.exp(-((x-mean[meanfit])**2)/(2.0*(sig[sigfit])**2)))/(sig[sigfit]*np.sqrt(2.0*np.pi)))
plt.show()
plt.imshow(chi2,vmax=.2)
plt.colorbar()
plt.contour(chi2,levels=np.arange(0,.03,.005))
plt.show()
Explanation: $\chi^2$ is minimized at $\sigma=2.42$ and $\bar{x}=99.92$
The uncertainty in the mean is given by: $$\delta \bar{x}=\frac{1}{\sqrt{N}}=\frac{1}{\sqrt{9}}=0.333$$
I've had a hard time trying to find the uncertainty in sigma, unfortunately. Using propagation of error blows it up since each of the nine terms must have their own partial derivatives calculated.
There is no obvious covariance between the mean and the standard deviation, As we can see by the argument in the previous section. There is no single line for which $\frac{\partial^2\chi^2}{\partial m^2}=\frac{\partial^2\chi^2}{\partial b^2}=0$.
End of explanation
from scipy.optimize import curve_fit #import necessary libraries
x=np.random.normal(loc=100.,scale=10.,size=1000)
#create a random array of 1000 normally distributed entries
hist, bin_edges = np.histogram(x, density=True)#create a histogram to plot
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2
# Define model function to be used to fit to the data:
def gauss(x, *p):
mu, sigma = p
return (1/(sigma*2.506))*np.exp(-(x-mu)**2/(2.*sigma**2))
# Choose initial guess:
p0 = [130., 20.]
coeff, var_matrix = curve_fit(gauss, bin_centres, hist, p0=p0)
#the above line instantiates variables returned by the gauss function, a coefficient array and
#a matrix of 'y values' lined up with the bin centers.
hist_fit = gauss(bin_centres, *coeff) #creates the y values for the fit, in a form ready to plot
plt.hist(x,normed=True, label='Test data')
plt.plot(bin_centres, hist_fit, 'ro',label='Fitted data')
plt.legend()
plt.show()
#the above code is plotting code we've seen several times now
print("Coefficients output =", coeff)
print("Variance matrix output =", var_matrix)
Explanation: Part 2b: Using Built-in Tools and Interpreting Results
End of explanation
print(np.sqrt(np.diag(var_matrix)))
Explanation: Consult the internet to "reverse engineer" what is going on in the example here. Put comments in the code (using #) to explain what is happening in most of the lines. You should have enough comments such that you could use this to fit your own data with your own function.
Carefully explain what the printed outputs mean. Hint: What is the significance of the next cell's output?
The "Coefficients output" is the parameters of the gaussian fit, first the mean, then the standard deviation. The next output is the variance matrix, showing the best estimates for the variance coefficients of each parameter.
End of explanation |
11,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using isomorphism doesn't help
Step1: Since the Property map doesn't map exactly to the node id in the main graph, I have to use the induced subgraphs. | Python Code:
re = isomorphism(re.get_graph(), m3_5.gt_motif, isomap=True)
re[1][2]
re = m3_5_r[2][0][0]
graph_draw(re.get_graph(), output_size=(100,100))
re.get_graph().get_edges()
re[0]
re[1]
re[2]
g.get_out_edges(216)
re.get_graph().get_edges()
re[0], re[1], re[2]
for i in _:
print(g.get_out_edges(i))
Explanation: Using isomorphism doesn't help
End of explanation
# select some vertices
vfilt = g.new_vertex_property('bool');
vfilt[216] = True
vfilt[756] = True
vfilt[938] = True
sub = GraphView(g, vfilt)
ka = isomorphism(sub, m3_5.gt_motif, isomap=True)
ka[1][216], ka[1][756], ka[1][938]
[i for i in [216, 756, 938] if ka[1][i] in {0,1}]
Explanation: Since the Property map doesn't map exactly to the node id in the main graph, I have to use the induced subgraphs.
End of explanation |
11,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Retrieve the discretization attributes available in PredicSis.ai GUI using the Python SDK
Prerequisites
PredicSis.ai Python SDK (pip install predicsis; documentation)
A predictive model available on your PredicSis.ai instance
Jupyter (see http
Step1: Retrieve the predictive model
Step2: Retrieve and describe the central frame
Step3: Retrieve discretization attributes
Step4: Discretization attributes are only available on contributive features.
If you try to retrieve discretization attributes on a non-contributive feature, an AssertionError is raised
Step5: Same for Categorical features | Python Code:
# Load PredicSis.ai SDK
from predicsis import PredicSis
import predicsis.config as config, os, sys
os.environ['PREDICSIS_URL'] = 'your_instance'
if sys.version_info[0] >= 3:
from importlib import reload
reload(config)
Explanation: Goal
Retrieve the discretization attributes available in PredicSis.ai GUI using the Python SDK
Prerequisites
PredicSis.ai Python SDK (pip install predicsis; documentation)
A predictive model available on your PredicSis.ai instance
Jupyter (see http://jupyter.org/)
End of explanation
pj = PredicSis.project('Outbound Mail Campaign')
model = pj.schema('Outbound Mail Campaign 2017-03-17 15:02:22')
Explanation: Retrieve the predictive model
End of explanation
central = model.central()
central.describe()
Explanation: Retrieve and describe the central frame
End of explanation
central.discretization_attributes('age')
Explanation: Retrieve discretization attributes
End of explanation
central.level('last_name')
central.discretization_attributes('last_name')
Explanation: Discretization attributes are only available on contributive features.
If you try to retrieve discretization attributes on a non-contributive feature, an AssertionError is raised:
End of explanation
central.discretization_attributes('customer_profile')
Explanation: Same for Categorical features
End of explanation |
11,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification with a Multi-layer Perceptron (MLP)
Author
Step1: A few notes on Pytorch syntax
(Many thanks to Vanessa Bohm!!)
Pytorch datatype summary
Step2: Problem 2b Make a histogram showing the fraction of each class
Keep only the top two classes (i.e., the classes with the most galaxies)
Step3: This next block of code converts the data to a format which is more compatible with our neural network.
Step4: Problem 2c Split the data into a training and test set (66/33 split) using the train_test_split function from sklearn
Step5: The next cell will outline how one can make a MLP with pytorch.
Problem 3a Talk to a partner about how this code works, line by line. Add another hidden layer which is the same size as the first hidden layer. Choose an appropriate final nonlinear layer for this classification problem. Choose the appropriate number of outputs.
Step6: The next block of code will show how one can train the model for 100 epochs. Note that we use the binary cross-entropy as our objective function and stochastic gradient descent as our optimization method.
Problem 3b Edit the code so that the function plots the loss for the training and test loss for each epoch.
Step7: The next block trains the code, assuming a hidden layer size of 100 neurons.
Problem 3c Change the learning rate lr to minimize the cross entropy score
Step8: Write a function called evaluate_model which takes the image data, labels and model as input, and the accuracy as output. you can use the accuracy_score function. | Python Code:
# this module contains our dataset
!pip install astronn
#this is pytorch, which we will use to build our nn
import torch
#Standards for plotting, math
import matplotlib.pyplot as plt
import numpy as np
#for our objective function
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay
Explanation: Classification with a Multi-layer Perceptron (MLP)
Author: V. Ashley Villar
In this problem set, we will not be implementing neural networks from scratch. Yesterday, you built a perceptron in Python. Multi-layer perceptrons (MLPs) are, as discussed in the lecture, several layers of these perceptrons stacked. Here, we will learn how to use one of the most common modules for building neural networks: Pytorch
End of explanation
from astroNN.datasets import load_galaxy10
from astroNN.datasets.galaxy10 import galaxy10cls_lookup
%matplotlib inline
#helpful functions:
#Load the images and labels as numbers
images, labels_original = load_galaxy10()
#convert numbers to a string
galaxy10cls_lookup(labels_original[0])
Explanation: A few notes on Pytorch syntax
(Many thanks to Vanessa Bohm!!)
Pytorch datatype summary: The model expects a single precision input. You can change the type of a tensor with tensor_name.type(), where tensor_name is the name of your tensor and type is the dtype. For typecasting into single precision floating points, use float(). A numpy array is typecasted with array_name.astype(type). For single precision, the type should be np.float32.
Before we analyze tensors we often want to convert them to numpy arrays with tensor_name.numpy()
If pytorch has been tracking operations that resulted in the current tensor value, you need to detach the tensor from the graph (meaning you want to ignore things like its derivative) before you can transform it into a numpy array: tensor_name.detach(). Scalars can be detached with scalar.item()
Pytorch allows you to easily use your CPU or GPU; however, we are not using this feature. If your tensor is currently on the GPU, you can bring it onto the CPU with tensor_name.cpu()
Problem 1: Understanding the Data
For this problem set, we will use the Galaxy10 dataset made available via the astroNN module. This dataset is made up of 17736 images of galaxies which have been labelled by hand. See this link for more information.
First we will visualize our data.
Problem 1a Show one example of each class as an image.
End of explanation
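# Added illustration (not part of the original problem set): the dtype handling and
# numpy conversions described above, demonstrated on a tiny tensor.
t = torch.ones(2, 3, requires_grad=True)   # float32 (single precision) by default
t_single = t.float()                       # explicit cast to single precision
as_numpy = t_single.detach().numpy()       # detach from the graph before converting
print(t_single.dtype, as_numpy.dtype)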
images_top_two = ...
labels_top_two = ...
Explanation: Problem 2b Make a histogram showing the fraction of each class
Keep only the top two classes (i.e., the classes with the most galaxies)
End of explanation
# This code converts from integer labels to 'one-hot encodings'. What does that term mean?
import torch.nn.functional as F
torch.set_default_dtype(torch.float)
labels_top_two_one_hot = F.one_hot(torch.tensor(labels_top_two - np.min(labels_top_two)).long(), num_classes=2)
images_top_two = torch.tensor(images_top_two).float()
labels_top_two_one_hot = labels_top_two_one_hot.float()
# we're going to flatten the images for our MLP
images_top_two_flat = ...
#Normalize the flux of the images here
images_top_two_flat_normed = ...
Explanation: This next block of code converts the data to a format which is more compatible with our neural network.
End of explanation
from sklearn.model_selection import train_test_split
Explanation: Problem 2c Split the data into a training and test set (66/33 split) using the train_test_split function from sklearn
End of explanation
class MLP(torch.nn.Module):
# this defines the model
def __init__(self, input_size, hidden_size):
super(MLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.hiddenlayer = torch.nn.Linear(self.input_size, self.hidden_size)
self.outputlayer = torch.nn.Linear(self.hidden_size, HOW_MANY_OUTPUTS)
# some nonlinear options
self.sigmoid = torch.nn.Sigmoid()
self.softmax = torch.nn.Softmax()
self.relu = torch.nn.ReLU()
def forward(self, x):
layer1 = self.hiddenlayer(x)
activation = self.sigmoid(layer1)
layer2 = self.outputlayer(activation)
output = self.NONLINEAR(layer2)
return output
Explanation: The next cell will outline how one can make a MLP with pytorch.
Problem 3a Talk to a partner about how this code works, line by line. Add another hidden layer which is the same size as the first hidden layer. Choose an appropriate final nonlinear layer for this classification problem. Choose the appropriate number of outputs.
End of explanation
# train the model
def train_model(training_data,training_labels, test_data,test_labels, model):
# define the optimization
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,momentum=0.9)
# Increase the number of epochs for your "final" run
for epoch in range(10):
# clear the gradient
optimizer.zero_grad()
# compute the model output
myoutput = model(training_data)
# calculate loss
loss = criterion(myoutput, training_labels)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
# ADD PLOT
Explanation: The next block of code will show how one can train the model for 100 epochs. Note that we use the binary cross-entropy as our objective function and stochastic gradient descent as our optimization method.
Problem 3b Edit the code so that the function plots the loss for the training and test loss for each epoch.
End of explanation
model = MLP(np.shape(images_train[0])[0],100)
train_model(images_train, labels_train, images_test, labels_test, model)
Explanation: The next block trains the code, assuming a hidden layer size of 100 neurons.
Problem 3c Change the learning rate lr to minimize the cross entropy score
End of explanation
# evaluate the model
def evaluate_model(data,labels, model):
return(acc)
# evaluate the model
acc = evaluate_model(images_test,labels_test, model)
print('Accuracy: %.3f' % acc)
Explanation: Write a function called evaluate_model which takes the image data, labels and model as input, and the accuracy as output. you can use the accuracy_score function.
End of explanation |
11,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Models
Wang Cheng-Jun
[email protected]
Computational Communication Network http
Step1: Download data
http
Step2: Build the topic model
Step3: We can see the list of topics a document refers to
by using the model[doc] syntax
Step4: We can see that about 150 documents have 5 topics,
while the majority deal with around 10 to 12 of them.
No document talks about more than 20 topics.
Step5: Visualizing the topic model with pyLDAvis
http | Python Code:
%matplotlib inline
from __future__ import print_function
from wordcloud import WordCloud
from gensim import corpora, models, similarities, matutils
import matplotlib.pyplot as plt
import numpy as np
Explanation: Topic Models
Wang Cheng-Jun
[email protected]
Computational Communication Network http://computational-communication.com
On the eve of the 2014 national college entrance exam (gaokao), Baidu "used a massive corpus of model essays together with search data, applying a probabilistic topic model to predict the likely directions of the 2014 gaokao essay prompts." As shown in the figure above, the predictions fall into six topics: time, life, nation, education, the mind, and development. Each topic in turn includes a set of concrete keywords; for example, the topic "life" corresponds to: ordinary, freedom, beauty, dreams, striving, youth, happiness, and loneliness.
Read more
latent Dirichlet allocation (LDA)
The simplest topic model (on which all others are based) is latent Dirichlet allocation (LDA).
- LDA is a generative model that infers unobserved meanings from a large set of observations.
Reference
Blei DM, Ng J, Jordan MI. Latent dirichlet allocation. J Mach Learn Res. 2003; 3: 993–1022.
Blei DM, Lafferty JD. Correction: a correlated topic model of science. Ann Appl Stat. 2007; 1: 634.
Blei DM. Probabilistic topic models. Commun ACM. 2012; 55: 55–65.
Chandra Y, Jiang LC, Wang C-J (2016) Mining Social Entrepreneurship Strategies Using Topic Modeling. PLoS ONE 11(3): e0151342. doi:10.1371/journal.pone.0151342
<img src = './img/topic.png' width = 1000>
Topic models assume that each document contains a mixture of topics
Topics are considered latent/unobserved variables that stand between the documents and terms
It is impossible to directly assess the relationships between topics and documents and between topics and terms.
- What can be directly observed is the distribution of terms over documents, which is known as the document term matrix (DTM).
Topic models algorithmically identify the best set of latent variables (topics) that can best explain the observed distribution of terms in the documents.
The DTM is further decomposed into two matrices:
- a term-topic matrix (TTM)
- a topic-document matrix (TDM)
Each document can be assigned to a primary topic that demonstrates the highest topic-document probability and can then be linked to other topics with declining probabilities.
Assume K topics are in D documents, and each topic is denoted with $\phi_{1:K}$.
Each topic $\phi_K$ is a distribution of fixed words in the given documents.
The topic proportion in the document is denoted as $\theta_d$.
- e.g., the kth topic's proportion in document d is $\theta_{d, k}$.
Let $w_{d,n}$ denote the nth term in document d.
Further, topic models assign topics to a document and its terms.
- For example, the topic assigned to document d is denoted as $z_d$,
- and the topic assigned to the nth term in document d is denoted as $z_{d,n}$.
According to Blei et al. the joint distribution of $\phi_{1:K}$,$\theta_{1:D}$, $z_{1:D}$ and $w_{d, n}$ plus the generative process for LDA can be expressed as:
$ p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d, n}) $ =
$\prod_{i=1}^{K} p(\phi_i) \prod_{d =1}^D p(\theta_d)(\prod_{n=1}^N p(z_{d,n} \mid \theta_d) \times p(w_{d, n} \mid \phi_{1:K}, Z_{d, n}) ) $
Note that $\phi_{1:k},\theta_{1:D},and z_{1:D}$ are latent, unobservable variables. Thus, the computational challenge of LDA is to compute the conditional distribution of them given the observable specific words in the documents $w_{d, n}$.
Accordingly, the posterior distribution of LDA can be expressed as:
$p(\phi_{1:K}, \theta_{1:D}, z_{1:D} \mid w_{d, n}) = \frac{p(\phi_{1:K}, \theta_{1:D}, z_{1:D}, w_{d, n})}{p(w_{1:D})}$
Because the number of possible topic structures is exponentially large, it is impossible to compute the posterior of LDA. Topic models aim to develop efficient algorithms to approximate the posterior of LDA.
- There are two categories of algorithms:
- sampling-based algorithms
- variational algorithms
Using the Gibbs sampling method, we can build a Markov chain for the sequence of random variables (see Eq 1). The sampling algorithm is applied to the chain to sample from the limited distribution, and it approximates the posterior.
Gensim
Unfortunately, scikit-learn does not support latent Dirichlet allocation.
Therefore, we are going to use the gensim package in Python.
Gensim is developed by Radim Řehůřek, who is a machine learning researcher and consultant in the Czech Republic. We must start by installing it. We can achieve this by running one of the following commands:
pip install gensim
End of explanation
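# Added toy illustration (not in the original notebook): how gensim turns tokenized
# documents into the bag-of-words corpus that LDA consumes.
toy_texts = [['topic', 'model', 'topic'], ['gibbs', 'sampling']]
toy_dictionary = corpora.Dictionary(toy_texts)
print([toy_dictionary.doc2bow(text) for text in toy_texts])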
# Load the data
corpus = corpora.BleiCorpus('/Users/chengjun/bigdata/ap/ap.dat', '/Users/chengjun/bigdata/ap/vocab.txt')
' '.join(dir(corpus))
corpus.id2word.items()[:3]
Explanation: Download data
http://www.cs.princeton.edu/~blei/lda-c/ap.tgz
Unzip the data and put them into /Users/chengjun/bigdata/ap/
End of explanation
NUM_TOPICS = 100
model = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=None)
' '.join(dir(model))
Explanation: Build the topic model
End of explanation
document_topics = [model[c] for c in corpus]
# how many topics does one document cover?
document_topics[2]
# The first topic
# format: weight, term
model.show_topic(0, 10)
# The 100 topic
# format: weight, term
model.show_topic(99, 10)
words = model.show_topic(0, 5)
words
model.show_topics(4)
for f, w in words[:10]:
print(f, w)
# write out topics with 10 terms and their weights
for ti in range(model.num_topics):
words = model.show_topic(ti, 10)
tf = sum(f for f, w in words)
with open('/Users/chengjun/github/cjc2016/data/topics_term_weight.txt', 'a') as output:
for f, w in words:
line = str(ti) + '\t' + w + '\t' + str(f/tf)
output.write(line + '\n')
# We first identify the most discussed topic, i.e., the one with the
# highest total weight
topics = matutils.corpus2dense(model[corpus], num_terms=model.num_topics)
weight = topics.sum(1)
max_topic = weight.argmax()
# Get the top 64 words for this topic
# Without the argument, show_topic would return only 10 words
words = model.show_topic(max_topic, 64)
words = np.array(words).T
words_freq=[float(i)*10000000 for i in words[0]]
words = zip(words[1], words_freq)
wordcloud = WordCloud().generate_from_frequencies(words)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
num_topics_used = [len(model[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist(num_topics_used, np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
fig.tight_layout()
#fig.savefig('Figure_04_01.png')
Explanation: We can see the list of topics a document refers to
by using the model[doc] syntax:
End of explanation
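# Added illustration (not in the original notebook): sort one document's topics by
# probability to see which topic dominates it.
doc2_topics = sorted(document_topics[2], key=lambda topic_prob: topic_prob[1], reverse=True)
print(doc2_topics[:3])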
# Now, repeat the same exercise using alpha=1.0
# You can edit the constant below to play around with this parameter
ALPHA = 1.0
model1 = models.ldamodel.LdaModel(
corpus, num_topics=NUM_TOPICS, id2word=corpus.id2word, alpha=ALPHA)
num_topics_used1 = [len(model1[doc]) for doc in corpus]
fig,ax = plt.subplots()
ax.hist([num_topics_used, num_topics_used1], np.arange(42))
ax.set_ylabel('Nr of documents')
ax.set_xlabel('Nr of topics')
# The coordinates below were fit by trial and error to look good
plt.text(9, 223, r'default alpha')
plt.text(26, 156, 'alpha=1.0')
fig.tight_layout()
Explanation: We can see that about 150 documents have 5 topics,
while the majority deal with around 10 to 12 of them.
No document talks about more than 20 topics.
End of explanation
with open('/Users/chengjun/bigdata/ap/ap.txt', 'r') as f:
dat = f.readlines()
dat[:6]
dat[4].strip()[0]
docs = []
for i in dat[:100]:
if i.strip()[0] != '<':
docs.append(i)
def clean_doc(doc):
doc = doc.replace('.', '').replace(',', '')
doc = doc.replace('``', '').replace('"', '')
doc = doc.replace('_', '').replace("'", '')
doc = doc.replace('!', '')
return doc
docs = [clean_doc(doc) for doc in docs]
texts = [[i for i in doc.lower().split()] for doc in docs]
from nltk.corpus import stopwords
stop = stopwords.words('english')
' '.join(stop)
stop.append('said')
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1 and token not in stop]
for text in texts]
docs[8]
' '.join(texts[9])
dictionary = corpora.Dictionary(texts)
lda_corpus = [dictionary.doc2bow(text) for text in texts]
#The function doc2bow() simply counts the number of occurences of each distinct word,
# converts the word to its integer word id and returns the result as a sparse vector.
lda_model = models.ldamodel.LdaModel(
lda_corpus, num_topics=NUM_TOPICS, id2word=dictionary, alpha=None)
import pyLDAvis.gensim
ap_data = pyLDAvis.gensim.prepare(lda_model, lda_corpus, dictionary)
pyLDAvis.enable_notebook()
pyLDAvis.display(ap_data)
pyLDAvis.save_html(ap_data, '/Users/chengjun/github/cjc2016/vis/ap_ldavis.html')
Explanation: Visualizing the topic model with pyLDAvis
http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb
Read and clean the data
End of explanation |
11,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Límite de Shockley–Queisser
Bandas de conduccion y Bandgap
Primero librerias
Step1: Graficas chidas!
Step2: 1 A graficar el Hermoso Espectro Solar
Primero constantes numericas
Utilizaremos un modulo que manejara unidades numericas y fisicas
Step3: Para usarlo convertimos numeros a unidades, por ejemplo
Step4: Bajar datos
Aveces los datos que queremos se encuentran en el internet.
Aqui usaremos datos del NREL (National Renewable Energy Laboratory)
Step5: Que tamanio tienen los datos?
Step6: Manipular datos
La columna 0 es la longitud de onda y la 2 es los datos AM1.5G
Step7: Vamos a dar unidades a cada columna
Columna1 es $\lambda$ numeros onda, entonces queremos usar $nm$ (nu.nm).
Columna2 es energia entonces queremos usar usar $W/(m^{2}nm)$ (nu.W,nu.m,nu.nm).
Step8: Actividad
Step9: Creamos una funcion, interpolando valores intermedios
Step10: Conseguimos los valores x, y
Actividad
Step11: Tiempo de Graficar
Actividad
Step12: Ejemplo
Step13: Constante Solar
Esta constante es irradiancia total del sol. Debe de estar cerca de 1000 W/m^2 , asi NREL normalizo sus datos.
Step14: 3 - Fotones arriba del bandgap
Para un dado bangap, definimos la funcion que es el numero total de fotones con energia arriba del bandgap, por unidad de tiempo, por unidad de espacio
Step15: Ejemplo
Step16: En funcion de la energia
Step17: 4 Recombinacion
Un poco abstracto (falta traducir)
In the best possible case, the only cause of electron-hole-pair recombination is radiative recombination. Radiative recombination occurs when an electron and hole collide, so it depends on how many electrons and holes there are, or more specifically it depends
on the electron and hole QFLs.
Recombination rate when electron QFL = hole QFL ("QFL" is "Quasi-Fermi Level")
This is the case where electron QFL = hole QFL throughout the semiconductor. An example is the solar cell at zero bias in the dark. Then it’s in thermal equilibrium and its radiation can be calculated by the blackbody formula – more specifically, assuming it’s a perfect blackbody above the bandgap and white-body below the bandgap. We also assume isotropic radiation from the top surface, and a mirror on the bottom surface.
Let RR0 be the "Radiative Recombination rate at 0 QFL splitting", (per solar-cell area). By the blackbody formula
Step18: Recombination rate when electron QFL and hole QFL are split
By kinetic theory, the radiative recombination rate is proportional to the product of electron concentration
and hole concentration, $p\times n$. If you move the electron QFL up towards the conduction band by energy $E$,
the electron concentration increases by $\exp(-E/kT)$. Likewise, if you move the hole QFL down towards the
valence band by E, the hole concentration increases by $\exp(E/k_BT)$. Either way,
$p\times n \propto \exp(E/k_BT)$, where $E$ is the QFL energy splitting.
In the best possible case, the QFL splitting is equal to the external voltage (in reality, it may be larger
than the external voltage). Therefore, the lowest possible radiative recombaniton rate is
Step19: Examplo
Step23: Bandgap Ideal y maxima efficiencia
Dado lo que tenemos, podemos calcular el bandgap ideal y maxima eficiencia, optimizando numericamente el producto JV para cada bandgap.
El "maximum power point" (MPP) es el punto donde la curve JV tiene un maximo, el poder maximo es la energia generada en el MPP, la eficiencia es el poder dividido por la constante solar (i.e. cuanta luz nos llega).
Step24: Example
Step25: Actividad
Step26: Actividad
Step27: Cuanto suma todo?
Graficando cada contribucion | Python Code:
import numpy as np # modulo de computo numerico
import matplotlib.pyplot as plt # modulo de graficas
import pandas as pd # modulo de datos
import seaborn as sns
import scipy as sp
import scipy.interpolate, scipy.integrate # para interpolar e integrar
import wget, tarfile # para bajar datos y descompirmir
from __future__ import print_function
# esta linea hace que las graficas salgan en el notebook
%matplotlib inline
Explanation: The Shockley–Queisser Limit
Conduction bands and the bandgap
First, the libraries
End of explanation
def awesome_settings():
# awesome plot options
sns.set_style("white")
sns.set_style("ticks")
sns.set_context("paper", font_scale=2)
sns.set_palette(sns.color_palette('bright'))
# image stuff
plt.rcParams['figure.figsize'] = (12.0, 6.0)
plt.rcParams['savefig.dpi'] = 60
plt.rcParams['lines.linewidth'] = 4
return
%config InlineBackend.figure_format='retina'
awesome_settings()
Explanation: Nice-looking plots!
End of explanation
import numericalunits as nu
Explanation: 1 - Plotting the beautiful solar spectrum
First, some numerical constants
We will use a module that handles numerical and physical units:
End of explanation
Tcell = 300 * nu.K
Explanation: To use it, we convert plain numbers into quantities with units, for example:
x = 5 * nu.cm means "x equals 5 centimeters".
If you want to extract the numerical value of x, you can divide by the desired unit:
y = x / nu.mm, in which case we get the numerical value in millimeters.
Try it!
We define a generic solar cell at a temperature of 300 kelvin:
End of explanation
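# Added illustration (not in the original notebook): the unit conversion described
# above, a length of 5 cm read back in millimeters.
x_demo = 5 * nu.cm
print(x_demo / nu.mm)   # -> 50.0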
data_url = 'http://rredc.nrel.gov/solar/spectra/am1.5/ASTMG173/compressed/ASTMG173.csv.tar'
a_file = wget.download(data_url)
download_as_tarfile_object = tarfile.open(a_file)  # wget.download returns a filename, so open the archive by name
csv_file = download_as_tarfile_object.extractfile('ASTMG173.csv')
Explanation: Downloading the data
Sometimes the data we want lives on the internet.
Here we will use data from NREL (National Renewable Energy Laboratory): http://rredc.nrel.gov/solar/spectra/am1.5/
for the solar spectrum (AM1.5G) with an intensity of 1000 W/m2.
First we download it and decompress it:
End of explanation
csv_file = 'ASTMG173.csv'
downloaded_array = np.genfromtxt(csv_file, delimiter=",", skip_header=2)
downloaded_array.shape
Explanation: What shape does the data have?
End of explanation
AM15 = downloaded_array[:,[0,2]]
print(AM15)
Explanation: Manipulating the data
Column 0 is the wavelength and column 2 holds the AM1.5G data
End of explanation
AM15[:,0] *= nu.nm
AM15[:,1] *= nu.W * nu.m**-2 * nu.nm**-1
Explanation: Let's attach units to each column
Column 1 is the wavelength $\lambda$, so we want to use $nm$ (nu.nm).
Column 2 is the spectral irradiance, so we want to use $W/(m^{2}nm)$ (nu.W, nu.m, nu.nm).
End of explanation
wavelength_min =
wavelength_max =
E_min = nu.hPlanck * nu.c0
E_max = nu.hPlanck * nu.c0
Explanation: Activity: Limits of the data
For the wavelength limits ($\lambda$), we can use np.min and np.max.
For the energy we will use the formula
$$
E = \frac{h c_0}{\lambda}
$$
End of explanation
AM15interp = scipy.interpolate.interp1d(AM15[:,0], AM15[:,1])
Explanation: We create a function that interpolates the intermediate values
End of explanation
x =
y =
Explanation: Getting the x, y values
Activity: obtain the x and y data
Hint:
for x we can use linspace between the minimum and maximum wavelength.
for y we create a list and fill it with the values of the interpolating function evaluated at x
End of explanation
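# Added sketch (not in the original notebook): one possible completion of the activity
# above, sampling the interpolated spectrum on a regular wavelength grid taken directly
# from the data limits (separate names are used so the exercise blanks stay untouched).
x_sketch = np.linspace(np.min(AM15[:,0]), np.max(AM15[:,0]), 500)
y_sketch = AM15interp(x_sketch)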
def FotonesPorTEA(Ephoton):
wavelength = nu.hPlanck * nu.c0 / Ephoton
return AM15interp(wavelength) * (1 / Ephoton) * (nu.hPlanck * nu.c0 / Ephoton**2)
Explanation: Time to plot
Activity: Plot the solar spectrum
Note: don't forget to add a title and labels to the plot
Compare it with the visible spectrum
3-6 Where do we lose efficiency?
We can identify 5 parts:
Conversion of energy into useful electricity
Photon energy below the bandgap: this energy is not absorbed and therefore cannot be used.
Photon energy in excess of the bandgap: the electron and the hole immediately relax to the band edges. For example, for a 1 eV-bandgap semiconductor, a 3 eV photon creates the same electron-hole pair as a 1.01 eV photon; the extra 2 eV carried by the 3 eV photon is lost.
Recombination of electron-hole pairs: all the recombination at the maximum-power point is dissipated as heat;
Cell voltage: the max-power-point voltage is less than the bandgap.
We can put this into an equation:
$(\text{Incoming solar energy}) = V_{MPP} \times I_{MPP}$<br>
$ \qquad + (\text{Photon energy below the bandgap})$<br>
$ \qquad + (\text{Photon energy above the bandgap} - \text{Number of photons above the bandgap} \times \text{Bandgap energy})$<br>
$ \qquad + ((\text{Number of photons above the bandgap}) - I_{MPP} / e) \times (\text{Bandgap energy})$<br>
$ \qquad + I_{MPP} \times (\text{Bandgap voltage} - V_{MPP})$<br>
2 - Incident light
We will compute a more intuitive quantity:
Photons per unit time, area and energy range
We make this change so that the calculations are in terms of photons instead of wavelength.
For this we define FotonesPorTEA (Tiempo, Energia, Area: time, energy, area). We convert the AM1.5 data to these new units using the formula:
$\text{FotonesPorTEA} = \frac{d(\text{number of photons, per unit time, per unit area})}{dE} = \frac{d(\text{photon energy per unit area})}{d\lambda} \; \frac{(\text{number of photons, per unit time, per unit area})}{(\text{photon energy per unit area})} \left| \frac{d\lambda}{dE} \right| = $
$ = (\text{AM1.5 spectrum}) \; \frac{1}{\text{photon energy}} \; \frac{hc}{E^2}$
(We use $\left| \frac{d\lambda}{dE} \right| = \left| \frac{d}{dE} (\frac{hc}{E}) \right| = \frac{hc}{E^2}$.)
End of explanation
print(FotonesPorTEA(2 * nu.eV) * (1 * nu.meV) * (1 * nu.m**2) * (1 * nu.s))
Explanation: Example:
This calculation tells us that about $1.43 \times 10^{18}$ solar photons with energy between 2eV and 2.001eV hit a square meter (m^2) per second:
End of explanation
PowerPorTEA = lambda E : E * FotonesPorTEA(E)
# quad() does the integration
solar_constant = scipy.integrate.quad(PowerPorTEA, E_min, E_max, full_output=1)[0]
print(solar_constant / (nu.W/nu.m**2))
Explanation: Solar constant
This constant is the total irradiance of the sun. It should be close to 1000 W/m^2, since that is how NREL normalized its data.
End of explanation
def fotones_arriba_gap(Egap):
return scipy.integrate.quad(FotonesPorTEA, Egap, E_max, full_output=1)[0]
Explanation: 3 - Photons above the bandgap
For a given bandgap, we define the function that gives the total number of photons with energy above the bandgap, per unit time and per unit area
End of explanation
print(fotones_arriba_gap(1.1 * nu.eV) * (1 * nu.m**2) * (1 * nu.s))
Explanation: Example:
There are $2.76 \times 10^{21}$ photons with energy above 1.1eV hitting a $1m^2$ patch every second:
End of explanation
Egap_list = np.linspace(0.4 * nu.eV, 3 * nu.eV, num=100)
y_values = np.array([fotones_arriba_gap(E) for E in Egap_list])
plt.plot(Egap_list / nu.eV , y_values / (1e21 * nu.m**-2 * nu.s**-1))
plt.xlabel("Bandgap (eV)")
plt.ylabel("fotones arriba del gap ($10^{21}$ m$^{-2} \cdot $s$^{-1}$)");
Explanation: As a function of the bandgap energy
End of explanation
def RR0(Egap):
integrand = lambda E : E**2 / (np.exp(E / (nu.kB * Tcell)) - 1)
integral = scipy.integrate.quad(integrand, Egap, E_max, full_output=1)[0]
return ((2 * np.pi) / (nu.c0**2 * nu.hPlanck**3)) * integral
Explanation: 4 Recombination
A somewhat abstract section.
In the best possible case, the only cause of electron-hole-pair recombination is radiative recombination. Radiative recombination occurs when an electron and hole collide, so it depends on how many electrons and holes there are, or more specifically it depends
on the electron and hole QFLs.
Recombination rate when electron QFL = hole QFL ("QFL" is "Quasi-Fermi Level")
This is the case where electron QFL = hole QFL throughout the semiconductor. An example is the solar cell at zero bias in the dark. Then it’s in thermal equilibrium and its radiation can be calculated by the blackbody formula – more specifically, assuming it’s a perfect blackbody above the bandgap and white-body below the bandgap. We also assume isotropic radiation from the top surface, and a mirror on the bottom surface.
Let RR0 be the "Radiative Recombination rate at 0 QFL splitting", (per solar-cell area). By the blackbody formula:
$$\text{RR0} = \frac{2\pi}{c^2 h^3} \int_{E_{gap}}^{\infty} \frac{E^2 dE}{\exp(E/(k_B T_{cell})) - 1}$$
End of explanation
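As a quick illustration of RR0 as defined above (the 1.1 eV value is just an example bandgap):
```python
# Radiatively emitted photons per square meter per second at zero QFL splitting
print(RR0(1.1 * nu.eV) * (1 * nu.m**2) * (1 * nu.s))
```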
def densidad_de_corriente(V, Egap):
return nu.e * (fotones_arriba_gap(Egap) - RR0(Egap) * np.exp(nu.e * V / (nu.kB * Tcell)))
def JSC(Egap):
return densidad_de_corriente(0, Egap)
def VOC(Egap):
return (nu.kB * Tcell / nu.e) * np.log(fotones_arriba_gap(Egap) / RR0(Egap))
Explanation: Recombination rate when electron QFL and hole QFL are split
By kinetic theory, the radiative recombination rate is proportional to the product of electron concentration
and hole concentration, $p\times n$. If you move the electron QFL up towards the conduction band by energy $E$,
the electron concentration increases by $\exp(-E/kT)$. Likewise, if you move the hole QFL down towards the
valence band by E, the hole concentration increases by $\exp(E/k_BT)$. Either way,
$p\times n \propto \exp(E/k_BT)$, where $E$ is the QFL energy splitting.
In the best possible case, the QFL splitting is equal to the external voltage (in reality, it may be larger
than the external voltage). Therefore, the lowest possible radiative recombination rate is:
$$\text{Recomb rate} = e \text{RR0} \exp(e V / k_B T_{cell}),$$
where $V$ is the external voltage.
<p style="font-size:80%">Note for pedants: I’m using the expression for radiative recombination $\frac{2\pi}{c^2 h^3} \exp(eV/k_B T_{cell})\int_{E_{gap}}^\infty \frac{E^2 dE}{\exp(E/k_B T_{cell})-1}.$ This isn't quite right: A more accurate expression is: $\frac{2\pi}{c^2 h^3} \int_{E_{gap}}^\infty \frac{E^2 dE}{\exp((E-eV)/k_B T_{cell})-1}.$ The difference is negligible except for tiny tiny bandgaps (less than 200meV). For explanation see <a href="http://dx.doi.org/10.1109/T-ED.1980.19950">link</a> or <a href="http://dx.doi.org/10.1007/BF00901283">link</a>. (Thanks Ze’ev!)</p>
J-V curve
The current is from the electron-hole pairs that are created but which don’t recombine. In the best case, all the solar photons possible are absorbed, while none recombine except radiatively. This gives:
$$J = e (\text{SolarPhotonsAboveGap} - \text{RR0} (\exp(e V / k_B T_{cell}) - 1 ))$$
where $J$ is the current per unit area, and $V$ is the forward bias on the junction. The "-1" on the right accounts for spontaneous
generation of e-h pairs through thermal fluctuations at 300K. I will leave out the "-1" below because
$\text{RR0} \ll \text{SolarPhotonsAboveGap}$, at least in the range of bandgaps that I'm plotting.
End of explanation
print(JSC(1.1 * nu.eV) / (nu.mA / nu.cm**2))
print(VOC(1.1 * nu.eV) / nu.V)
Explanation: Example:
An ideal solar cell with a 1.1eV bandgap has a short-circuit current of
44 mA/cm$^2$ and an open-circuit voltage of 0.86V.
End of explanation
from scipy.optimize import fmin
def fmax(func_to_maximize, initial_guess=0):
    """Return the x that maximizes func_to_maximize(x)."""
    func_to_minimize = lambda x : -func_to_maximize(x)
    return fmin(func_to_minimize, initial_guess, disp=False)[0]
def V_mpp(Egap):
    """Voltage at the maximum power point."""
    return fmax(lambda V : V * densidad_de_corriente(V, Egap))
def J_mpp(Egap):
    """Current at the maximum power point."""
    return densidad_de_corriente(V_mpp(Egap), Egap)
def max_power(Egap):
V = V_mpp(Egap)
return V * densidad_de_corriente(V, Egap)
def max_efficiencia(Egap):
return max_power(Egap) / solar_constant
Explanation: Ideal bandgap and maximum efficiency
With what we have, we can compute the ideal bandgap and the maximum efficiency by numerically optimizing the J·V product for each bandgap.
The "maximum power point" (MPP) is the point where the JV curve reaches its maximum power; the maximum power is the power generated at the MPP, and the efficiency is that power divided by the solar constant (i.e. how much light reaches us).
End of explanation
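An illustrative use of the max-power-point helpers defined above (1.1 eV is just an example bandgap):
```python
# Operating point of an ideal 1.1 eV cell
Eg = 1.1 * nu.eV
print(V_mpp(Eg) / nu.V)                 # voltage at the maximum power point
print(J_mpp(Eg) / (nu.mA / nu.cm**2))   # current density at the maximum power point
```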
max_efficiencia(1.1 * nu.eV)
Explanation: Example: An ideal 1.1eV-bandgap solar cell has an efficiency of 32.9%.
End of explanation
def electricidad_util(Egap):
return max_efficiencia(Egap)
def energia_debajo_bandgap(Egap):
integrand = lambda E : E * FotonesPorTEA(E)
return scipy.integrate.quad(integrand, E_min, Egap, full_output=1)[0] / solar_constant
def exceso_arriba_bandgap(Egap):
integrand = lambda E : (E - Egap) * FotonesPorTEA(E)
return scipy.integrate.quad(integrand, Egap, E_max, full_output=1)[0] / solar_constant
def mpp_recombination(Egap):
    return (fotones_arriba_gap(Egap) - J_mpp(Egap) / nu.e) * Egap / solar_constant
def mpp_voltage_debajo_bangap(Egap):
return J_mpp(Egap) * (Egap / nu.e - V_mpp(Egap)) / solar_constant
Explanation: Activity: plot the SQ limit
Compute the maximum efficiency for each bandgap value
x is the list of bandgaps
y is the maximum efficiency
Plot it! (A sketch of one possible solution follows below.)
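A minimal sketch of this activity, reusing max_efficiencia as defined above (the bandgap grid and number of points are arbitrary choices):
```python
# Sketch: ideal single-junction efficiency limit vs. bandgap
gaps = np.linspace(0.4 * nu.eV, 3 * nu.eV, num=30)
effs = np.array([max_efficiencia(E) for E in gaps])
plt.plot(gaps / nu.eV, 100 * effs)
plt.xlabel('Bandgap (eV)')
plt.ylabel('Maximum efficiency (%)')
plt.title('Ideal single-junction efficiency limit');
```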
Overview of the losses
$(\text{Incoming solar energy}) = V_{MPP} \times I_{MPP}$<br>
$ \qquad + (\text{Photon energy below the bandgap})$<br>
$ \qquad + (\text{Photon energy above the bandgap} - \text{Number of photons above the bandgap} \times \text{Bandgap energy})$<br>
$ \qquad + ((\text{Number of photons above the bandgap}) - I_{MPP} / e) \times (\text{Bandgap energy})$<br>
$ \qquad + I_{MPP} \times (\text{Bandgap voltage} - V_{MPP})$<br>
End of explanation
mpp_recombination(1.1 * nu.eV)
Explanation: Activity: use the functions with a 1.1eV solar cell
Compute the useful energy and the losses
For example, to compute the recombination fraction we use
python
mpp_recombination(1.1 * nu.eV)
Add everything up to check that it reaches 100% (see the sketch below).
End of explanation
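A minimal sketch of the check suggested in the activity above, summing the five loss fractions defined earlier (function names exactly as defined in the previous cell):
```python
# The five fractions should add up to ~1 for any bandgap
Egap = 1.1 * nu.eV
terms = [electricidad_util(Egap),
         energia_debajo_bandgap(Egap),
         exceso_arriba_bandgap(Egap),
         mpp_recombination(Egap),
         mpp_voltage_debajo_bangap(Egap)]
print(sum(terms))  # expected to be close to 1.0
```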
Egap_list = np.linspace(0.4 * nu.eV, 3 * nu.eV, num=25)
loss_list = []
for indx,Egap in enumerate(Egap_list):
e_util = electricidad_util(Egap)
gap_abajo = energia_debajo_bandgap(Egap)
gap_arriba = exceso_arriba_bandgap(Egap)
mpp_recomb = mpp_recombination(Egap)
mpp_voltaje = mpp_voltage_debajo_bangap(Egap)
loss_list.append([e_util,gap_abajo,gap_arriba,mpp_recomb,mpp_voltaje])
print("%2.2f%% .. "%(indx/float(len(Egap_list))*100.0),end='')
loss_list = np.array(loss_list)
# add everything up cumulatively so the contributions stack on top of each other
loss_list = np.cumsum(loss_list,axis=1)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.fill_between(Egap_list / nu.eV, 0, loss_list[:,0], facecolor="k")
ax1.fill_between(Egap_list / nu.eV, loss_list[:,0], loss_list[:,1], facecolor="m")
ax1.fill_between(Egap_list / nu.eV, loss_list[:,1], loss_list[:,2], facecolor="g")
ax1.fill_between(Egap_list / nu.eV, loss_list[:,2], loss_list[:,3], facecolor="b")
ax1.fill_between(Egap_list / nu.eV, loss_list[:,3], 1, facecolor="0.75")
plt.title('POWER GOES TO...\n'
          'Useful energy (black);\n'
          'below the gap (magenta);\n'
          'excess above the gap (green);\n'
          'Current loss from radiative recombination (blue)\n'
          'Voltage below the bandgap (gray)')
plt.xlabel('Bandgap (eV)')
plt.ylabel('Fraction of incident light')
plt.xlim(0.4, 3)
plt.ylim(0,1);
Explanation: What does it all add up to?
Plotting each contribution
End of explanation |
11,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clase 7
Step1: 2. Modelo normal para los rendimientos
Step2: 3. Simulación usando el histograma de los rendimientos | Python Code:
#importar los paquetes que se van a usar
import pandas as pd
import pandas_datareader.data as web
import numpy as np
from sklearn.neighbors import KernelDensity
import datetime
from datetime import datetime, timedelta
import scipy.stats as stats
import scipy as sp
import scipy.optimize as optimize
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# some Python display options
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
def get_historical_closes(ticker, start_date, end_date):
p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
d = p.to_frame()['Adj Close'].reset_index()
d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
pivoted = d.pivot(index='Date', columns='Ticker')
pivoted.columns = pivoted.columns.droplevel(0)
return pivoted
data=get_historical_closes(['AAPL'], '2016-01-01', '2016-12-31')
data.plot(figsize=(8,6));
aapl = web.Options('AAPL', 'yahoo')
appl_opt = aapl.get_all_data().reset_index()
appl_opt
def calc_daily_returns(closes):
return np.log(closes/closes.shift(1))[1:]
daily_returns=calc_daily_returns(data)
daily_returns.plot(figsize=(8,6));
Explanation: Class 7: Market risk simulation
Juan Diego Sánchez Torres,
Profesor, MAF ITESO
Departamento de Matemáticas y Física
[email protected]
Tel. 3669-34-34 Ext. 3069
Oficina: Cubículo 4, Edificio J, 2do piso
1. Motivation
First of all, in order to download prices and option information from Yahoo, we need to load some Python packages. In this case the main package will be Pandas. We will also use Scipy and Numpy for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.
End of explanation
mu=daily_returns.mean().AAPL
sigma=daily_returns.std().AAPL
ndays = 360
ntraj=10
dates=pd.date_range('20170101',periods=ndays)
simret = pd.DataFrame(sigma*np.random.randn(ndays,ntraj)+mu,index=dates)
simret
simdata=(data.loc['2016-12-30',:].AAPL)*np.exp(simret.cumsum())
simdata
simdata.plot(figsize=(8,6));
pd.concat([data,simdata]).plot(figsize=(8,6));
Explanation: 2. Normal model for the returns
End of explanation
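A short illustrative follow-up (assumes `simdata` from the cell above): summarizing the simulated end-of-horizon prices gives a first feel for the risk under the normal model.
```python
# Distribution of simulated prices at the end of the horizon (one value per trajectory)
final_prices = simdata.iloc[-1]
print(final_prices.describe())
print(final_prices.quantile([0.05, 0.50, 0.95]))  # rough downside / median / upside levels
```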
ndays = 360
ntraj=10
#
values,indices=np.histogram(daily_returns,bins=250)
values=values.astype(np.float32)
weights=values/np.sum(values)
ret=np.random.choice(indices[1:],ndays*ntraj,p=weights)
#
dates=pd.date_range('20170101',periods=ndays)
simret = pd.DataFrame(ret.reshape((ndays,ntraj)),index=dates)
simret
simdata=(data.loc['2016-12-30',:].AAPL)*np.exp(simret.cumsum())
simdata
pd.concat([data,simdata]).plot(figsize=(8,6));
kde = KernelDensity(kernel='gaussian', bandwidth=0.001).fit(daily_returns)
ndays = 360
ntraj=10
#
ret=kde.sample(n_samples=ndays*ntraj, random_state=None)
#
dates=pd.date_range('20170101',periods=ndays)
simret = pd.DataFrame(ret.reshape((ndays,ntraj)),index=dates)
simret
simdata=(data.loc['2016-12-30',:].AAPL)*np.exp(simret.cumsum())
simdata
pd.concat([data,simdata]).plot(figsize=(8,6));
Explanation: 3. Simulation using the histogram of returns
End of explanation |
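As above, a brief sketch to summarize the KDE-based simulation from the last cell (assumes `simdata` holds the KDE-simulated prices):
```python
# Compare simulated terminal prices against the last observed close
last_close = data.loc['2016-12-30', :].AAPL
final_prices = simdata.iloc[-1]
print((final_prices / last_close - 1).quantile([0.05, 0.50, 0.95]))  # simulated horizon returns
```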
11,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Generator objects
This example should guide you to build your own simple generator.
Step1: Basic knowledge
We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of a simulation.
How to execute something
To execute something you need
a description of the task to be done. This is the Task object. Once you have this you can,
use it in a Scheduler which will interpret the Task into some code that the computer understands. It handles all the little things you expect from the task, like registering generated file, etc... And to do so, the Scheduler needs
your Resource description which acts like a config for the scheduler
When you have a Scheduler (with Resource) you let it execute Task objects. If you know how to build these you are done. That is all you need.
What are Generators?
Build a task can be cumbersome and often repetative, and a factory for Task objects is extremely useful. These are called Generators (maybe TaskFactory) is a better name?!?
In your final scheme where you observe all generated objects and want to build new tasks accordingly you will (almost) never build a Task yourself. You use a generator.
A typical example is an Engine. It will generate tasks, that simulate new trajectories, extend existing ones, etc... Basic stuff. The second big class is Analysis. It will use trajectories to generate models or properties of interest to guide your decisions for new trajectories.
In this example we will build a simple generator for a task, that uses the mdtraj package to compute some features and store these in the database and in a file.
The MDTrajFeaturizer generator
First, we think about how this featurizer works if we would not use adaptivemd. The reason is, that we have basically two choices for designing a Task (see example 4 about Task objects).
A task that calls bash commands for you
A task that calls a python function for you
Since we want to call mdtraj functions we use the 2nd and start with a skeleton for this type and store it under my_generator.py
Step2: What input does our generator always need?
Mdtraj needs a topology unless it is already present. Interestingly, our Trajectory objects know about their topology so we could access these, if our function is to process a Trajectory. This requires the Trajectory to be the input. If we want to process any file, then we might need a topology.
The decision if we want the generator to work for a fixed topology is yours. To show how this would work, we do this here. We use a fixed topology per generator that applies to File objects.
Second is the feature we want to compute. This is tricky and so we hard code this now. You can think of a better way to represent this. But let's pick the tertiary stucture prediction
Step3: The task building
Step4: The actual script
This script is executed on the HPC for you. And requires mdtraj to be installed on it.
Step5: That's it. At least in the simplest form. When you use this to create a Task
Step6: We wait and then the Task object has a .output property which now contains the returned result.
This can now be used in your execution plans...
Step7: Next, we look at improvements
Better storing of results
Often you want to save the output from your function in the DB in some form or another. Though the output is stored, it is not conviniently accessed unless you know the task that was used.
For this reason there is a callback function you can set, that can take care of doing a custom handling of the output. The function to be called needs to be a method of the generator and you can give the task the name of the method. The name (str) of the funtion can be set using the then() command. An the default name is then_func.
Step8: The function takes exactly 4 parameters
project
Step9: in that case .output will stay None even after execution
Working with Trajectory files and get their properties
Note that you always have to write file generation and file analysis/reading that matches. We only store some very general properties of objects with them, e.g. a stride for trajectories. This means you cannot arbitrarily mix code for these.
Now we want that this works
Step10: This is rather simple
Step11: Import! You have no access to the Trajectory object in our remove function. These will be converted
to a real path relative to the working directory. This makes sure that you will not have to deal with
prefixes, etc. This might change in the future, but. The scripts are considered independent of adaptivemd!
Problem with saving your generator to the DB
This is not complicated but you need to briefly learn about the mechanism to store complex Python objects in the DB. The general way to Store an instance of a class requires you to subclass from adaptivemd.mongodb.StorableMixin. This provides the class with a __uuid__ attribute that is a unique number for each storable object that is given at creation time. (If we would just store objects using pymongo we would get a number like this, but later). Secondly, it add two functions
to_dict()
Step12: while this would not work
Step13: In the second case you need to overwrite the default function. All of these will work
Step14: If you do that, make sure that you really capture all variables. Especially if you subclass from an existing one. You can use super to access the result from the parent class
Step15: This is the recommended way to build your custom functions. For completeness we show here what the base TaskGenerator class will do
Step16: The only unfamiliar part is the
py
obj = cls.__new__(cls)
StorableMixin.__init__(obj)
which needs a little explanation.
In most __init__ functions for a TaskGenerator you will construct the initial_staging attribute with some functions. If you would reconstruct by just calling the constructor with the same parameters again, this would result in an equal object as expected and that would work, but not in all regards as expected | Python Code:
from adaptivemd import (
Project, Task, File, PythonTask
)
project = Project('tutorial')
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
Explanation: Custom Generator objects
This example should guide you to build your own simple generator.
End of explanation
%%file my_generator.py
# This is an example for building your own generator
# This file must be added to the project so that it is loaded
# when you import `adaptivemd`. Otherwise your workers don't know
# about the class!
from adaptivemd import Generator
class MDTrajFeaturizer(Generator):
def __init__(self, {things we always need}):
        super(MDTrajFeaturizer, self).__init__()
# stage file you want to reuse (optional)
# self['pdb_file'] = pdb_file
# stage = pdb_file.transfer('staging:///')
# self['pdb_file_stage'] = stage.target
# self.initial_staging.append(stage)
@staticmethod
def then_func(project, task, data, inputs):
# add the output for later reference
project.data.add(data)
def execute(self, {options per task}):
t = PythonTask(self)
# get your staged files (optional)
# input_pdb = t.link(self['pdb_file_stage'], 'input.pdb')
# add the python function call to your script (there can be only one!)
t.call(
my_script,
param1,
param2,
...
)
return t
def my_script(param1, param2, ...):
return {"whatever you want to return"}
Explanation: Basic knowledge
We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of a simulation.
How to execute something
To execute something you need
a description of the task to be done. This is the Task object. Once you have this you can,
use it in a Scheduler which will interpret the Task into some code that the computer understands. It handles all the little things you expect from the task, like registering generated file, etc... And to do so, the Scheduler needs
your Resource description which acts like a config for the scheduler
When you have a Scheduler (with Resource) you let it execute Task objects. If you know how to build these you are done. That is all you need.
What are Generators?
Building a task can be cumbersome and often repetitive, and a factory for Task objects is extremely useful. These are called Generators (maybe TaskFactory would be a better name?!).
In your final scheme where you observe all generated objects and want to build new tasks accordingly you will (almost) never build a Task yourself. You use a generator.
A typical example is an Engine. It will generate tasks, that simulate new trajectories, extend existing ones, etc... Basic stuff. The second big class is Analysis. It will use trajectories to generate models or properties of interest to guide your decisions for new trajectories.
In this example we will build a simple generator for a task, that uses the mdtraj package to compute some features and store these in the database and in a file.
The MDTrajFeaturizer generator
First, we think about how this featurizer works if we would not use adaptivemd. The reason is, that we have basically two choices for designing a Task (see example 4 about Task objects).
A task that calls bash commands for you
A task that calls a python function for you
Since we want to call mdtraj functions we use the 2nd and start with a skeleton for this type and store it under my_generator.py
End of explanation
def __init__(self, pdb_file=None):
    super(MDTrajFeaturizer, self).__init__()
# if we provide a pdb_file it should be used
if pdb_file is not None:
# stage file you want to reuse (optional)
# give the file an internal name
self['pdb_file'] = pdb_file
# create the transfer from local to staging:
stage = pdb_file.transfer('staging:///')
# give the staged file an internal name
self['pdb_file_stage'] = stage.target
# append the transfer action to the initial staging action list
self.initial_staging.append(stage)
Explanation: What input does our generator always need?
Mdtraj needs a topology unless it is already present. Interestingly, our Trajectory objects know about their topology so we could access these, if our function is to process a Trajectory. This requires the Trajectory to be the input. If we want to process any file, then we might need a topology.
The decision whether we want the generator to work with a fixed topology is yours. To show how this would work, we do it here. We use a fixed topology per generator that applies to File objects.
Second is the feature we want to compute. This is tricky, so we hard-code it for now. You can think of a better way to represent this. But let's pick the tertiary structure prediction
End of explanation
def execute(self, file_to_analyze):
assert(isinstance(file_to_analyze, File))
t = PythonTask(self)
# get your staged files (optional)
if self.get('pdb_file_stage'):
input_pdb = t.link(self['pdb_file_stage'], 'input.pdb')
else:
input_pdb = None
# add the python function call to your script (there can be only one!)
t.call(
my_script,
file_to_analyze,
input_pdb
)
return t
Explanation: The task building
End of explanation
def my_script(file_to_analyze, input_pdb):
import mdtraj as md
traj = md.load(file_to_analyze, top=input_pdb)
    features = traj.xyz  # use the raw coordinates as the 'feature'; mdtraj exposes them as the .xyz array
return features
Explanation: The actual script
This script is executed on the HPC for you. And requires mdtraj to be installed on it.
End of explanation
my_generator = MDTrajFeaturizer(pdb_file)
task = my_generator.execute(traj.file('master.dcd'))
project.queue(task)
Explanation: That's it. At least in the simplest form. When you use this to create a Task
End of explanation
def strategy():
# generate some structures...
# yield wait ...
# get a traj object
task = my_generator.execute(traj.outputs('master'))
# wait until the task is done
yield task.is_done
# print the output
output = task.output
# do something with the result, store in the DB, etc...
Explanation: We wait and then the Task object has a .output property which now contains the returned result.
This can now be used in your execution plans...
End of explanation
def execute(self, ...):
t = PythonTask(self)
t.then('handle_my_output')
@staticmethod
def handle_my_output(project, task, data, inputs):
    print('Saving data from task', task, 'into model')
    m = Model(data)
    project.models.add(m)
Explanation: Next, we look at improvements
Better storing of results
Often you want to save the output from your function in the DB in some form or another. Though the output is stored, it is not conveniently accessible unless you know the task that was used.
For this reason there is a callback function you can set that takes care of custom handling of the output. The function to be called needs to be a method of the generator, and you give the task the name of that method. The name (str) of the function can be set using the then() command, and the default name is then_func.
End of explanation
def execute(self, ...):
t = PythonTask(self)
t.then('handle_my_output')
t.store_output = False # default is `True`
Explanation: The function takes exactly 4 parameters
project: the project in which the task was run. Is used to access the database, etc
task: the actual task object that produced the output
data: the output returned by the function
inputs: the input to the python function call (internally). The data actually transmitted to the worker to run
Like in the above example you can do whatever you want with your data: store it, alter it, write it to a file, etc. In case you do not want to additionally save the output (data) in the DB as an object, you can tell the task not to by setting
End of explanation
my_generator.execute(traj)
Explanation: in that case .output will stay None even after execution
Working with Trajectory files and getting their properties
Note that you always have to write file generation and file analysis/reading that match. We only store some very general properties of objects with them, e.g. a stride for trajectories. This means you cannot arbitrarily mix code for these.
Now we want this to work
End of explanation
def __init__(self, outtype, pdb_file=None):
super(PyEMMAAnalysis, self).__init__()
# we store a str that holds the name of the outputtype
# this must match the definition
self.outtype = outtype
# ...
def execute(self, traj, *args, **kwargs):
t = PythonTask(self)
# ...
file_location = traj.outputs(self.outtype) # get the trajectory file matching outtype
# use the file_location.
# ...
Explanation: This is rather simple: All you need to do is to extract the actual files from the trajectory object.
End of explanation
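A hypothetical usage sketch for this outtype-aware variant (names follow the skeleton above; `traj` is assumed to be a Trajectory already known to the project):
```python
# Featurize the 'master' output file of a trajectory
my_generator = MDTrajFeaturizer(outtype='master', pdb_file=pdb_file)
task = my_generator.execute(traj)
project.queue(task)
```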
class MyStorableObject(StorableMixin):
def __init__(self, state):
self.state = state
Explanation: Important! You have no access to the Trajectory object in our remote function. These will be converted
to a real path relative to the working directory. This makes sure that you will not have to deal with
prefixes, etc. This might change in the future, but for now the scripts are considered independent of adaptivemd!
Problem with saving your generator to the DB
This is not complicated, but you need to briefly learn about the mechanism used to store complex Python objects in the DB. The general way to store an instance of a class requires you to subclass from adaptivemd.mongodb.StorableMixin. This provides the class with a __uuid__ attribute that is a unique number for each storable object, given at creation time. (If we just stored objects using pymongo we would get a number like this, but only later.) Secondly, it adds two functions
to_dict(): this converts the (immutable) state of the object into a dictionary that is simple enough that it can be stored. Simple enough means that you can have Python primitives, things like numpy arrays or even other storable objects, but not arbitrary objects in it, like lambda constructs (these are possible but need special treatment)
from_dict(): The reverse. It takes the dictionary from to_dict and must return an equivalent object!
So, you can do
clone = obj.__class__.from_dict(obj.to_dict())
and get an equal object in that it has the same attributes. You could also say a deep copy.
This is not always trivial and there exists a default implementation, which will make an additional assumption:
All necessary attributes have the same parameters in __init__. So, this would correspond to this rule
End of explanation
class MyStorableObject(StorableMixin):
def __init__(self, initial_state):
self.state = initial_state
Explanation: while this would not work
End of explanation
# fix `to_dict` to match default `from_dict`
class MyStorableObject(StorableMixin):
def __init__(self, initial_state):
self.state = initial_state
def to_dict(self):
return {
'initial_state': self.state
}
# fix `from_dict` to match default `to_dict`
class MyStorableObject(StorableMixin):
def __init__(self, initial_state):
self.state = initial_state
@classmethod
def from_dict(cls, dct):
return cls(initial_state=dct['state'])
# fix both `from_dict` and `to_dict`
class MyStorableObject(StorableMixin):
def __init__(self, initial_state):
self.state = initial_state
def to_dict(self):
return {
'my_state': self.state
}
@classmethod
def from_dict(cls, dct):
return cls(initial_state=dct['my_state'])
Explanation: In the second case you need to overwrite the default function. All of these will work
End of explanation
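A small illustration of the to_dict/from_dict round trip for the last class variant above (purely a sanity check, assuming StorableMixin is importable in this session):
```python
obj = MyStorableObject('some state')
clone = MyStorableObject.from_dict(obj.to_dict())
assert clone.state == obj.state  # an equal object reconstructed from its dict form
```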
class MyStorableObject(StorableMixin):
@classmethod
def from_dict(cls, dct):
obj = super(MyStorableObject, cls).from_dict(dct)
obj.missing_attr1 = dct['missing_attr_key1']
return obj
def to_dict(self):
        dct = super(MyStorableObject, self).to_dict()
dct.update({
'missing_attr_key1': self.missing_attr1
})
return dct
Explanation: If you do that, make sure that you really capture all variables. Especially if you subclass from an existing one. You can use super to access the result from the parent class
End of explanation
@classmethod
def from_dict(cls, dct):
obj = cls.__new__(cls)
StorableMixin.__init__(obj)
obj._items = dct['_items']
obj.initial_staging = dct['initial_staging']
return obj
def to_dict(self):
return {
'_items': self._items,
'initial_staging': self.initial_staging
}
Explanation: This is the recommended way to build your custom functions. For completeness we show here what the base TaskGenerator class will do
End of explanation
project.close()
Explanation: The only unfamiliar part is the
py
obj = cls.__new__(cls)
StorableMixin.__init__(obj)
which needs a little explanation.
In most __init__ functions for a TaskGenerator you will construct the initial_staging attribute with some functions. If you would reconstruct by just calling the constructor with the same parameters again, this would result in an equal object as expected and that would work, but not in all regards as expected: The problem is that if you generate objects that can be stored, these will get new UUIDs and hence are considered different from the ones that you wanted to store. In short, the construction in the __init__ prevents you from getting the real old object back, you always construct something new.
This can be solved by not using __init__ but creating an empty object using __new__ and then fixing all attributes to the original state. This is very similar to __setstate__ which we do not use in general to still allow using __init__ which makes sense in most cases where not storable objects are generated.
In the following we discuss an existing generator
A simple generator
A word about this example: while a Task can be created and configured directly, a new class in adaptivemd needs to be part of the project. So we will discuss the essential parts of the existing code.
A generator is in essence a factory to create Task objects with a single command. A generator can be initialized with certain files that the created tasks will always need, like an engine will need a topology for each task, etc. It also (as explained briefly before in Example 4) knows about certain callback behaviour of their tasks. Last, a generator allows you to assign a worker only to tasks that were created by a generator.
The execution structure
Let's look at the code of the PyEMMAAnalysis
```py
class PyEMMAAnalysis(Analysis):
def init(self, pdb_file):
super(PyEMMAAnalysis, self).init()
self['pdb_file'] = pdb_file
stage = pdb_file.transfer('staging:///')
self['pdb_file_stage'] = stage.target
self.initial_staging.append(stage)
@staticmethod
def then_func(project, task, model, inputs):
# add the input arguments for later reference
model.data['input']['trajectories'] = inputs['files']
model.data['input']['pdb'] = inputs['topfile']
project.models.add(model)
def execute(
self,
trajectories,
tica_lag=2,
tica_dim=2,
msm_states=5,
msm_lag=2,
stride=1):
t = PythonTask(self)
input_pdb = t.link(self['pdb_file_stage'], 'input.pdb')
t.call(
remote_analysis,
trajectories=list(trajectories),
topfile=input_pdb,
tica_lag=tica_lag,
tica_dim=tica_dim,
msm_states=msm_states,
msm_lag=msm_lag,
stride=stride
)
return t
```
```py
def init(self, pdb_file):
# don't forget to call super
super(PyEMMAAnalysis, self).init()
# a generator also acts like a dictionary for files
# this way you can later access certain files you might need
# save the pdb_file under the same name
self['pdb_file'] = pdb_file
# this creates a transfer action like it is used in tasks
# and moves the passed pdb_file (usually on the local machein)
# to the staging_area root directory
stage = pdb_file.transfer('staging:///')
# and the new target file (which is also like the original)
# on the staging_area is saved unter `pdb_file_stage`
# so, we can access both files if we wanted to
# note that the original file most likely is in the DB
# so we could just skip the stage transfer completely
self['pdb_file_stage'] = stage.target
# last we add this transfer to the initial_staging which
# is done only once per used generator
self.initial_staging.append(stage)
```
```py
the kwargs is to keep the exmaple short, you should use explicit
parameters and add appropriate docs
def execute(self, trajectories, **kwargs):
# create the task and set the generator to self, our new generator
t = PythonTask(self)
# we want to copy the staged file to the worker directory
# and name it `input.pdb`
input_pdb = t.link(self['pdb_file_stage'], 'input.pdb')
# if you chose not to use the staging file and copy it directly you
# would use in analogy
# input_pdb = t.link(self['pdb_file'], 'input.pdb')
# finally we use `.call` and want to call the `remote_analysis` function
# which we imported earlier from somewhere
t.call(
remote_analysis,
trajectories=list(trajectories),
**kwargs
)
return t
```
And finally a call_back function. The name then_func is the default function name to be called.
```py
we use a static method, but you can of course write a normal method
@staticmethod
the call_backs take these arguments in this order
the second parameter is actually a Model object in this case
which has a .data attribute
def then_func(project, task, model, inputs):
# add the input arguments for later reference to the model
model.data['input']['trajectories'] = inputs['kwargs']['files']
model.data['input']['pdb'] = inputs['kwargs']['topfile']
# and save the model in the project
project.models.add(model)
```
A brief summary and things you need to set to make your generator work
```py
class MyGenerator(Analysis):
def init(self, {things your generator always needs}):
super(MyGenerator, self).init()
# Add input files to self
self['file1'] = file1
# stage all files to the staging area of you want to keep these
# files on the HPC
for fn in ['file1', 'file2', ...]:
stage = self[fn].transfer('staging:///')
self[fn + '_stage'] = stage.target
self.initial_staging.append(stage)
@staticmethod
def then_func(project, task, outputs, inputs):
# do something with input and outputs
# store something in your project
def task_using_python_rpc(
self,
{arguments}):
t = PythonTask(self)
# set any task dependencies if you need
t.dependencies = []
input1 = t.link(self['file1'], 'alternative_name1')
input2 = t.link(self['file2'], 'alternative_name2')
...
# add whatever bash stuff you need BEFORE the function call
t.append('some bash command')
...
# use input1, etc in your function call if you like. It will
# be converted to a regular file location you can use
t.call(
{my_remote_python_function},
files=list(files),
)
# add whatever bash stuff you need AFTER the function call
t.append('some bash command')
...
return t
def task_using_bash_argument_call(
self,
{arguments}):
t = Task(self)
# set any task dependencies if you need
t.dependencies = []
input1 = t.link(self['file1'], 'alternative_name1')
input2 = t.link(self['file2'], 'alternative_name2')
...
# add more staging
t.append({action})
...
# add whatever bash stuff you want to do
t.append('some bash command')
...
# add whatever staging stuff you need AFTER the function call
t.append({action})
...
return t
```
The simplified code for the OpenMMEngine
```py
class OpenMMEngine(Engine):
trajectory_ext = 'dcd'
def __init__(self, system_file, integrator_file, pdb_file, args=None):
super(OpenMMEngine, self).__init__()
self['pdb_file'] = pdb_file
self['system_file'] = system_file
self['integrator_file'] = integrator_file
self['_executable_file'] = exec_file
for fn in self.files:
stage = self[fn].transfer(Location('staging:///'))
self[name + '_stage'] = stage.target
self.initial_staging.append(stage)
if args is None:
args = '-p CPU --store-interval 1'
self.args = args
# this one only works if you start from a file
def task_run_trajectory_from_file(self, target):
# we create a special Task, that has some additional functionality
t = TrajectoryGenerationTask(self, target)
# link all the files we require
initial_pdb = t.link(self['pdb_file_stage'], Location('initial.pdb'))
t.link(self['system_file_stage'])
t.link(self['integrator_file_stage'])
t.link(self['_executable_file_stage'])
# use the initial PDB to be used
input_pdb = t.get(target.frame, 'coordinates.pdb')
# this represents our output trajectory
output = Trajectory('traj/', target.frame, length=target.length, engine=self)
# create the directory so openmmrun can write to it
t.touch(output)
# build the actual bash command
cmd = 'python openmmrun.py {args} -t {pdb} --length {length} {output}'.format(
pdb=input_pdb,
length=target.length,
output=output,
args=self.args,
)
t.append(cmd)
# copy the resulting trajectory directory back to the staging area
t.put(output, target)
return t
```
End of explanation |
11,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Assignment
<a href="#Problem-1">Problem 1</a>
<a href="#Problem-2">Problem 2</a>
<a href="#Part-A">Part A</a>
<a href="#Part-B">Part B</a>
<a href="#Part-C">Part C</a>
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below
Step1:
Step3: Problem 1
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
Step4: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does
Step6: Problem 2
Part A
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
Step7: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get
Step8: Part B
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
YOUR ANSWER HERE
Part C
Create a plot of the sum of squares for $n=1$ to $n=15$. Make sure to appropriately label the $x$-axis and $y$-axis, and to give the plot a title. Set the $x$-axis limits to be 1 (minimum) and 15 (maximum). | Python Code:
NAME = ""
COLLABORATORS = ""
Explanation: Example Assignment
<a href="#Problem-1">Problem 1</a>
<a href="#Problem-2">Problem 2</a>
<a href="#Part-A">Part A</a>
<a href="#Part-B">Part B</a>
<a href="#Part-C">Part C</a>
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
End of explanation
# import plotting libraries
%matplotlib inline
import matplotlib.pyplot as plt
Explanation:
End of explanation
def squares(n):
    """Compute the squares of numbers from 1 to n, such that the
    ith element of the returned list equals i^2."""
# YOUR CODE HERE
raise NotImplementedError
Explanation: Problem 1
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation
squares(10)
Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation
def sum_of_squares(n):
    """Compute the sum of the squares of numbers from 1 to n."""
# YOUR CODE HERE
raise NotImplementedError
Explanation: Problem 2
Part A
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
End of explanation
sum_of_squares(10)
Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation
fig, ax = plt.subplots() # do not delete this line!
# YOUR CODE HERE
raise NotImplementedError
Explanation: Part B
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
YOUR ANSWER HERE
Part C
Create a plot of the sum of squares for $n=1$ to $n=15$. Make sure to appropriately label the $x$-axis and $y$-axis, and to give the plot a title. Set the $x$-axis limits to be 1 (minimum) and 15 (maximum).
End of explanation |
11,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Text Data
Step2: Tokenize Words
Step3: Tokenize Sentences | Python Code:
# Load library
from nltk.tokenize import word_tokenize, sent_tokenize
Explanation: Title: Tokenize Text
Slug: tokenize_text
Summary: How to tokenize text from unstructured text data for machine learning in Python.
Date: 2016-09-08 12:00
Category: Machine Learning
Tags: Preprocessing Text
Authors: Chris Albon
Preliminaries
End of explanation
# Create text
string = "The science of today is the technology of tomorrow. Tomorrow is today."
Explanation: Create Text Data
End of explanation
# Tokenize words
word_tokenize(string)
Explanation: Tokenize Words
End of explanation
# Tokenize sentences
sent_tokenize(string)
Explanation: Tokenize Sentences
End of explanation |
11,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kalman filter for altitude estimation from accelerometer and sonar
I) TRAJECTORY
We assume sinusoidal trajectory
Step1: II) MEASUREMENTS
Sonar
Step2: Baro
Step3: GPS
Step4: GPS velocity
Step5: Acceleration
Step6: III) PROBLEM FORMULATION
State vector
$$x_{k} = \left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right]
= \matrix{ \text{Altitude} \\ \text{Height above ground} \\ \text{Vertical speed} \\ \text{Accelerometer bias} }$$
Input vector
$$ u_{k} = \left[ \matrix{ \ddot z } \right] = \text{Accelerometer} $$
Formal definition (Law of motion)
Step7: Initial uncertainty $P_0$
Step8: Dynamic matrix $A$
Step9: Disturbance Control Matrix $B$
Step10: Measurement Matrix $H$
Step11: Measurement noise covariance $R$
Step12: Process noise covariance $Q$
Step13: Identity Matrix
Step14: Input
Step15: V) TEST
Filter loop
Step16: VI) PLOT
Altitude $z$ | Python Code:
m = 10000 # timesteps
dt = 1/ 250.0 # update loop at 250Hz
t = np.arange(m) * dt
freq = 0.1 # Hz
amplitude = 0.5 # meter
alt_true = 405 + amplitude * np.cos(2 * np.pi * freq * t)
height_true = 5 + amplitude * np.cos(2 * np.pi * freq * t)
vel_true = - amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc_true = - amplitude * (2 * np.pi * freq)**2 * np.cos(2 * np.pi * freq * t)
plt.plot(t, height_true)
plt.plot(t, vel_true)
plt.plot(t, acc_true)
plt.legend(['elevation', 'velocity', 'acceleration'], loc='best')
plt.xlabel('time')
Explanation: Kalman filter for altitude estimation from accelerometer and sonar
I) TRAJECTORY
We assume sinusoidal trajectory
End of explanation
sonar_sampling_period = 1 / 10.0 # sonar reading at 10Hz
# Sonar noise
sigma_sonar_true = 0.05 # in meters
sonar_step = int(sonar_sampling_period / dt)  # integer stride needed for slicing/randn in Python 3
meas_sonar = height_true[::sonar_step] + sigma_sonar_true * np.random.randn(m // sonar_step)
t_meas_sonar = t[::sonar_step]
plt.plot(t_meas_sonar, meas_sonar, 'or')
plt.plot(t, height_true)
plt.legend(['Sonar measure', 'Elevation (true)'])
plt.title("Sonar measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: II) MEASUREMENTS
Sonar
End of explanation
baro_sampling_period = 1 / 10.0 # baro reading at 10Hz
# Baro noise
sigma_baro_true = 2.0 # in meters
baro_step = int(baro_sampling_period / dt)  # integer stride
meas_baro = alt_true[::baro_step] + sigma_baro_true * np.random.randn(m // baro_step)
t_meas_baro = t[::baro_step]
plt.plot(t_meas_baro, meas_baro, 'or')
plt.plot(t, alt_true)
plt.title("Baro measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: Baro
End of explanation
gps_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gps_true = 5.0 # in meters
gps_step = int(gps_sampling_period / dt)  # integer stride
meas_gps = alt_true[::gps_step] + sigma_gps_true * np.random.randn(m // gps_step)
t_meas_gps = t[::gps_step]
plt.plot(t_meas_gps, meas_gps, 'or')
plt.plot(t, alt_true)
plt.title("GPS measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: GPS
End of explanation
gpsvel_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gpsvel_true = 10.0 # in meters/s
meas_gpsvel = vel_true[::gps_step] + sigma_gpsvel_true * np.random.randn(m // gps_step)
t_meas_gps = t[::gps_step]
plt.plot(t_meas_gps, meas_gpsvel, 'or')
plt.plot(t, vel_true)
plt.title("GPS velocity measurement")
plt.xlabel('time (s)')
plt.ylabel('vel (m/s)')
Explanation: GPS velocity
End of explanation
sigma_acc_true = 0.2 # in m.s^-2
acc_bias = 1.5
meas_acc = acc_true + sigma_acc_true * np.random.randn(m) + acc_bias
plt.plot(t, meas_acc, '.')
plt.plot(t, acc_true)
plt.title("Accelerometer measurement")
plt.xlabel('time (s)')
plt.ylabel('acc ($m.s^{-2}$)')
Explanation: Acceleration
End of explanation
x = np.matrix([0.0, 0.0, 0.0, 0.0]).T
print(x, x.shape)
Explanation: III) PROBLEM FORMULATION
State vector
$$x_{k} = \left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right]
= \matrix{ \text{Altitude} \\ \text{Height above ground} \\ \text{Vertical speed} \\ \text{Accelerometer bias} }$$
Input vector
$$ u_{k} = \left[ \matrix{ \ddot z } \right] = \text{Accelerometer} $$
Formal definition (Law of motion):
$$ x_{k+1} = \textbf{A} \cdot x_{k} + \textbf{B} \cdot u_{k} $$
$$ x_{k+1} = \left[
\matrix{ 1 & 0 & \Delta t & \frac{1}{2} \Delta t^2
\\ 0 & 1 & \Delta t & \frac{1}{2} \Delta t^2
\\ 0 & 0 & 1 & \Delta t
\\ 0 & 0 & 0 & 1 } \right]
\cdot
\left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right]
+ \left[ \matrix{ \frac{1}{2} \Delta t^2 \\ \frac{1}{2} \Delta t^2 \\ \Delta t \\ 0 } \right]
\cdot
\left[ \matrix{ \ddot z } \right] $$
Measurement
$$ y = H \cdot x $$
$$ \left[ \matrix{ y_{sonar} \\ y_{baro} \\ y_{gps} \\ y_{gpsvel} } \right]
= \left[ \matrix{ 0 & 1 & 0 & 0
\\ 1 & 0 & 0 & 0
\\ 1 & 0 & 0 & 0
\\ 0 & 0 & 1 & 0 } \right] \cdot \left[ \matrix{ z \\ h \\ \dot z \\ \zeta } \right] $$
Measures are done separately according to the refresh rate of each sensor
We measure the height from sonar
$$ y_{sonar} = H_{sonar} \cdot x $$
$$ y_{sonar} = \left[ \matrix{ 0 & 1 & 0 & 0 } \right] \cdot x $$
We measure the altitude from barometer
$$ y_{baro} = H_{baro} \cdot x $$
$$ y_{baro} = \left[ \matrix{ 1 & 0 & 0 & 0 } \right] \cdot x $$
We measure the altitude from gps
$$ y_{gps} = H_{gps} \cdot x $$
$$ y_{gps} = \left[ \matrix{ 1 & 0 & 0 & 0 } \right] \cdot x $$
We measure the velocity from gps
$$ y_{gpsvel} = H_{gpsvel} \cdot x $$
$$ y_{gpsvel} = \left[ \matrix{ 0 & 0 & 1 & 0 } \right] \cdot x $$
IV) IMPLEMENTATION
Initial state $x_0$
End of explanation
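Before building the matrices, here is a tiny standalone numeric check of the prediction model written above (the state values are arbitrary, and this snippet is not used by the filter below):
```python
import numpy as np
_dt = 1 / 250.0
_A = np.matrix([[1.0, 0.0, _dt, 0.5*_dt**2],
                [0.0, 1.0, _dt, 0.5*_dt**2],
                [0.0, 0.0, 1.0, _dt       ],
                [0.0, 0.0, 0.0, 1.0       ]])
_B = np.matrix([[0.5*_dt**2], [0.5*_dt**2], [_dt], [0.0]])
_x = np.matrix([405.0, 5.0, 0.0, 0.0]).T   # [altitude, height, vertical speed, accelerometer bias]
print(_A * _x + _B * 0.0)                   # one prediction step with zero measured acceleration
```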
P = np.diag([100.0, 100.0, 100.0, 100.0])
print(P, P.shape)
Explanation: Initial uncertainty $P_0$
End of explanation
dt = 1 / 250.0 # Time step between filter steps (update loop at 250Hz)
A = np.matrix([[1.0, 0.0, dt, 0.5*dt**2],
[0.0, 1.0, dt, 0.5*dt**2],
[0.0, 0.0, 1.0, dt ],
[0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
Explanation: Dynamic matrix $A$
End of explanation
B = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt ],
[0.0]])
print(B, B.shape)
Explanation: Disturbance Control Matrix $B$
End of explanation
H_sonar = np.matrix([[0.0, 1.0, 0.0, 0.0]])
print(H_sonar, H_sonar.shape)
H_baro = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_baro, H_baro.shape)
H_gps = np.matrix([[1.0, 0.0, 0.0, 0.0]])
print(H_gps, H_gps.shape)
H_gpsvel = np.matrix([[0.0, 0.0, 1.0, 0.0]])
print(H_gpsvel, H_gpsvel.shape)
Explanation: Measurement Matrix $H$
End of explanation
# sonar
sigma_sonar = sigma_sonar_true # sonar noise
R_sonar = np.matrix([[sigma_sonar**2]])
print(R_sonar, R_sonar.shape)
# baro
sigma_baro = sigma_baro_true # sonar noise
R_baro = np.matrix([[sigma_baro**2]])
print(R_baro, R_baro.shape)
# gps
sigma_gps = sigma_gps_true # sonar noise
R_gps = np.matrix([[sigma_gps**2]])
print(R_gps, R_gps.shape)
# gpsvel
sigma_gpsvel = sigma_gpsvel_true # sonar noise
R_gpsvel = np.matrix([[sigma_gpsvel**2]])
print(R_gpsvel, R_gpsvel.shape)
Explanation: Measurement noise covariance $R$
End of explanation
from sympy import Symbol, Matrix, latex
from sympy.interactive import printing
printing.init_printing()
dts = Symbol('\Delta t')
s1 = Symbol('\sigma_1') # drift of accelerometer bias
Qs = Matrix([[0.5*dts**2], [0.5*dts**2], [dts], [1.0]])
Qs*Qs.T*s1**2
sigma_acc_drift = 0.0001
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[1.0]])
Q = G*G.T*sigma_acc_drift**2
print(Q, Q.shape)
Explanation: Process noise covariance $Q$
End of explanation
I = np.eye(4)
print(I, I.shape)
Explanation: Identity Matrix
End of explanation
u = meas_acc
print(u, u.shape)
Explanation: Input
End of explanation
# Re init state
# State
x[0] = 300.0
x[1] = 5.0
x[2] = 0.0
x[3] = 0.0
# Estimate covariance
P[0,0] = 100.0
P[1,1] = 100.0
P[2,2] = 100.0
P[3,3] = 100.0
# Preallocation for Plotting
# estimate
zt = []
ht = []
dzt= []
zetat=[]
# covariance
Pz = []
Ph = []
Pdz= []
Pzeta=[]
# kalman gain
Kz = []
Kh = []
Kdz= []
Kzeta=[]
for filterstep in range(m):
# ========================
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x + B*u[filterstep]
# Project the error covariance ahead
P = A*P*A.T + Q
# ===============================
# Measurement Update (Correction)
# ===============================
# Sonar (only at the beginning, ex take off)
if filterstep%25 == 0 and (filterstep <2000 or filterstep>9000):
# Compute the Kalman Gain
S_sonar = H_sonar*P*H_sonar.T + R_sonar
K_sonar = (P*H_sonar.T) * np.linalg.pinv(S_sonar)
# Update the estimate via z
Z_sonar = meas_sonar[filterstep//25]
y_sonar = Z_sonar - (H_sonar*x) # Innovation or Residual
x = x + (K_sonar*y_sonar)
# Update the error covariance
P = (I - (K_sonar*H_sonar))*P
# Baro
if filterstep%25 == 0:
# Compute the Kalman Gain
S_baro = H_baro*P*H_baro.T + R_baro
K_baro = (P*H_baro.T) * np.linalg.pinv(S_baro)
# Update the estimate via z
Z_baro = meas_baro[filterstep//25]
y_baro = Z_baro - (H_baro*x) # Innovation or Residual
x = x + (K_baro*y_baro)
# Update the error covariance
P = (I - (K_baro*H_baro))*P
# GPS
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gps = H_gps*P*H_gps.T + R_gps
K_gps = (P*H_gps.T) * np.linalg.pinv(S_gps)
# Update the estimate via z
Z_gps = meas_gps[filterstep//250]
y_gps = Z_gps - (H_gps*x) # Innovation or Residual
x = x + (K_gps*y_gps)
# Update the error covariance
P = (I - (K_gps*H_gps))*P
# GPSvel
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gpsvel = H_gpsvel*P*H_gpsvel.T + R_gpsvel
K_gpsvel = (P*H_gpsvel.T) * np.linalg.pinv(S_gpsvel)
# Update the estimate via z
Z_gpsvel = meas_gpsvel[filterstep//250]
y_gpsvel = Z_gpsvel - (H_gpsvel*x) # Innovation or Residual
x = x + (K_gpsvel*y_gpsvel)
# Update the error covariance
P = (I - (K_gpsvel*H_gpsvel))*P
# ========================
# Save states for Plotting
# ========================
zt.append(float(x[0]))
ht.append(float(x[1]))
dzt.append(float(x[2]))
zetat.append(float(x[3]))
Pz.append(float(P[0,0]))
Ph.append(float(P[1,1]))
Pdz.append(float(P[2,2]))
Pzeta.append(float(P[3,3]))
# Kz.append(float(K[0,0]))
# Kdz.append(float(K[1,0]))
# Kzeta.append(float(K[2,0]))
Explanation: V) TEST
Filter loop
End of explanation
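A quick post-run diagnostic before plotting (assumes the lists filled in the loop above): root-mean-square errors of the estimates against the simulated truth.
```python
print('height RMSE   [m]  :', np.sqrt(np.mean((np.array(ht) - height_true)**2)))
print('altitude RMSE [m]  :', np.sqrt(np.mean((np.array(zt) - alt_true)**2)))
print('velocity RMSE [m/s]:', np.sqrt(np.mean((np.array(dzt) - vel_true)**2)))
```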
plt.figure(figsize=(17,15))
plt.subplot(321)
plt.plot(t, zt, color='b')
plt.fill_between(t, np.array(zt) - 10* np.array(Pz), np.array(zt) + 10*np.array(Pz), alpha=0.2, color='b')
plt.plot(t, alt_true, 'g')
plt.plot(t_meas_baro, meas_baro, '.r')
plt.plot(t_meas_gps, meas_gps, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([405 - 50 * amplitude, 405 + 30 * amplitude])
plt.legend(['estimate', 'true altitude', 'baro reading', 'gps reading', 'sonar switched off/on'], loc='lower right')
plt.title('Altitude')
plt.subplot(322)
plt.plot(t, ht, color='b')
plt.fill_between(t, np.array(ht) - 10* np.array(Ph), np.array(ht) + 10*np.array(Ph), alpha=0.2, color='b')
plt.plot(t, height_true, 'g')
plt.plot(t_meas_sonar, meas_sonar, '.r')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
# plt.ylim([5 - 1.5 * amplitude, 5 + 1.5 * amplitude])
plt.ylim([5 - 10 * amplitude, 5 + 10 * amplitude])
plt.legend(['estimate', 'true height above ground', 'sonar reading', 'sonar switched off/on'])
plt.title('Height')
plt.subplot(323)
plt.plot(t, dzt, color='b')
plt.fill_between(t, np.array(dzt) - 10* np.array(Pdz), np.array(dzt) + 10*np.array(Pdz), alpha=0.2, color='b')
plt.plot(t, vel_true, 'g')
plt.plot(t_meas_gps, meas_gpsvel, 'ok')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([0 - 10.0 * amplitude, + 10.0 * amplitude])
plt.legend(['estimate', 'true velocity', 'gps_vel reading', 'sonar switched off/on'])
plt.title('Velocity')
plt.subplot(324)
plt.plot(t, zetat, color='b')
plt.fill_between(t, np.array(zetat) - 10* np.array(Pzeta), np.array(zetat) + 10*np.array(Pzeta), alpha=0.2, color='b')
plt.plot(t, -acc_bias * np.ones_like(t), 'g')
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.ylim([-2.0, 1.0])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Acc bias')
plt.subplot(325)
plt.plot(t, Pz)
plt.plot(t, Ph)
plt.plot(t, Pdz)
plt.ylim([0, 1.0])
plt.plot([t[2000], t[2000]], [-1000, 1000], '--k')
plt.plot([t[9000], t[9000]], [-1000, 1000], '--k')
plt.legend(['Altitude', 'Height', 'Velocity', 'sonar switched off/on'])
plt.title('Incertitudes')
Explanation: VI) PLOT
Altitude $z$
End of explanation |
11,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading CTD data with PySeabird
Author
Step1: Let's first download an example file with some CTD data
Step2: The profile dPIRX003.cnv.OK was loaded with the default rule cnv.yaml
Step3: We have latitude in the header, and pressure in the data. | Python Code:
%matplotlib inline
from seabird.cnv import fCNV
from gsw import z_from_p
Explanation: Reading CTD data with PySeabird
Author: Guilherme Castelão
pySeabird is a package to parse and load CTD data files. It should be an easy task, but the problem is that the format has been changing over time. Working with data from multiple ships and cruises first requires understanding each file and normalizing it into a common format before the analysis can start. That can still be done with a few general regular-expression rules, but I would rather use strict rules: if I'm loading hundreds or thousands of profiles, I want to be sure that no mistake slipped by. I would rather ignore a file in doubt and warn about it than believe it was loaded correctly and let it become part of my analysis.
With that in mind, I wrote this package with the ability to load multiple rules, so new rules can be added without changing the main engine.
For more information, check the documentation.
End of explanation
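That skip-and-warn approach extends naturally to a whole directory of profiles. A minimal sketch, assuming a hypothetical data/*.cnv layout and a deliberately broad except clause (both are illustrations, not part of the package API):
import glob
profiles = {}
for fname in glob.glob('data/*.cnv'):
    try:
        profiles[fname] = fCNV(fname)  # parse with the strict rules
    except Exception as err:
        # A file in doubt is skipped and reported rather than silently trusted
        print("Skipping %s: %s" % (fname, err))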
!wget https://raw.githubusercontent.com/castelao/seabird/master/sampledata/CTD/dPIRX003.cnv
profile = fCNV('dPIRX003.cnv')
Explanation: Let's first download an example file with some CTD data
End of explanation
print("Header: %s" % profile.attributes.keys())
print("Data: %s" % profile.keys())
Explanation: The profile dPIRX003.cnv.OK was loaded with the default rule cnv.yaml
End of explanation
z = z_from_p(profile['PRES'], profile.attributes['LATITUDE'])
from matplotlib import pyplot as plt
plt.plot(profile['TEMP'], z,'b')
plt.plot(profile['TEMP2'], z,'g')
plt.xlabel('temperature')
plt.ylabel('depth')
plt.title(profile.attributes['filename'])
Explanation: We have latitude in the header, and pressure in the data.
End of explanation |
11,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In general your solutions are more elegant.
Great use of available libraries
Like your solutions for day3 (spiral memory), day11 (hex grid)
Day 1
Smart, clean and elegant.
Step1: Day 3
Well done
Step2: Day 5
Although it can be better to explicitly check, you can use exceptions to capture exceptional behaviour.
Note that your code is more robust, and pos=-1 will work even when it shouldn't
Step3: Day 6
It can be difficult to decide when classes are useful
Step4: Day 7
Assertions should not be used in normal program execution because they can be disabled
You can define your own exceptions easily
Step5: Day 8
Exec is a good idea (and makes the code really simple), but there are also other solutions.
Same for day 18
Step6: Day 12
Using continue is more explicit than pass
Step7: Day 13
Do not make complex parsers if not needed
Step9: And you do not need to do too many things
Step10: Day 15
Generators are an easier way to make iterators. In the end the result is similar
Step11: Day 16
Regex are expensive, even if you do not "precompile" them
Step13: Day 22
Useless day just takes memory and time
Step14: Day 23
Just different approaches | Python Code:
digits = '91212129'
L = len(digits)
sum([int(digits[i]) for i in range(L) if digits[i] == digits[(i+1) % L]])
def solve(captcha):
captcha = list(map(int, captcha))
prev_val = captcha[-1]
repeated = 0
for v in captcha:
if v == prev_val:
repeated += v
prev_val = v
return repeated
solve(digits)
Explanation: In general your solutions are more elegant.
Great use of available libraries
Like your solutions for day3 (spiral memory), day11 (hex grid)
Day 1
Smart, clean and elegant.
End of explanation
import numpy as np
def number_to_coordinates(n):
q = int(np.sqrt(n))
r = n - q ** 2
if q % 2 != 0:
x = (q - 1) // 2 + min(1, r) + min(q - r + 1, 0)
y = - (q - 1) // 2 + min(max(r - 1, 0), q)
else:
x = 1 - (q // 2) - min(1, r) - min(q - r + 1, 0)
y = q // 2 - min(max(r - 1, 0), q)
return x, y
def spiral_manhattan(n):
x, y = number_to_coordinates(n)
return abs(x) + abs(y)
spiral_manhattan(1024)
import math
def get_side_size(point):
side_size = math.ceil(math.sqrt(point))
if side_size % 2 == 0:
side_size += 1
return side_size
def get_displacement(point, ring):
distances = []
for i in [1,3,5,7]:
distances.append(abs(point-i*ring))
return min(distances)
def distance(point):
if point == 1:
return 0
else:
side_size = get_side_size(point)
radius = (side_size - 1) // 2
rescaled = point - (side_size-2)**2
displacement = get_displacement(rescaled, radius)
return displacement + radius
distance(1024)
Explanation: Day 3
Well done
End of explanation
instructions = [0, 3, 0, 1, -3]
class Maze(object):
def __init__(self, curr_pos, state):
self.curr_pos = curr_pos
self.state = state.copy()
self.length = len(self.state)
def evolve(self):
self.state[self.curr_pos] += 1
self.curr_pos += self.state[self.curr_pos] - 1
def outside(self):
return (self.curr_pos >= self.length) or (self.curr_pos < 0)
def steps_maze(l):
maze = Maze(0, l)
count = 0
while not maze.outside():
maze.evolve()
count += 1
return count, maze.state
steps_maze(instructions)
def steps2exit(instructions):
position = 0
steps = 0
try:
while True:
jump = instructions[position]
instructions[position] = jump + 1
position += jump
steps += 1
except IndexError:
return steps
steps2exit(instructions)
%%timeit
steps_maze(instructions)
%%timeit
steps2exit(instructions)
Explanation: Day 5
Although it can be better to explicitly check, you can use exceptions to capture exceptional behaviour.
Note that your code is more robust, and pos=-1 will work even when it shouldn't
End of explanation
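To see why pos=-1 slips through the try/except version: Python's negative indices count from the end instead of raising IndexError, so only positions past the right edge are caught. A small illustration, independent of the puzzle input:
offsets = [0, 3, 0, 1, -3]
print(offsets[-1])       # -3: a negative position silently indexes from the end
try:
    offsets[len(offsets)]
except IndexError:
    print('only positions past the end raise IndexError')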
banks = [0, 2, 7, 0]
def reallocate(val, pos, n):
l = [val // n] * n
r = val % n
for i in range(r):
l[(pos + i + 1) % n] += 1
return l
def update(b):
blocks = sorted(list(enumerate(b)), key=lambda v: (v[1], -v[0]), reverse=True)
pos = blocks[0][0]
val = blocks[0][1]
c = [b[i] if i != pos else 0 for i in range(len(b))]
l = reallocate(val, pos, len(b))
for i, v in enumerate(c):
c[i] += l[i]
return c
def count_until_loop(b):
count = 0
previous = set()
h = hash(tuple(b))
while h not in previous:
previous.add(h)
count += 1
b = update(b)
h = hash(tuple(b))
return count
count_until_loop(banks)
class Memory:
def __init__(self, banks):
self.banks = banks
self.states = []
def _find_fullest(self):
blocks = max(self.banks)
return self.banks.index(blocks), blocks
def _redistribue(self):
pos, blocks = self._find_fullest()
self.banks[pos] = 0
while blocks > 0:
pos += 1
if pos >= len(self.banks):
pos = 0
self.banks[pos] += 1
blocks -= 1
def realloate_till_loop(self):
redistributions = 0
self.states.append(self.banks.copy())
while True:
self._redistribue()
redistributions += 1
configuration = self.banks.copy()
if configuration in self.states:
break
else:
self.states.append(configuration)
return redistributions
Memory(banks).realloate_till_loop()
%%timeit
count_until_loop(banks)
%%timeit
Memory(banks).realloate_till_loop()
Explanation: Day 6
It can be difficult to decide when classes are useful
End of explanation
def pick_cherry(leaves):
while leaves:
leaf = leaves.pop()
parent = parents[leaf]
offspring = children[parent]
try:
for child in offspring:
assert(children[child] == [])
return parent
except AssertionError:
pass
class Unbalanced(Exception):
pass
Explanation: Day 7
Assertions should not be used in normal program execution because they can be disabled
You can define your own exceptions easily
End of explanation
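As a concrete illustration of why the custom exception is preferable: running CPython with python -O strips assert statements, while an explicit raise always runs and can carry a message. A small helper sketch performing the same leaf check with the Unbalanced exception defined above (the helper name is ours, not from the original solution):
def assert_leaf(child, children):
    # Explicit check that survives `python -O`, unlike assert(children[child] == [])
    if children[child] != []:
        raise Unbalanced('%s still has children' % child)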
def apply_instructions(registers):
for reg, leap, cond in instructions:
bool_str = 'registers["{0}"]'.format(cond[0]) + ''.join(cond[1:])
update_str = 'if {0}: registers["{1}"] += {2} '.format(bool_str, reg, leap)
exec(update_str)
import operator
comparisons = {'>': operator.gt, '>=': operator.ge, '<': operator.lt,
'<=': operator.le, '==': operator.eq, '!=': operator.ne}
def process(instruction):
reg, operation, val, condition = parse(instruction)
cond_reg, cond_op, cond_val = condition
if cond_op(registers[cond_reg], cond_val):
registers[reg] = operation(registers[reg], val)
Explanation: Day 8
Exec is a good idea (and makes the code really simple), but there are also other solutions.
Same for day 18
End of explanation
def connected(node, pipes):
neighbors = pipes[node]
pending = list(neighbors)
while pending:
alice = pending.pop(0)
for bob in pipes[alice]:
if bob in neighbors:
pass # ---> continue
else:
neighbors.add(bob)
pending.append(bob)
return neighbors
Explanation: Day 12
Using continue is more explicit than pass
End of explanation
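For reference, the same breadth-first walk with continue instead of the empty pass branch — the behaviour is unchanged, the intent is just stated directly:
def connected(node, pipes):
    neighbors = pipes[node]
    pending = list(neighbors)
    while pending:
        alice = pending.pop(0)
        for bob in pipes[alice]:
            if bob in neighbors:
                continue  # already reachable: skip instead of an empty branch
            neighbors.add(bob)
            pending.append(bob)
    return neighbors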
def parse_scanners(input_file):
scanners = defaultdict(int)
with open(input_file, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
for l in csv_reader:
scanners[int(l[0].rstrip(':'))] = int(l[1].rstrip())
return scanners
def parse(lines):
layers_depth = {}
for line in lines:
l = line.strip().split(': ')
layers_depth[int(l[0])] = int(l[1])
return layers_depth
Explanation: Day 13
Do not make complex parsers if not needed
End of explanation
test_input = """0: 3
1: 2
4: 4
6: 4""".splitlines()
layers = parse(test_input)
import collections
def tick(lrank, time):
r = time % (2 * (lrank - 1))
return (r <= lrank - 1) * r + (r > lrank - 1) * (2 * (lrank - 1) - r)
def get_state(time, scanners):
state = dict(zip(list(scanners.keys()), [0] * len(scanners)))
if time == 0:
return state
elif time > 0:
for t in range(time + 1):
for scanner in scanners:
state[scanner] = tick(scanners[scanner], t)
return state
def trip_severity(scanners):
severity = 0
layers = max(list(scanners.keys()))
for t in range(layers + 1):
if scanners[t] != 0:
tick_before = tick(scanners[t], t)
tick_now = tick(scanners[t], t + 1)
if (tick_before == 0):
severity += scanners[t] * t
return severity
scanners = collections.defaultdict(int)
scanners.update(layers)
trip_severity(scanners)
def severity(layers_depth, start_time=0):
severity_ = 0
for i, depth in layers_depth.items():
if (start_time + i) % ((depth-1) * 2) == 0:
severity_ += i*depth
return severity_
severity(layers)
Explanation: And you do not need to do too many things
End of explanation
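The simplification works because a scanner in a layer of depth d bounces with period 2*(d-1), so it sits at the top exactly when the packet's arrival time is a multiple of that period. A quick check of that claim for the depth-4 layer of the test input:
depth = 4
period = 2 * (depth - 1)                            # 6
print([t for t in range(13) if t % period == 0])    # [0, 6, 12]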
class FancyGen(object):
def __init__(self, start, factor):
self.start = start
self.factor = factor
self.q = 2147483647
def __iter__(self):
self.a = self.start
return self
def __next__(self):
n = (self.a * self.factor) % self.q
self.a = n
return n
def compare_lowest_bits(n, m):
n = n % (2 ** 16)
m = m % (2 ** 16)
return n == m
def duel(starta, startb):
N = 40 * 10 ** 6
count = 0
gena = iter(FancyGen(starta, 16807))
genb = iter(FancyGen(startb, 48271))
for _ in range(N):
if compare_lowest_bits(next(gena), next(genb)):
count += 1
return count
%%timeit
duel(65, 8921)
def generator(start_value, factor):
val = start_value
while True:
val = val * factor % 2147483647
yield val
def compare(start_A, start_B, rounds):
matches = 0
for i, values in enumerate(zip(generator(start_A, 16807), generator(start_B, 48271))):
if i >= rounds:
return matches
else:
vA, vB = values
if vA.to_bytes(100, 'big')[-2:] == vB.to_bytes(100, 'big')[-2:]:
matches += 1
%%timeit
compare(65, 8921, 40*10**6)
Explanation: Day 15
Generators are an easier way to make iterators. In the end the result is similar
End of explanation
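Since the generator function yields an ordinary iterator, the itertools helpers compose with it directly — for instance islice can cap the infinite stream instead of counting rounds by hand. A sketch reusing generator() from above (compare_islice is our name for the variant):
from itertools import islice
def compare_islice(start_A, start_B, rounds):
    matches = 0
    pairs = zip(generator(start_A, 16807), generator(start_B, 48271))
    for vA, vB in islice(pairs, rounds):
        if vA % (2 ** 16) == vB % (2 ** 16):   # same lowest 16 bits
            matches += 1
    return matches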
import re
import numpy as np
import copy
def shuffle(p, moves):
s = copy.copy(p)
for move in moves:
spin = re.search('s(\d+)', move)
swapx = re.search('x(\d+)\/(\d+)', move)
swapp = re.search('p(\w)\/(\w)', move)
if spin:
s = np.roll(s, int(spin.group(1)))
if swapx:
a = int(swapx.group(1))
b = int(swapx.group(2))
s[a], s[b] = s[b], s[a]
if swapp:
a = swapp.group(1)
b = swapp.group(2)
a = ''.join(s).index(a)
b = ''.join(s).index(b)
s[a], s[b] = s[b], s[a]
return ''.join(s)
%%timeit
shuffle(list('abcde'), ['s1', 'x3/4', 'pe/b'])
def parse(instruction):
name = instruction[0]
params = instruction[1:]
if name == 's':
params = [int(params)]
else:
params = params.split('/')
if name == 'x':
params = list(map(int, params))
return name, params
class Programs:
def __init__(self, progs):
self.progs = progs
self.length = len(self.progs)
self.instructions_dict = {'s': self.spin, 'x': self.exchange, 'p': self.partner}
def spin(self, pos):
pos = pos % self.length
if pos > 0:
tmp = self.progs[-pos:]
progs = tmp + self.progs
self.progs = progs[:self.length]
def exchange(self, pos1, pos2):
v1 = self.progs[pos1]
v2 = self.progs[pos2]
self.progs = self.progs[:pos1] + v2 + self.progs[pos1+1:]
self.progs = self.progs[:pos2] + v1 + self.progs[pos2+1:]
def partner(self, prog1, prog2):
self.exchange(self.progs.index(prog1), self.progs.index(prog2))
def dance(self, instructions):
for inst, params in instructions:
self.instructions_dict[inst](*params)
return self.progs
%%timeit
p = Programs('abcde')
p.dance([parse(inst) for inst in ['s1', 'x3/4', 'pe/b']])
import re
import numpy as np
import copy
regex1 = re.compile('s(\d+)')
regex2 = re.compile('x(\d+)\/(\d+)')
regex3 = re.compile('p(\w)\/(\w)')
def shuffle2(p, moves):
s = copy.copy(p)
for move in moves:
spin = regex1.search(move)
swapx = regex2.search(move)
swapp = regex3.search(move)
if spin:
s = np.roll(s, int(spin.group(1)))
if swapx:
a = int(swapx.group(1))
b = int(swapx.group(2))
s[a], s[b] = s[b], s[a]
if swapp:
a = swapp.group(1)
b = swapp.group(2)
a = ''.join(s).index(a)
b = ''.join(s).index(b)
s[a], s[b] = s[b], s[a]
return ''.join(s)
%%timeit
shuffle2(list('abcde'), ['s1', 'x3/4', 'pe/b'])
Explanation: Day 16
Regex are expensive, even if you do not "precompile" them
End of explanation
test_input = """..#
#..
...""".splitlines()
import numpy as np
from collections import defaultdict
def parse_grid(f_input):
grid = defaultdict(lambda: '.')
size = 0
for l in f_input:
hash_row = {hash(np.array([size, i], dtype=np.int16).tostring()): v for i, v in enumerate(list(l.rstrip()))}
grid.update(hash_row)
size += 1
return grid, size
class Virus(object):
def __init__(self, grid, size):
self.grid = grid # enclosing the hashes and states of infected positions
self.pos = np.array([(size - 1) // 2, (size - 1) // 2], dtype=np.int16) # initially in the center of a positive grid
self.facing = np.array([-1, 0], dtype=np.int16) # initially facing up in our coords
self.count_infect = 0
def burst(self):
hash_pos = hash(self.pos.tostring())
rotation = np.array([[0, -1], [1, 0]], dtype=np.int16)
self.facing = np.dot(rotation, self.facing)
if self.grid[hash_pos] == '#':
self.grid[hash_pos] = '.'
self.facing *= -1
else:
self.grid[hash_pos] = '#'
self.count_infect += 1
self.pos += self.facing
def count_infect(grid, size, n):
test_virus = Virus(grid, size)
for _ in range(n):
test_virus.burst()
return test_virus.count_infect
grid, size = parse_grid(test_input)
%%timeit
count_infect(grid, size, 10000)
directions = 'nesw'
directions2move = {'n': (0, 1), 's': (0, -1), 'e': (1, 0), 'w': (-1, 0)}
def parse(lines):
nodes = []
size = len(lines[0].strip())
v = size // 2
for i, line in enumerate(lines):
for j, c in enumerate(line.strip()):
if c == '#':
nodes.append((j-v, (i-v)*(-1)))
return set(nodes)
def burst(infected_nodes, pos, direction):
x, y = pos
# next direction
if pos in infected_nodes:
i = (directions.index(direction) + 1) % 4
infected_nodes.remove(pos)
else:
i = (directions.index(direction) - 1) % 4
infected_nodes.add(pos)
next_direction = directions[i]
# next position
a, b = directions2move[next_direction]
next_pos = (x+a, y+b)
return infected_nodes, next_pos, next_direction
def count_infections(initial_status, iterations):
count = 0
status = initial_status
pos = (0,0)
direction = 'n'
for _ in range(iterations):
prev_size = len(status)
status, pos, direction = burst(status, pos, direction)
count += 1 if len(status) > prev_size else 0 # should be 0 or 1
return count
nodes = parse(test_input)
%%timeit
count_infections(nodes, 10**4)
Explanation: Day 22
Useless day just takes memory and time
End of explanation
def is_prime(x):
if x >= 2:
for y in range(2,x):
if not ( x % y ):
return False
else:
return False
return True
def run_coprocessor(alpha):
loop = False
a, b = alpha, 79
c = b
d, e, f, g, h = 0, 0, 0, 0, 0
if a != 0:
b *= 100
b += 100000
c = b
c += 17000
while (g != 0) or not loop:
loop = True
f = 1
d = 2
e = 2
if not is_prime(b):
f = 0
e = b
d = b
if f == 0:
h += 1
g = b
g = g - c
if g == 0:
return a, b, c, d, e, f, g, h
else:
b += 17
run_coprocessor(1)
def isprime(value):
for i in range(2, value):
if (value % i) == 0:
return False
return True
def count_primes(init, end, step):
count = 0
for i in range(init, end+1, step):
if isprime(i):
count += 1
return count
def part2():
b = 106500
c = 123500
h = (c-b)/17 # each loop b increases 17 until it matches c
h += 1 # there is an extra loop when b == c ??
h -= count_primes(b, c, 17) # on primes, f is set to 0 and h not increased
return int(h)
part2()
Explanation: Day 23
Just different approaches
End of explanation |
11,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic kaggle competition with SVM - Advanced
Step1: Let's load the processed data and feature scale Age and Fare
Step2: Select the features from data, and convert to numpy arrays
Step3: We want to perform algorithm tuning with CV now, to avoid information leak, let's create a hold out set
Step4: Below is a simple example of algorithm tuning with the rbf kernel of SVM.
Step5: It seems that gamma is good in a broad range, Let's just take the middle of the flat part.
Step6: Of course, in real life you should perform parameter grid search in both C and gamma. Let's try out our new GridSearchCV tools learned in the morning.
Step7: Do we necessarily perform better than the simpler model?
The real test is to submit the file to kaggle and let their hold out set decide.
I did improve my result by ~0.03 with the newly added in name features. | Python Code:
#import all the needed package
import numpy as np
import scipy as sp
import re
import pandas as pd
import sklearn
from sklearn.cross_validation import train_test_split,cross_val_score
from sklearn import metrics
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.svm import SVC
Explanation: Titanic kaggle competition with SVM - Advanced
End of explanation
data = pd.read_csv('data/train_processed.csv', index_col=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data[['Age', 'Fare']])
data[['Age', 'Fare']] = scaler.transform(data[['Age', 'Fare']])
data["PSA"] = data["Pclass"]*data["Female"]*data["Age"]
data["SP"] = data["SibSp"]+data["Parch"]
data.head()
Explanation: Let's load the processed data and feature scale Age and Fare
End of explanation
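As a reminder of what the scaler computes, each column is simply centred and divided by its standard deviation — a tiny illustration on a made-up handful of ages (the numbers are arbitrary):
x = np.array([22., 38., 26., 35.])
z = (x - x.mean()) / x.std()        # what StandardScaler applies per column
print(z.mean(), z.std())            # ~0 and ~1 after scaling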
feature_cols=['PSA','SP','Pclass','Age','SibSp','Parch','Fare','Female','Title_Dr','Title_Lady','Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev','Title_Sir']
X=data[feature_cols].values
y=data['Survived'].values
Explanation: Select the features from data, and convert to numpy arrays:
End of explanation
#create a holdout set
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=42)
Explanation: We want to perform algorithm tuning with CV now, to avoid information leak, let's create a hold out set
End of explanation
#tune the gamma parameter with our training set
scores_mean=[]
scores_std=[]
model=SVC()
model.C=1
gammas=np.logspace(-3,1,50)
for gamma in gammas:
model.gamma=gamma
scores=cross_val_score(model,X_train,y_train,cv=10,scoring='accuracy')
scores_mean.append(np.mean(scores))
scores_std.append(np.std(scores))
plt.semilogx(gammas,scores_mean,'.')
plt.show()
Explanation: Below is a simple example of algorithm tuning with the rbf kernel of SVM.
End of explanation
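For context, gamma is the width parameter of the RBF kernel used above,
$$K(\mathbf{x}, \mathbf{x}') = \exp\left(-\gamma \lVert \mathbf{x} - \mathbf{x}' \rVert^2\right),$$
so large values give a narrow, very flexible kernel that can overfit, while small values push the decision surface towards a nearly linear one; C independently controls the margin penalty.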
model.gamma=0.1
model.fit(X_train,y_train)
y_predta=model.predict(X_train)
y_pred=model.predict(X_test)
train_score=metrics.accuracy_score(y_train,y_predta)
test_score=metrics.accuracy_score(y_test,y_pred)
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
cvscore=np.mean(scores)
cvscore_std=np.std(scores)
print(train_score,test_score,cvscore,cvscore_std)
Explanation: It seems that gamma is good in a broad range, Let's just take the middle of the flat part.
End of explanation
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.grid_search import GridSearchCV
C_range = np.logspace(-3, 3, 10)
gamma_range = np.logspace(-3, 3, 10)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(y_train, n_iter=5, test_size=0.3, random_state=42)
grid = GridSearchCV(SVC(kernel='rbf'), scoring="accuracy", param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
print("The best parameters are %s with a score of %0.4f" % (grid.best_params_, grid.best_score_))
model.gamma=grid.best_params_['gamma']
model.C=grid.best_params_['C']
model.fit(X_train,y_train)
y_predta=model.predict(X_train)
y_pred=model.predict(X_test)
train_score=metrics.accuracy_score(y_train,y_predta)
test_score=metrics.accuracy_score(y_test,y_pred)
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
cvscore=np.mean(scores)
cvscore_std=np.std(scores)
print(train_score,test_score,cvscore,cvscore_std)
Explanation: Of course, in real life you should perform parameter grid search in both C and gamma. Let's try out our new GridSearchCV tools learned in the morning.
End of explanation
model.fit(X,y)
holdout_data = pd.read_csv('data/test_processed.csv')
# rescale age and fare as we did for training data. This is critical
# Note that we can (and should) use the same scaler object we fit above to the training data
holdout_data[['Age', 'Fare']] = scaler.transform(holdout_data[['Age', 'Fare']])
holdout_data["PSA"] = holdout_data["Pclass"]*holdout_data["Female"]*holdout_data["Age"]
holdout_data["SP"] = holdout_data["SibSp"]+holdout_data["Parch"]
#use our new features.
#feature_cols=['Pclass','Age','SibSp','Parch','Fare','Female','Title_Dr','Title_Lady','Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev','Title_Sir']
X_holdout=holdout_data[feature_cols]
X_holdout.head()
y_holdout=model.predict(X_holdout)
samplesubmit = pd.read_csv("data/titanic_submit_example.csv")
samplesubmit["Survived"]=np.int32(y_holdout)
samplesubmit.to_csv("data/titanic_submit_fancytitle.csv",index=False)
Explanation: Do we necessarily perform better than the simpler model?
The real test is to submit the file to Kaggle and let their hold-out set decide.
I did improve my result by ~0.03 with the newly added name features.
End of explanation |
11,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
Step5: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Warning
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step8: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
Step14: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question
Step15: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using
Step16: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows)
Step17: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
Step18: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step19: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
Step20: Checkpoint
Step21: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step22: Quiz Question
Step23: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size
Step24: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves. | Python Code:
from __future__ import division
import graphlab
Explanation: Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with respect to a single coefficient.
Implement stochastic gradient ascent.
Compare convergence of stochastic gradient ascent with that of batch gradient ascent.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string manipulation functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: The SFrame products now contains one column for each of the 193 important_words.
End of explanation
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
Explanation: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does the changing the solver to stochastic gradient ascent affect the number of features?
Building on logistic regression
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.
End of explanation
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
Explanation: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
Complete the following code block:
End of explanation
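A quick sanity check of the dot-product form on toy vectors (values chosen by hand):
errs = np.array([0.5, -0.25, 0.1])
feat = np.array([1., 0., 2.])
print(feature_derivative(errs, feat))   # 0.5*1 + (-0.25)*0 + 0.1*2 = 0.7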
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
Explanation: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
End of explanation
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
Explanation: Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Modifying the derivative for stochastic gradient ascent
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$
\frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}i})\left(\mathbf{1}[y\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right)
$$
Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully No!. Using NumPy, we access $\mathbf{x}i$ in the training data using feature_matrix_train[i:i+1,:]
and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.
We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
* First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
* Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.
Let us follow these steps for j = 1 and i = 10:
End of explanation
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
Explanation: Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
Modifying the derivative for using a batch of data points
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encorage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}{i}, \mathbf{x}{i+1} \ldots \mathbf{x}{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\color{red}{\sum{s = i}^{i+B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}i, \mathbf{x}{i+1} \ldots \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]
and $y_i$ in the training data using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows:
End of explanation
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size] == +1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += step_size * derivative / batch_size
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
Explanation: Quiz Question: The code block above computed
$\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$
for j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?
Quiz Question: For what value of B is the term
$\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$
the same as the full gradient
$\partial\ell(\mathbf{w})/{\partial w_j}$?
Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B:
$$
\frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
Implementing stochastic gradient ascent
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:
End of explanation
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
Explanation: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
End of explanation
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
Explanation: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above does the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?
Running gradient ascent using the stochastic gradient ascent implementation
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = 1
* max_iter = 10
End of explanation
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
Explanation: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = len(feature_matrix_train)
* max_iter = 200
End of explanation
print 50000 / 100 * 2
Explanation: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):
$$
[\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]}
$$
Quiz Question Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
Explanation: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
Explanation: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
Explanation: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
Explanation: Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
Stochastic gradient ascent vs batch gradient ascent
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.
We are comparing:
* stochastic gradient ascent: step_size = 0.1, batch_size=100
* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
Explanation: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
End of explanation
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
Explanation: Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
It's always better
10 passes
20 passes
150 passes or more
Explore the effects of step sizes on stochastic gradient ascent
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
* initial_coefficients=np.zeros(194)
* batch_size=100
* max_iter initialized so as to run 10 passes over the data.
End of explanation
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:
step_size = 1e-4
step_size = 1e-3
step_size = 1e-2
step_size = 1e-1
step_size = 1e0
step_size = 1e1
step_size = 1e2
For consistency, we again apply smoothing_window=30.
End of explanation
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
End of explanation |
11,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: b. Graphing
Using the techniques laid out in lecture, plot a histogram of the returns
Step3: c. Cumulative distribution
Plot the cumulative distribution histogram for your returns
Step4: Exercise 2
Step5: b. Plotting
Graph a scatter plot of SPY and Starbucks.
Step6: c. Plotting Returns
Graph a scatter plot of the returns of SPY and Starbucks.
Step7: Remember a scatter plot must have the same number of values for each parameter. If SPY and SBUX did not have the same number of data points, your graph will return an error
Exercise 3
Step8: b. Data Structure
The data is returned to us as a pandas dataframe object. Index your data to convert them into simple strings.
Step9: c. Plotting
Plot the data for SBUX stock price as a function of time. Remember to label your axis and title the graph.
Step10: Exercise 4 | Python Code:
# Useful Functions
import numpy as np
import matplotlib.pyplot as plt
Explanation: Exercises: Plotting
By Christopher van Hoecke, Max Margenot, and Delaney Mackenzie
Lecture Link:
https://www.quantopian.com/lectures/plotting-data
IMPORTANT NOTE:
This lecture corresponds to the Plotting Data lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
## Your code goes here
Explanation: Exercise 1: Histograms
a. Returns
Find the daily returns for SPY over a 7-year window.
End of explanation
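# One possible (hedged) sketch for part (a), mirroring the get_pricing pattern
# used in Exercise 4 at the end of this notebook; the dates below are
# illustrative placeholders for a 7-year window.
# spy_prices = get_pricing('SPY', fields='open_price', start_date='2010-01-01', end_date='2017-01-01')
# spy_returns = spy_prices.pct_change()[1:]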
## Your code goes here
Explanation: b. Graphing
Using the techniques laid out in lecture, plot a histogram of the returns
End of explanation
## Your code goes here
Explanation: c. Cumulative distribution
Plot the cumulative distribution histogram for your returns
End of explanation
## Your code goes here
Explanation: Exercise 2 : Scatter Plots
a. Data
Start by collecting the close prices of McDonalds Corp. (MCD) and Starbucks (SBUX) for the last 5 years with daily frequency.
End of explanation
## Your code goes here
Explanation: b. Plotting
Graph a scatter plot of SPY and Starbucks.
End of explanation
## Your code goes here
Explanation: c. Plotting Returns
Graph a scatter plot of the returns of SPY and Starbucks.
End of explanation
## Your code goes here
Explanation: Remember a scatter plot must have the same number of values for each parameter. If SPY and SBUX did not have the same number of data points, your graph will return an error
Exercise 3 : Linear Plots
a. Getting Data
Use the techniques laid out in lecture to find the open price over a 2-year period for Starbucks (SBUX) and Dunkin Brands Group (DNKN). Print them out in a table.
End of explanation
data.columns = ## Your code goes here
## Your code goes here
Explanation: b. Data Structure
The data is returned to us as a pandas dataframe object. Index your data to convert them into simple strings.
End of explanation
## Your code goes here
plt.xlabel();## Your code goes here
plt.ylabel();## Your code goes here
plt.title(); ## Your code goes here
Explanation: c. Plotting
Plot the data for SBUX stock price as a function of time. Remember to label your axis and title the graph.
End of explanation
data1 = get_pricing('SBUX', fields='open_price', start_date='2013-01-01', end_date='2014-01-01')
data2 = get_pricing('SPY', fields='open_price', start_date = '2013-01-01', end_date='2014-01-01')
rdata1= data1.pct_change()[1:]
rdata2= data2.pct_change()[1:]
plt.scatter(rdata2, rdata1);
plt.scatter(rdata2, rdata1)
a = ## Your code goes here
b = ## Your code goes here
x = np.arange(-0.02, 0.03, 0.01)
y = a + (b*x)
plt.plot(x,y, color='r');
Explanation: Exercise 4 : Best fits plots
Here we have a scatter plot of two data sets. Vary the a and b parameter in the code to try to draw a line that 'fits' our data nicely. The line should seem as if it is describing a pattern in the data. While quantitative methods exist to do this automatically, we would like you to try to get an intuition for what this feels like.
End of explanation |
11,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
We have seen how you can evaluate a supervised learner with a loss function. Classification is the learning task where one tried to predict a binary response variable, this can be thought of as the answer to a yes/no question, as in "Will this stock value go up today?". This is in contrast to regression which predicts a continuous response variable, answering a question such as "What will be the increase in stock value today?". Classification has some interesting lessons for other machine learning tasks, and in this chapter, we will try to introduce many of the main concepts in classification.
Recall that the supervised learning setting has that the data consists of response variables, $y_i$ and predictor variables $x_i$ for $i=1,\ldots,n$. In this chapter we will focus on binary classification which we will encode as the binary variable
Step1: Evaluating classifiers and heuristics
In this section, we will introduce two heuristics
Step2: Let's read in a dataset from the UCI machine learning repository (CITE),
Step3: The bank dataset contains the outcome of loans in the 'y' variable and many predictors of mixed type. There is also some missingness. Before we can go any further exploring the data, it is best to make the train-test split.
Step4: Above we specified that roughly 1/3 of the data be reserved for the test set, and the rest for the training set. First notice that the response variable is binary, and in the training set 4000 of the 4521 records are 'no'. We call this situation class imbalance, in that the percentage of samples with each value of y is not balanced.
Step5: Consider the following density plot, which shows the density of age conditional on the response variable y. If we cared equally about predicting 'yes' and 'no' then we may predict that a 'yes' if age exceeds some value like 60 or is below some value such as 28. We say that "predict yes if age > 60" is a decision rule.
Step7: Some explanation is warranted. We can state more precisely what we mean by "caring equally about predicting 'yes' and 'no'" as conditional on these values let's treat the probability of error equally. Take the decision rule above, then these error probabilities are
$$
\mathbb P{ age > 60 | y = no}, \quad \mathbb P{ age \le 60 | y = yes},
$$
which are often called the type 1 and type 2 errors respectively. If we cared equally about these probabilities then it is reasonable to set the age threshold at 60 since after this threshold it looks like the conditional density of age is greater for $y = yes$. We can do something similar for a more complicated decision rule such as "predict y=yes if age > 60 or age < 28" which may make even more sense.
Step10: We can make similar considerations for the variables duration and balance below. It seems that there is some descriminative power for balance, and more significant amount of descriminative power for duration.
Step11: Confusion matrix and classification metrics
<table style='font-family
Step12: Searching
(pseudo-)Metrics
$d(x_i,x_j) = 0$
Step13: Confusion matrix and metrics
<table style='font-family
Step15: ROC should be in top left
PR should be large for all recall values
Linear classifiers
In the above analysis we chose a variable (duration) and used that as the score. Let's consider another score which is a linear combination such as $0.2 (age) + 0.8 (duration)$ and compare the training errors of these. If we took this to its logical conclusion, we would search in an automated way for the exact coefficients. This is precisely what linear classifiers, such as logistic regression, support vector machines, and the perceptron do. This would be based on the training dataset only and hence, the test error is an unbiased estimate of the true risk (the expected loss). We have already seen the form for a linear classifier, where the score function is $\hat s(x) = \hat \beta_0 + \hat \beta^\top x$ where $\beta$ is learned. With this score function in hand we can write the predict function as $\hat y(x) = \textrm{sign} (\hat s(x))$.
Optimization and training loss
Why the different classification methods if they are all linear classifiers? Throughout we will make a distinction between the training loss, which is the loss used for training, and the test loss, the loss used for testing. We have already seen that they can be different when we trained a regressor with square error loss, but tested with the 0-1 loss in Statistical Learning Machines. In regression, we know that ordinary least squares minimizes the training square error, and these classifiers work in a similar way, except each uses a different training loss. We have already seen the 0-1 loss, recall that it takes the form,
$$
\ell_{0/1} (\hat y_{i}, y_i) = \left{ \begin{array}{ll}
1,& \textrm{ if } \hat y_i \ne y_i\
0,& \textrm{ if } \hat y_i = y_i
\end{array} \right.
$$
Why not use this loss function for training? To answer this we need to take a quick detour to describe optimization, also known as mathematical programming.
Suppose that we have some parameter $\beta$ and a function of that parameter $F(beta)$ that takes real values (we will allow possibly infinite values as well). Furthermore, suppose that $\beta$ is constrained to be within the set $\mathcal B$. Then an optimization (minimization) program takes the form
$$
\textrm{Find } \hat \beta \textrm{ such that } \hat \beta \in \mathcal B \textrm{ and } F(\hat \beta) \le F(\beta) \textrm{ for all } \beta \in \mathcal B.
$$
We can rewrite this problem more succinctly as
$$
\min_{\beta \in \mathcal B}. F(\beta),
$$
where $\min.$ stands for minimize. An optimization program is an idealized program, and is not itself an algorithm. It says nothing about how to find it, and there are many methods for finding the minimum based on details about the function $F$ and the constraint set $\mathcal B$.
Some functions are hard to optimize, especially discontinuous functions. Most optimization algorithms work by starting with an initial value of $\beta$ and iteratively moving it in a way that will tend to decrease $F$. When a function is discontinuous then these moves can have unpredictable consequences. Returning to our question, suppose that we wanted to minimize training error with a 0-1 training loss. Then this could be written as the optimization program,
$$
\min_{\beta \in \mathbb R^{p+1}}.\frac{1}{n_0} \sum_{i=1}^{n_0} 1 { \textrm{sign}(\beta^\top x_i) \ne y_i }.
\tag{0-1 min}
$$
This optimization is discontinuous because it is the sum of discontinuous functions---the indicator can suddenly jump from 0 to 1 with an arbitrarily small movement of $\beta$.
Note
Step16: The red dots correspond to Y = +1 and blue is Y = -1. We can see that a classifier that classifies as +1 when the point is in the upper right of the coordinate system should do pretty well. We could propose several $\beta$ vectors to form linear classifiers and observe their training errors, finally selecting the one that minimized the training error. We have seen that a linear classifier has a separator hyperplane (a line in 2 dimensions). To find out what the prediction the classifier makes for a point one just needs to look at which side of the hyperplane it falls on. Consider a few such lines.
Step17: One of these will probably do a better job at separating the training data than the others, but if we wanted to do this over all possible $\beta \in \mathbb R^{p+1}$ then we need to solve the program (0-1 min) above, which we have already said is a hard objective to optimize. Logistic regression regression uses a different loss function that can be optimized in several ways, and we can see the resulting separator line below. (It is outside the scope of this chapter to introduce specific optimization methods.)
Step18: The points above this line are predicted as a +1, and so we can also isolate those points that we classified incorrectly. The 0-1 loss counts each of these points as a loss of 1.
Step19: Logistic regression uses a loss function that mimics some of the behavior of the 0-1 loss, but is not discontinuous. In this way, it is a surrogate loss, that acts as a surrogate for the 0-1 loss. It turns out that it is one of a few nice options for surrogate losses. Notice that we can rewrite the 0-1 loss for a linear classifier as
$$
\ell_{0/1}(\beta,x_i,y_i) = 1 { y_i \beta^\top x_i < 0 }.
$$
Throughout we will denote our losses as functions of $\beta$ to reflect the fact that we are only considering linear classifiers.
The logistic loss and the hinge loss are also functions of $y_i \beta^\top x_i$, they are
$$
\ell_{L} (\beta, x_i, y_i) = \log(1 + \exp(-y_i \beta^\top x_i))
\tag{logistic}
$$
and
$$
\ell_{H} (\beta, x_i, y_i) = (1 - y_i \beta^\top x_i))+
\tag{hinge}
$$
where $a+ = a 1{ a > 0}$ is the positive part of the real number $a$.
If we are free to select training loss functions, then why not square error loss? For example, we could choose
$$
\ell_{S} (\beta, x_i, y_i) = (y_i - \beta^\top x_i))^2 = (1 - y_i \beta^\top x_i))^2.
\tag{square error}
$$
In order to motivate the use of these, let's plot the losses as a function of $y_i \beta^\top x_i$.
Step20: Comparing these we see that the logistic loss is smooth---it has continuous first and second derivatives---and it is decreasing as $y_i \beta^\top x_i$ is increasing. The hinge loss is interesting, it is continuous, but it has a discontinuous first derivative. This changes the nature of optimization algorithms that we will tend to use. On the other hand the hinge loss is zero for large enough $y_i \beta^\top x_i$, as opposed to the logistic loss which is always non-zero. Below we depict these two losses by weighting each point by the loss for the fitted classifier.
Step21: We see that for the logistic loss the size is vanishing when the points are on the wrong side of the separator hyperplane. The hinge loss is zero if we are sufficiently far from the hyperplane. The square error loss has increased weight for those far from the hyperplane, even if they are correctly classified. Hence, square error loss is not a good surrogate for 0-1 loss.
Support vector machines are algorithmically instable if we only try to minimize the training error with the hinge loss. So, we add an additional term to make the optimization somewhat easier, which we call a ridge regularization. Regularization is the process of adding a term to the objective that biases the results in a certain way. Ridge regularization means that we minimize the following objective for our loss,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac 1n \sum_{i=1}^n \ell(\beta,x_i,y_i) + \lambda \sum_{j=1}^p \beta_j^2.
$$
This has the effect of biasing the $\hat \beta_j$ towards 0 for $j=1,\ldots,p$. Typically, this has little effect on the test error, except for when $\lambda$ is too large, but it does often make the result more computationally stable.
In Scikit-learn we can initialize a logistic regression instance with linear_model.LogisticRegression. For historical reasons they parametrize the ridge regularization with the reciprocal of the $\lambda$ parameter that they call C. All supervised learners in Scikit-learn have a fit and predict method. Let's apply this to the bank data using the predictors age, balance, and duration.
Step22: We can then predict on the test set and see what the resulting 0-1 test error is.
Step23: On the other hand we can extract a score by using the coefficients themselves. We can also extract the ROC and PR curves for this score.
Step24: We can also do this same procedure for SVMs. Notice that we set the kernel='linear' argument in SVC. SVMs in general can use a kernel to make them non-linear classifiers, but we want to only consider linear classifiers here.
Step25: We see that SVMs achieves a slightly lower 0-1 test error. We can also compare these two methods based on the ROC and PR curves.
Step27: Despite having a lower misclassification rate (0-1 test error), the ROC for logistic regression is uniformly better than that for SVMs, and for all but the lowest recall regions, the precision is better for logistic regression. We might conclude that despite the misclassification rate, logistic regression is the better classifier in this case.
Tuning the ridge penalty $\lambda$
Let's consider how regularized logistic regression performs with different values of $\lambda$. It is quite common for selection of $\lambda$ to not improve the test error in any significant way for ridge regularization. We mainly use it for computational reasons.
Step30: In the above plot, we see that there is not a significant change in error as we increase $\lambda$, which could be for a few reasons. The most likely explanation is that if we look only at misclassification rate, it is hard to beat predicting every observation as a -1. Due to the class imbalance, the proportion of 1's is 0.106 which is quite close to the test error in this case, so the classifier only needs to do better than random on a very small proportion of points to beat this baseline rate.
We can also do the same thing for Precision, and obtain a similar result.
Step31: Class Weighting
Recall that the response variable in the bank data had significant class imbalance, with a prevelence of "no" responses (encoded as -1). One way that we can deal with class imbalance is by weighting differently the losses for the different classes. Specifically, specify a class weighting function $\pi(y)$ such that $\pi(1)$ is the weight applied to the positive class and $\pi(-1) = 1-\pi(1)$ is the weight on the negative classes. Then the objective for our classifier becomes
$$
\frac 1n \sum_{i=1}^n \pi(y_i) \ell(\beta,x_i,y_i).
$$
In the event that $\pi(1) > \pi(-1)$ then we will put more emphasis on classifying the 1's correctly. By default, $\pi(1) = 0.5$ which is equivalent to no weighting. This may be appropriate if we have class imbalance with fewer 1's. Let's consider what these weighted losses now look like as a function of $\beta^\top x_i$ for logistic and 0-1 loss (below $\pi(1) = .8$).
Step32: Hence, the loss is higher if we misclassify the 1's versus the -1's. We can also visualize what this would do for the points in our two dimensional simulated dataset.
Step33: In the above plot we are weighting the loss for negative points 4 times as much as positive points. How do we choose the amount that we weight by? Typically, this is chosen to be inversely proportional to the proportion of points for that class. In Scikit-learn this is achieved using class_weight='balanced'. Below we fit the balanced logistic regression and SVM and compare to the old version in PR curve.
Step34: It seems that in this case class balancing is not essential, and does not substantially change the results.
Aside
Step35: Feature engineering and overfitting
Feature engineering is the process of constructing new variables from other variables or unstructured data. There are a few reasons why you might want to do this
Step36: We initialize the encoder which when initialized does nothing to the data. We allow it to ignore NAs and output a dense matrix. When we fit on the data it determines the unique values and assigns dummy variables for the values.
Step37: Let's look at the output of the transformation.
Step38: Then look at the actual variable.
Step39: It seems that the first dummy is assigned to 'divorced', the second to 'married', and the third to 'single'. This method cannot handle missing data, so a tool such as impute.SimpleImputer should be used first. All of these methods are considered transformations, and they have fit methods. The reason for this interface design is to make it so that we can fit on the training set and then transform can be applied to the test data.
In the following, we will create an imputer and encoder for each continuous variable, then transform the training data.
Step41: We can also apply a fixed transformation the variables. While this could include multiple variables and encode interactions, we will only use univariate transformations. The following method applies the transformations,
$x \to x^2$ and $x \to \log(1 + x)$ and combines these with the original numerical dataset.
Step42: We can now apply imputation and this fixed transformation to the numerical variables.
Step43: I noticed that for various reasons that the standard deviation for some created variables is 0 (probably when there is only one value for a binary variable). We filter these out in the following lines.
Step44: The variable names are then also filtered out, this will be used later to be able to identify the variables by their indices in the transformed variables.
Step45: These transformations can now be applied to the test data. Because they were all fit with the training data, we do not run the risk of the fitting to the testing data. It is important to maintain the organization that we fit to the training data first, then transform the test data.
Step47: Now we are ready to train our linear classifier, and we will use logistic regression.
Step48: We output the training error along with the testing 0-1 error. The testing error is actually lower than the training error. Notably neither are much larger than the proportion of 1s in the dataset. This indicates to us that the 0-1 loss is not a very good measure of performance in this imbalanced dataset. Instead, let's compare the PR curve for these two.
Step50: We see that only in the low recall regions do all of these variables improve our test precision. So did we do all of that work creating new variables for nothing?
Model selection and overfitting
What we are observing above is overfitting. When you add a new predictor variable in to the regression it may help you explain the response variable, but what if it is completely independent of the response? It turns out that this new variable can actually hurt you by adding variability to your classifier. For illustration, consider the following simulation.
Step51: This dataset is generated such that only the second X variable (on the Y-axis in the plot) is influencing the probability of a point being positive or negative. There are also only 20 data points. Let's fit a logistic regression with both predictors and also only the relevant predictor, then plot the separator line.
Step54: We see that due to the randomness of the data points the classifier that uses both predictors is significantly perturbed and will result in a higher test error than the one that only uses the relevant variable. This is an example of how adding irrelevant variables can hurt your prediction because the classifier will begin to fit to the data. This problem is pronounced when you have a small amount of data or you have many predictors.
One remedy to this is to order your predictor variables by some measure that you think indicates importance. Then you can select the best K variables and look at your test error to determine K. K is called a tuning parameter, and to select it you have to compare test errors, because otherwise you will not necessarily detect overfitting. In Scikit-learn several methods are available for selecting best K predictors, and we will focus on using the method feature_selection.SelectKBest. By default this considers a single predictor variable at a time, then performs an ANOVA on that predictor with the binary response variable as the independent variable (it flips the roles of the response and predictors). Then it computes the F score which when maximized gives the best predictor variable. Choosing the top K F scores gives you the best K variables. In the following code, we use the best K predictors to transform the training and test set. Like in other transformations, it is important to fit it only on the training data.
Step55: In the above code, we return the 0-1 errors and the precision when we recommend the best 11.5% of the scores. This number is chosen because it is the proportion of positives in the training set.
Step56: We can plot the 0-1 error and see how it responds to the tuning parameter K.
Step57: Unfortunately, as before the 0-1 error is not very helpful as a metric, and does not deviate significantly from the just predicting all negatives. Looking at precision is somewhat more useful, and we can see that it increases in a noisy way, until it drops significantly as we increase K. This is consistent with an overfitting-underfitting tradeoff. At the beginning we are underfitting because we have not selected all of the important predictor variables, but then we start to overfit when K gets large.
Step58: Let's look at the variables that are selected in the best 10 and best 25.
Step59: It seems that duration is included in both but many other variables are added in addition to age and balance. We can also compare the PR curve for the test error for different models. | Python Code:
import pandas as pd
import numpy as np
import matplotlib as mpl
import plotnine as p9
import matplotlib.pyplot as plt
import itertools
import warnings
warnings.simplefilter("ignore")
from matplotlib.pyplot import rcParams
rcParams['figure.figsize'] = 6,6
Explanation: Classification
We have seen how you can evaluate a supervised learner with a loss function. Classification is the learning task where one tries to predict a binary response variable; this can be thought of as the answer to a yes/no question, as in "Will this stock value go up today?". This is in contrast to regression, which predicts a continuous response variable, answering a question such as "What will be the increase in stock value today?". Classification has some interesting lessons for other machine learning tasks, and in this chapter, we will try to introduce many of the main concepts in classification.
Recall that in the supervised learning setting the data consists of response variables $y_i$ and predictor variables $x_i$ for $i=1,\ldots,n$. In this chapter we will focus on binary classification, which we will encode as the binary variable $y_i \in \{0,1\}$. We will see that two heuristics can help us understand the basics of evaluating classifiers.
End of explanation
from sklearn import neighbors, preprocessing, impute, metrics, model_selection, linear_model, svm, feature_selection
Explanation: Evaluating classifiers and heuristics
In this section, we will introduce two heuristics: a similarity-based search (nearest neighbors) and a custom sorting criterion (an output score). Throughout this chapter we will be using Scikit-learn (sklearn for short), the main machine learning package in Python. Recall that the basic pipeline for offline supervised learning is the following,
randomly split data into training and testing set
fit on the training data with predictors and response variables
predict on the test data with predictors
observe losses from predictions and the test response variable
Sklearn provides tools for making the train-test split in the sklearn.model_selection module. We will also use several other modules, which we import below.
End of explanation
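# A hedged, generic sketch of the four-step pipeline listed above, using a toy
# dataset; the rest of the chapter carries these steps out on the bank data.
X_toy = np.random.randn(200, 3)
y_toy = 2*(X_toy[:, 0] + np.random.randn(200) > 0) - 1
X_a, X_b, y_a, y_b = model_selection.train_test_split(X_toy, y_toy, test_size=.33)  # 1. split
clf_toy = neighbors.KNeighborsClassifier(n_neighbors=5).fit(X_a, y_a)               # 2. fit
y_hat_toy = clf_toy.predict(X_b)                                                     # 3. predict
print((y_hat_toy != y_b).mean())                                                     # 4. observe 0-1 loss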
bank = pd.read_csv('../data/bank.csv',sep=';',na_values=['unknown',999,'nonexistent'])
bank.info()
Explanation: Let's read in a dataset from the UCI machine learning repository (CITE),
End of explanation
bank_tr, bank_te = model_selection.train_test_split(bank,test_size=.33)
Explanation: The bank dataset contains the outcome of loans in the 'y' variable and many predictors of mixed type. There is also some missingness. Before we can go any further exploring the data, it is best to make the train-test split.
End of explanation
bank['y'].describe()
Explanation: Above we specified that roughly 1/3 of the data be reserved for the test set, and the rest for the training set. First notice that the response variable is binary and, in the full dataset, 4000 of the 4521 records are 'no'. We call this situation class imbalance: the percentage of samples with each value of y is far from balanced.
End of explanation
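# A hedged check of the class imbalance on the training set: the proportion of
# 'yes' responses is small.
print(bank_tr['y'].value_counts(normalize=True))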
p9.ggplot(bank_tr, p9.aes(x = 'age',fill = 'y')) + p9.geom_density(alpha=.2)
Explanation: Consider the following density plot, which shows the density of age conditional on the response variable y. If we cared equally about predicting 'yes' and 'no', then we may predict 'yes' if age exceeds some value like 60 or is below some value such as 28. We say that "predict yes if age > 60" is a decision rule.
End of explanation
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
Explanation: Some explanation is warranted. We can state more precisely what we mean by "caring equally about predicting 'yes' and 'no'" as conditional on these values let's treat the probability of error equally. Take the decision rule above, then these error probabilities are
$$
\mathbb{P}\{ \textrm{age} > 60 \mid y = \textrm{no}\}, \quad \mathbb{P}\{ \textrm{age} \le 60 \mid y = \textrm{yes}\},
$$
which are often called the type 1 and type 2 errors respectively. If we cared equally about these probabilities then it is reasonable to set the age threshold at 60 since after this threshold it looks like the conditional density of age is greater for $y = yes$. We can do something similar for a more complicated decision rule such as "predict y=yes if age > 60 or age < 28" which may make even more sense.
End of explanation
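# A hedged sketch (not part of the original analysis): estimate the two error
# probabilities of the decision rule "predict yes if age > 60" from the training set.
rule_yes = bank_tr['age'] > 60
p_type1 = rule_yes[bank_tr['y'] == 'no'].mean()      # estimate of P(age > 60 | y = no)
p_type2 = (~rule_yes)[bank_tr['y'] == 'yes'].mean()  # estimate of P(age <= 60 | y = yes)
print(p_type1, p_type2)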
p9.ggplot(bank_tr[['balance','y']].dropna(axis=0)) + p9.aes(x = 'balance',fill = 'y') \
+ p9.geom_density(alpha=.5)
p9.ggplot(bank_tr[['duration','y']].dropna(axis=0)) + p9.aes(x = 'duration',fill = 'y')\
+ p9.geom_density(alpha=.5)
def train_bank_to_xy(bank):
    """standardize and impute training"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
scaler = preprocessing.StandardScaler().fit(X)
imputer = impute.SimpleImputer(fill_value=0).fit(X)
trans_prep = lambda Z: imputer.transform(scaler.transform(Z))
X = trans_prep(X)
y = 2*(y == 'yes')-1
return (X, y), trans_prep
def test_bank_to_xy(bank, trans_prep):
    """standardize and impute test"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
X = trans_prep(X)
y = 2*(y == 'yes')-1
return (X, y)
(X_tr, y_tr), trans_prep = train_bank_to_xy(bank_tr)
X_te, y_te = test_bank_to_xy(bank_te, trans_prep)
## Set the score to be standardized duration
score_dur = X_te[:,2]
print(plt.style.available)
def plot_conf_score(y_te,score,tau):
y_pred = 2*(score > tau) - 1
classes = [1,-1]
conf = metrics.confusion_matrix(y_te, y_pred,labels=classes)
plot_confusion_matrix(conf, classes)
plot_conf_score(y_te,score_dur,1.)
plot_conf_score(y_te,score_dur,2.)
Explanation: We can make similar considerations for the variables duration and balance below. It seems that there is some discriminative power for balance, and a more significant amount of discriminative power for duration.
End of explanation
plot_conf_score(y_te,score_dur,.5)
Explanation: Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>Pred 1</td><td>Pred -1</td></tr>
<tr><td>True 1</td><td>True Pos</td><td>False Neg</td></tr>
<tr><td>True -1</td><td>False Pos</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
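# A hedged sketch: compute FPR, TPR (recall), and precision by hand for the
# duration score at the threshold tau = 1 used above.
y_pred = 2*(score_dur > 1.) - 1
tp = np.sum((y_pred == 1) & (y_te == 1))
fp = np.sum((y_pred == 1) & (y_te == -1))
fn = np.sum((y_pred == -1) & (y_te == 1))
tn = np.sum((y_pred == -1) & (y_te == -1))
print("FPR:", fp / (fp + tn), " TPR/recall:", tp / (tp + fn), " precision:", tp / (tp + fp))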
## Fit and find NNs
nn = neighbors.NearestNeighbors(n_neighbors=10,metric="l2")
nn.fit(X_tr)
dists, NNs = nn.kneighbors(X_te)
NNs[1], y_tr[NNs[1]].mean(), y_te[1]
score_nn = np.array([(y_tr[knns] == 1).mean() for knns in NNs])
plot_conf_score(y_te,score_nn,.2)
nn = neighbors.KNeighborsClassifier(n_neighbors=10)
nn.fit(X_tr, y_tr)
score_nn = nn.predict_proba(X_te)[:,1]
plot_conf_score(y_te,score_nn,.2)
def print_top_k(score_dur,y_te,k_top):
ordering = np.argsort(score_dur)[::-1]
print("k: score, y")
for k, (yv,s) in enumerate(zip(y_te[ordering],score_dur[ordering])):
print("{}: {}, {}".format(k,s,yv))
if k >= k_top - 1:
break
print_top_k(score_dur,y_te,10)
Explanation: Searching
(pseudo-)Metrics
$d(x_i,x_j) = 0$: most similar
$d(x_i,x_j)$ larger: less similar
K nearest neighbors:
For a test point $x_{n+1}$
Compute distances to $x_1,\ldots,x_n$
Sort training points by distance
return K closest
End of explanation
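# A hedged sketch of the K nearest neighbor search for a single test point,
# written with plain numpy instead of sklearn's NearestNeighbors.
K = 10
x_new = X_te[1]                                        # one test point
dists_new = np.sqrt(((X_tr - x_new)**2).sum(axis=1))   # Euclidean (l2) distances to training points
knn_idx = np.argsort(dists_new)[:K]                    # indices of the K closest training points
print((y_tr[knn_idx] == 1).mean())                     # fraction of positive neighbors = the knn score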
plt.style.use('ggplot')
fpr_dur, tpr_dur, threshs = metrics.roc_curve(y_te,score_dur)
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title("ROC for 'duration'")
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur,label='duration')
plt.plot(fpr_nn,tpr_nn,label='knn')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC")
fpr_nn, tpr_nn, threshs = metrics.roc_curve(y_te,score_nn)
plot_temp()
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(rec_dur,prec_dur,label='duration')
plt.plot(rec_nn,prec_nn,label='knn')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
prec_dur, rec_dur, threshs = metrics.precision_recall_curve(y_te,score_dur)
prec_nn, rec_nn, threshs = metrics.precision_recall_curve(y_te,score_nn)
plot_temp()
Explanation: Confusion matrix and metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>Pred 1</td><td>Pred -1</td></tr>
<tr><td>True 1</td><td>True Pos</td><td>False Neg</td></tr>
<tr><td>True -1</td><td>False Pos</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
def lm_sim(N = 100):
    """simulate a binary response and two predictors"""
X1 = (np.random.randn(N*2)).reshape((N,2)) + np.array([2,3])
X0 = (np.random.randn(N*2)).reshape((N,2)) + np.array([.5,1.5])
y = - np.ones(N*2)
y[:N]=1
X = np.vstack((X1,X0))
return X, y, X0, X1
X_sim,y_sim,X0,X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Two dimensional classification simulation")
_ = plt.legend(loc=2)
Explanation: A good ROC curve should hug the top left corner
A good PR curve should keep precision high for all recall values
Linear classifiers
In the above analysis we chose a variable (duration) and used that as the score. Let's consider another score which is a linear combination such as $0.2 (age) + 0.8 (duration)$ and compare the training errors of these. If we took this to its logical conclusion, we would search in an automated way for the exact coefficients. This is precisely what linear classifiers, such as logistic regression, support vector machines, and the perceptron do. This would be based on the training dataset only and hence, the test error is an unbiased estimate of the true risk (the expected loss). We have already seen the form for a linear classifier, where the score function is $\hat s(x) = \hat \beta_0 + \hat \beta^\top x$ where $\beta$ is learned. With this score function in hand we can write the predict function as $\hat y(x) = \textrm{sign} (\hat s(x))$.
Optimization and training loss
Why the different classification methods if they are all linear classifiers? Throughout we will make a distinction between the training loss, which is the loss used for training, and the test loss, the loss used for testing. We have already seen that they can be different when we trained a regressor with square error loss, but tested with the 0-1 loss in Statistical Learning Machines. In regression, we know that ordinary least squares minimizes the training square error, and these classifiers work in a similar way, except each uses a different training loss. We have already seen the 0-1 loss, recall that it takes the form,
$$
\ell_{0/1} (\hat y_{i}, y_i) = \left\{ \begin{array}{ll}
1, & \textrm{ if } \hat y_i \ne y_i \\
0, & \textrm{ if } \hat y_i = y_i
\end{array} \right.
$$
Why not use this loss function for training? To answer this we need to take a quick detour to describe optimization, also known as mathematical programming.
Suppose that we have some parameter $\beta$ and a function of that parameter $F(\beta)$ that takes real values (we will allow possibly infinite values as well). Furthermore, suppose that $\beta$ is constrained to be within the set $\mathcal B$. Then an optimization (minimization) program takes the form
$$
\textrm{Find } \hat \beta \textrm{ such that } \hat \beta \in \mathcal B \textrm{ and } F(\hat \beta) \le F(\beta) \textrm{ for all } \beta \in \mathcal B.
$$
We can rewrite this problem more succinctly as
$$
\min_{\beta \in \mathcal B}. F(\beta),
$$
where $\min.$ stands for minimize. An optimization program is an idealized program, and is not itself an algorithm. It says nothing about how to find it, and there are many methods for finding the minimum based on details about the function $F$ and the constraint set $\mathcal B$.
Some functions are hard to optimize, especially discontinuous functions. Most optimization algorithms work by starting with an initial value of $\beta$ and iteratively moving it in a way that will tend to decrease $F$. When a function is discontinuous then these moves can have unpredictable consequences. Returning to our question, suppose that we wanted to minimize training error with a 0-1 training loss. Then this could be written as the optimization program,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac{1}{n_0} \sum_{i=1}^{n_0} 1\{ \textrm{sign}(\beta^\top x_i) \ne y_i \}.
\tag{0-1 min}
$$
This optimization is discontinuous because it is the sum of discontinuous functions---the indicator can suddenly jump from 0 to 1 with an arbitrarily small movement of $\beta$.
Note: Let us introduce the indicator notation,
$$
1\{ {\rm condition} \} = \left\{ \begin{array}{ll}
1, & \textrm{ if condition is true} \\
0, & \textrm{ if condition is not true}
\end{array} \right.
$$
We will suppress the intercept term by assuming that the first variable is an intercept variable. Specifically, if $x_1,\ldots,x_p$ were our original predictor variables, then we can include $x_0 = 1$; if $\tilde x = (x_0,x_1,\ldots,x_p)$ and $\tilde \beta = (\beta_0, \beta_1, \ldots, \beta_p)$, we have that
$$
\tilde \beta^\top \tilde x = \beta_0 + \beta^\top x.
$$
Hence, without loss of generality, we can think of $\beta$ and $x$ as being $(p+1)$-dimensional vectors, and no longer treat the intercept as a special parameter.
Let's simulate some data for which a linear classifier will do well. We will use this to introduce the two main methods: logistic regression and support vector machines.
End of explanation
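# A hedged sketch: evaluate the 0-1 training error of a hand-picked (hypothetical)
# linear classifier on the simulated data, after absorbing the intercept as x_0 = 1.
X_aug = np.hstack([np.ones((X_sim.shape[0], 1)), X_sim])  # prepend the intercept variable
beta_try = np.array([-3.5, 1., 1.])                       # hypothetical coefficients, chosen by eye
print(np.mean(np.sign(X_aug @ beta_try) != y_sim))        # training 0-1 error of this beta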
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
beta1 = lr_sim.coef_[0,0]
beta2 = lr_sim.coef_[0,1]
beta0 = lr_sim.intercept_
mults=0.8
T = np.linspace(-1,4,100)
x2hat = -(beta0 + beta1*T) / beta2
line1 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line2 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line3 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,line3,c='k')
plt.plot(T,line1,c='k')
plt.plot(T,line2,c='k')
plt.ylim([-1,7])
plt.title("Three possible separator lines")
_ = plt.legend(loc=2)
Explanation: The red dots correspond to Y = +1 and blue is Y = -1. We can see that a classifier that classifies as +1 when the point is in the upper right of the coordinate system should do pretty well. We could propose several $\beta$ vectors to form linear classifiers and observe their training errors, finally selecting the one that minimized the training error. We have seen that a linear classifier has a separator hyperplane (a line in 2 dimensions). To find out what the prediction the classifier makes for a point one just needs to look at which side of the hyperplane it falls on. Consider a few such lines.
End of explanation
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
Explanation: One of these will probably do a better job at separating the training data than the others, but if we wanted to do this over all possible $\beta \in \mathbb R^{p+1}$ then we would need to solve the program (0-1 min) above, which we have already said is a hard objective to optimize. Logistic regression uses a different loss function that can be optimized in several ways, and we can see the resulting separator line below. (It is outside the scope of this chapter to introduce specific optimization methods.)
End of explanation
N = 100
y_hat = lr_sim.predict(X_sim)
plt.scatter(X0[y_hat[N:] == 1,0],X0[y_hat[N:] == 1,1],c='b',label='neg')
plt.scatter(X1[y_hat[:N] == -1,0],X1[y_hat[:N] == -1,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Points classified incorrectly")
_ = plt.legend(loc=2)
Explanation: The points above this line are predicted as a +1, and so we can also isolate those points that we classified incorrectly. The 0-1 loss counts each of these points as a loss of 1.
End of explanation
z_range = np.linspace(-5,5,200)
zoloss = z_range < 0
l2loss = (1-z_range)**2.
hingeloss = (1 - z_range) * (z_range < 1)
logisticloss = np.log(1 + np.exp(-z_range))
plt.plot(z_range, logisticloss + 1 - np.log(2.),label='logistic')
plt.plot(z_range, zoloss,label='0-1')
plt.plot(z_range, hingeloss,label='hinge')
plt.plot(z_range, l2loss,label='sq error')
plt.ylim([-.2,5])
plt.xlabel(r'$y_i \beta^\top x_i$')
plt.ylabel('loss')
plt.title('A comparison of classification loss functions')
_ = plt.legend()
Explanation: Logistic regression uses a loss function that mimics some of the behavior of the 0-1 loss, but is not discontinuous. In this way, it is a surrogate loss, that acts as a surrogate for the 0-1 loss. It turns out that it is one of a few nice options for surrogate losses. Notice that we can rewrite the 0-1 loss for a linear classifier as
$$
\ell_{0/1}(\beta,x_i,y_i) = 1\{ y_i \beta^\top x_i < 0 \}.
$$
Throughout we will denote our losses as functions of $\beta$ to reflect the fact that we are only considering linear classifiers.
The logistic loss and the hinge loss are also functions of $y_i \beta^\top x_i$, they are
$$
\ell_{L} (\beta, x_i, y_i) = \log(1 + \exp(-y_i \beta^\top x_i))
\tag{logistic}
$$
and
$$
\ell_{H} (\beta, x_i, y_i) = (1 - y_i \beta^\top x_i)_+
\tag{hinge}
$$
where $a_+ = a \, 1\{ a > 0 \}$ is the positive part of the real number $a$.
If we are free to select training loss functions, then why not square error loss? For example, we could choose
$$
\ell_{S} (\beta, x_i, y_i) = (y_i - \beta^\top x_i)^2 = (1 - y_i \beta^\top x_i)^2.
\tag{square error}
$$
In order to motivate the use of these, let's plot the losses as a function of $y_i \beta^\top x_i$.
End of explanation
z_log = y_sim*lr_sim.decision_function(X_sim)
logisticloss = np.log(1 + np.exp(-z_log))
plt.scatter(X0[:,0],X0[:,1],s=logisticloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=logisticloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by logistic loss")
_ = plt.legend(loc=2)
hingeloss = (1-z_log)*(z_log < 1)
plt.scatter(X0[:,0],X0[:,1],s=hingeloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=hingeloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by hinge loss")
_ = plt.legend(loc=2)
l2loss = (1-z_log)**2.
plt.scatter(X0[:,0],X0[:,1],s=l2loss[N:]*10.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=l2loss[:N]*10.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by sqr. loss")
_ = plt.legend(loc=2)
Explanation: Comparing these we see that the logistic loss is smooth---it has continuous first and second derivatives---and it is decreasing as $y_i \beta^\top x_i$ is increasing. The hinge loss is interesting, it is continuous, but it has a discontinuous first derivative. This changes the nature of optimization algorithms that we will tend to use. On the other hand the hinge loss is zero for large enough $y_i \beta^\top x_i$, as opposed to the logistic loss which is always non-zero. Below we depict these two losses by weighting each point by the loss for the fitted classifier.
End of explanation
lamb = 1.
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
Explanation: We see that for the logistic loss the point sizes vanish as points move far onto the correct side of the separator hyperplane. The hinge loss is exactly zero once a point is sufficiently far onto the correct side of the hyperplane. The square error loss has increased weight for those far from the hyperplane, even if they are correctly classified. Hence, square error loss is not a good surrogate for 0-1 loss.
Support vector machines are algorithmically unstable if we only try to minimize the training error with the hinge loss. So, we add an additional term to make the optimization somewhat easier, which we call a ridge regularization. Regularization is the process of adding a term to the objective that biases the results in a certain way. Ridge regularization means that we minimize the following objective for our loss,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac 1n \sum_{i=1}^n \ell(\beta,x_i,y_i) + \lambda \sum_{j=1}^p \beta_j^2.
$$
This has the effect of biasing the $\hat \beta_j$ towards 0 for $j=1,\ldots,p$. Typically, this has little effect on the test error, except for when $\lambda$ is too large, but it does often make the result more computationally stable.
In Scikit-learn we can initialize a logistic regression instance with linear_model.LogisticRegression. For historical reasons they parametrize the ridge regularization with the reciprocal of the $\lambda$ parameter that they call C. All supervised learners in Scikit-learn have a fit and predict method. Let's apply this to the bank data using the predictors age, balance, and duration.
End of explanation
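# A hedged illustration of the C = 1/lambda parametrization: increasing lambda
# (decreasing C) shrinks the fitted coefficients toward zero.
for lamb_demo in [0.01, 1., 100.]:
    lr_demo = linear_model.LogisticRegression(penalty='l2', C=1/lamb_demo)
    lr_demo.fit(X_tr, y_tr)
    print(lamb_demo, lr_demo.coef_[0, :], lr_demo.intercept_)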
yhat = lr.predict(X_te)
(yhat != y_te).mean()
Explanation: We can then predict on the test set and see what the resulting 0-1 test error is.
End of explanation
score_lr = X_te @ lr.coef_[0,:]
fpr_lr, tpr_lr, threshs = metrics.roc_curve(y_te,score_lr)
prec_lr, rec_lr, threshs = metrics.precision_recall_curve(y_te,score_lr)
Explanation: On the other hand we can extract a score by using the coefficients themselves. We can also extract the ROC and PR curves for this score.
End of explanation
lamb = 1.
svc = svm.SVC(C = 1/lamb,kernel='linear')
svc.fit(X_tr,y_tr)
yhat = svc.predict(X_te)
score_svc = X_te @ svc.coef_[0,:]
fpr_svc, tpr_svc, threshs = metrics.roc_curve(y_te,score_svc)
prec_svc, rec_svc, threshs = metrics.precision_recall_curve(y_te,score_svc)
(yhat != y_te).mean()
Explanation: We can also do this same procedure for SVMs. Notice that we set the kernel='linear' argument in SVC. SVMs in general can use a kernel to make them non-linear classifiers, but we want to only consider linear classifiers here.
End of explanation
plt.figure(figsize=(6,6))
plt.plot(fpr_lr,tpr_lr,label='logistic')
plt.plot(fpr_svc,tpr_svc,label='svm')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC curve comparison")
plt.figure(figsize=(6,6))
plt.plot(rec_lr,prec_lr,label='logistic')
plt.plot(rec_svc,prec_svc,label='svm')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve comparison")
Explanation: We see that SVMs achieves a slightly lower 0-1 test error. We can also compare these two methods based on the ROC and PR curves.
End of explanation
def test_lamb(lamb):
    """Test error for logistic regression and different lambda"""
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
yhat = lr.predict(X_te)
return (yhat != y_te).mean()
test_frame = pd.DataFrame({'lamb':lamb,'error':test_lamb(lamb)} for lamb in 1.5**np.arange(-5,30))
p9.ggplot(test_frame,p9.aes(x='lamb',y='error')) + p9.geom_line() + p9.scale_x_log10()\
+ p9.labels.ggtitle('Misclassification rate for tuning lambda')
(y_te == 1).mean()
Explanation: Despite having a lower misclassification rate (0-1 test error), the ROC for logistic regression is uniformly better than that for SVMs, and for all but the lowest recall regions, the precision is better for logistic regression. We might conclude that despite the misclassification rate, logistic regression is the better classifier in this case.
Tuning the ridge penalty $\lambda$
Let's consider how regularized logistic regression performs with different values of $\lambda$. It is quite common for selection of $\lambda$ to not improve the test error in any significant way for ridge regularization. We mainly use it for computational reasons.
End of explanation
def get_prec(lr,X,y,K):
    """Find precision for top K"""
lr_score = X @ lr.coef_[0,:]
sc_sorted_id = np.argsort(lr_score)[::-1]
return np.mean(y[sc_sorted_id[:K]] == 1)
def prec_lamb(lamb):
    """Precision at the top K for logistic regression and different lambda"""
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
prec_K = int(.12 * len(y_te))
return get_prec(lr,X_te,y_te,prec_K)
test_frame = pd.DataFrame({'lamb':lamb,'prec':prec_lamb(lamb)} for lamb in 1.5**np.arange(-5,30))
p9.ggplot(test_frame,p9.aes(x='lamb',y='prec')) + p9.geom_line() + p9.scale_x_log10()\
+ p9.labels.ggtitle('Precision for tuning lambda')
Explanation: In the above plot, we see that there is not a significant change in error as we increase $\lambda$, which could be for a few reasons. The most likely explanation is that if we look only at misclassification rate, it is hard to beat predicting every observation as a -1. Due to the class imbalance, the proportion of 1's is 0.106 which is quite close to the test error in this case, so the classifier only needs to do better than random on a very small proportion of points to beat this baseline rate.
We can also do the same thing for Precision, and obtain a similar result.
End of explanation
alpha = 4.
zolossn = z_range < 0
zolossp = z_range > 0
logisticlossn = np.log(1 + np.exp(-z_range))
logisticlossp = np.log(1 + np.exp(z_range))
plt.plot(z_range, alpha*(logisticlossn + 1 - np.log(2.)),label='logistic y=1')
plt.plot(z_range, logisticlossp + 1 - np.log(2.),label='logistic y=-1')
plt.plot(z_range, alpha*zolossn,label='0-1 loss y=1')
plt.plot(z_range, zolossp,label='0-1 loss y=-1')
plt.ylim([-.2,8])
plt.title('Class weighted losses')
plt.xlabel(r'$\beta^\top x_i$')
plt.ylabel('weighted loss')
_ = plt.legend()
Explanation: Class Weighting
Recall that the response variable in the bank data had significant class imbalance, with a prevalence of "no" responses (encoded as -1). One way that we can deal with class imbalance is by weighting the losses for the different classes differently. Specifically, we specify a class weighting function $\pi(y)$ such that $\pi(1)$ is the weight applied to the positive class and $\pi(-1) = 1-\pi(1)$ is the weight on the negative class. Then the objective for our classifier becomes
$$
\frac 1n \sum_{i=1}^n \pi(y_i) \ell(\beta,x_i,y_i).
$$
In the event that $\pi(1) > \pi(-1)$ then we will put more emphasis on classifying the 1's correctly. By default, $\pi(1) = 0.5$ which is equivalent to no weighting. This may be appropriate if we have class imbalance with fewer 1's. Let's consider what these weighted losses now look like as a function of $\beta^\top x_i$ for logistic and 0-1 loss (below $\pi(1) = .8$).
End of explanation
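# A hedged sketch of how 'balanced' class weights can be computed by hand:
# weights inversely proportional to the class frequencies in the training set.
n_pos, n_neg = np.sum(y_tr == 1), np.sum(y_tr == -1)
w_pos = len(y_tr) / (2. * n_pos)   # weight for the rare positive class
w_neg = len(y_tr) / (2. * n_neg)   # weight for the common negative class
print(w_pos, w_neg)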
y_hat = lr_sim.predict(X_sim)
z_log = y_sim*lr_sim.decision_function(X_sim)
logisticloss = np.log(1 + np.exp(-z_log))
plt.scatter(X0[:,0],X0[:,1],s=alpha*logisticloss[N:]*20.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=logisticloss[:N]*20.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
_ = plt.legend(loc=2)
Explanation: Hence, the loss is higher if we misclassify the 1's versus the -1's. We can also visualize what this would do for the points in our two dimensional simulated dataset.
End of explanation
lr_bal = linear_model.LogisticRegression(class_weight='balanced')
lr_bal.fit(X_tr,y_tr)
z_log = lr_bal.predict_proba(X_te)[:,1]
plt.figure(figsize=(6,6))
prec_lr_bal, rec_lr_bal, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec_lr_bal,prec_lr_bal,label='Weighted logistic')
plt.plot(rec_lr,prec_lr,label='Unweighted logistic')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
svc_bal = svm.SVC(C = 1/lamb,kernel='linear',class_weight='balanced')
svc_bal.fit(X_tr,y_tr)
z_svm = svc_bal.decision_function(X_te)
plt.figure(figsize=(6,6))
prec_svm_bal, rec_svm_bal, _ = metrics.precision_recall_curve(y_te,z_svm)
plt.plot(rec_svm_bal,prec_svm_bal,label='Weighted svm')
plt.plot(rec_svc,prec_svc,label='Unweighted svm')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
Explanation: In the above plot we are weighting the loss for the negative points 4 times as much as the positive points. How do we choose the amount that we weight by? Typically, this is chosen to be inversely proportional to the proportion of points in that class. In Scikit-learn this is achieved using class_weight='balanced'. Below we fit the balanced logistic regression and SVM and compare them to the unweighted versions in the PR curve.
End of explanation
K_max = np.argmax(prec_svc)
Explanation: It seems that in this case class balancing is not essential, and does not substantially change the results.
Aside: This interpretation of logistic regression and SVMs is not the standard first introduction to these two methods. Instead of introducing surrogate losses, it is common to introduce the logistic model to motivate logistic regression. Under a particular statistical model, logistic regression is a maximum likelihood estimator. Also, support vector machines are often motivated by considering the separator hyperplane and trying to maximize the distance that the points are from that plane. This interpretation is complicated by the fact that if the dataset is not perfectly separated, then we need to introduce 'slack' variables that allow some slack to have misclassified points. We find that compared to these, the surrogate loss interpretation is simpler and often more enlightening.
End of explanation
bank_tr['marital'].unique()
Explanation: Feature engineering and overfitting
Feature engineering is the process of constructing new variables from other variables or unstructured data. There are a few reasons why you might want to do this:
Your data is unstructured, text, or categorical and you need your predictors to be numeric;
due to some domain knowledge you have good guesses at important predictors that you need to construct;
or you want to turn a linear classifier into a non-linear classifier with a non-linear transformation.
We will discuss dealing with unstructured and text data more in later chapters, but for now, let's consider dealing with categorical data. The most common way to encode categorical data is to create dummy variables for each category. This is called the one-hot encoding, and in Scikit-learn it is preprocessing.OneHotEncoder. We can see the unique values for the marital variable in the bank data.
End of explanation
OH_marital = preprocessing.OneHotEncoder(handle_unknown='ignore',sparse=False)
OH_marital.fit(bank_tr[['marital']])
marital_trans = OH_marital.transform(bank_tr[['marital']])
Explanation: We initialize the encoder, which by itself does nothing to the data. We set it to ignore categories that were not seen during fitting and to output a dense matrix. When we fit on the data it determines the unique values and assigns a dummy variable to each value.
End of explanation
marital_trans[[0,10,17],:]
Explanation: Let's look at the output of the transformation.
End of explanation
bank_tr.iloc[:3,2]
Explanation: Then look at the actual variable.
End of explanation
## TRAINING TRANSFORMATIONS
## extract y and save missingness
y_tr = bank_tr['y']
del bank_tr['y']
y_tr = 2*(y_tr == 'yes').values - 1
bank_nas_tr = bank_tr.isna().values
# find object and numerical column names
obj_vars = bank_tr.dtypes[bank_tr.dtypes == 'object'].index.values
num_vars = bank_tr.dtypes[bank_tr.dtypes != 'object'].index.values
# create imputers and encoders for categorical vars and fit
obj_imp = [impute.SimpleImputer(strategy='most_frequent').fit(bank_tr[[var]])\
for var in obj_vars]
obj_tr_trans = [imp.transform(bank_tr[[var]]) for imp,var in zip(obj_imp,obj_vars)]
obj_OH = [preprocessing.OneHotEncoder(handle_unknown='ignore',sparse=False).fit(var_data)\
for var_data in obj_tr_trans]
obj_tr_trans = [OH.transform(var_data)[:,:-1] for OH, var_data in zip(obj_OH,obj_tr_trans)]
# Store the variable names associated with transformations
obj_var_names = sum(([var]*trans.shape[1] for var,trans in zip(obj_vars,obj_tr_trans)),[])
Explanation: It seems that the first dummy is assigned to 'divorced', the second to 'married', and the third to 'single'. This method cannot handle missing data, so a tool such as impute.SimpleImputer should be used first. All of these methods are considered transformations, and they have fit methods. The reason for this interface design is to make it so that we can fit on the training set and then transform can be applied to the test data.
In the following, we will create an imputer and encoder for each categorical variable, then transform the training data.
End of explanation
def fixed_trans(df):
    """Apply the selected fixed transformations (x -> x**2 and x -> log(1+|x|)) and stack them with the original columns."""
    return np.hstack([df, df**2, np.log(np.abs(df)+1)])
Explanation: We can also apply fixed transformations to the variables. While these could involve multiple variables and encode interactions, we will only use univariate transformations. The following method applies the transformations
$x \to x^2$ and $x \to \log(1 + |x|)$ and combines these with the original numerical dataset.
End of explanation
# create imputers for numerical vars and fit
num_tr_vals = bank_tr[num_vars]
num_imp = impute.SimpleImputer(strategy='median').fit(num_tr_vals)
num_tr_trans = num_imp.transform(num_tr_vals)
num_tr_trans = fixed_trans(num_tr_trans)
# numerical variable names
num_var_names = list(num_tr_vals.columns.values)*3
Explanation: We can now apply imputation and this fixed transformation to the numerical variables.
End of explanation
# stack together for training predictors
X_tr = np.hstack(obj_tr_trans + [num_tr_trans,bank_nas_tr])
keep_cols = (X_tr.std(axis=0) != 0)
X_tr = X_tr[:,keep_cols]
Explanation: I noticed that, for various reasons, the standard deviation of some created variables is 0 (probably when a binary variable takes only one value). We filter these out in the following lines.
End of explanation
var_names = np.array(obj_var_names + num_var_names + list(bank_tr.columns.values))
var_names = var_names[keep_cols]
Explanation: The variable names are filtered in the same way; these will be used later to identify the variables by their indices in the transformed data.
End of explanation
## TESTING TRANSFORMATIONS
y_te = bank_te['y']
del bank_te['y']
y_te = 2*(y_te == 'yes') - 1
y_te = np.array(y_te)
bank_nas_te = bank_te.isna().values
obj_te_trans = [imp.transform(bank_te[[var]]) for imp,var in zip(obj_imp,obj_vars)]
obj_te_trans = [OH.transform(var_data)[:,:-1] for OH, var_data in zip(obj_OH,obj_te_trans)]
num_te_vals = bank_te[num_vars]
num_te_trans = num_imp.transform(num_te_vals)
num_te_trans = fixed_trans(num_te_trans)
X_te = np.hstack(obj_te_trans + [num_te_trans,bank_nas_te])
X_te = X_te[:,keep_cols]
Explanation: These transformations can now be applied to the test data. Because they were all fit on the training data, we do not run the risk of fitting to the testing data. It is important to maintain this order: fit on the training data first, then transform the test data.
End of explanation
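One way to enforce this fit-then-transform discipline automatically is to wrap the per-column steps in a ColumnTransformer inside a Pipeline; the sketch below is only an illustration of that alternative (it is not what the rest of this chapter uses) and reuses the obj_vars and num_vars defined above.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
preprocess = ColumnTransformer([
    ('cat', Pipeline([('imp', impute.SimpleImputer(strategy='most_frequent')),
                      ('oh', preprocessing.OneHotEncoder(handle_unknown='ignore', sparse=False))]),
     list(obj_vars)),
    ('num', impute.SimpleImputer(strategy='median'), list(num_vars)),
])
# fitting learns all imputers/encoders from the training data only;
# transform then applies the same fitted steps to the test data
X_tr_alt = preprocess.fit_transform(bank_tr)
X_te_alt = preprocess.transform(bank_te)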
lr = linear_model.LogisticRegression()
lr.fit(X_tr,y_tr)
yhat_te = lr.predict(X_te)
yhat_tr = lr.predict(X_tr)
print("""Training error: {}
Testing error: {}""".format((yhat_tr != y_tr).mean(), (yhat_te != y_te).mean()))
Explanation: Now we are ready to train our linear classifier, and we will use logistic regression.
End of explanation
z_log = lr.predict_proba(X_te)[:,1]
plt.figure(figsize=(6,6))
prec, rec, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec,prec,label='all vars')
plt.plot(rec_lr,prec_lr,label='3 vars')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
Explanation: We output the training error along with the testing 0-1 error. The testing error is actually lower than the training error. Notably, neither is much larger than the proportion of 1s in the dataset. This indicates that the 0-1 loss is not a very good measure of performance on this imbalanced dataset. Instead, let's compare the PR curves for these two.
End of explanation
def lm_sim(N = 20):
    """Simulate a binary response and two predictors."""
X1 = (np.random.randn(N*2)).reshape((N,2)) + np.array([3,2])
X0 = (np.random.randn(N*2)).reshape((N,2)) + np.array([3,4])
y = - np.ones(N*2)
y[:N]=1
X = np.vstack((X1,X0))
return X, y, X0, X1
X_sim, y_sim, X0, X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
Explanation: We see that only in the low recall regions do all of these variables improve our test precision. So did we do all of that work creating new variables for nothing?
Model selection and overfitting
What we are observing above is overfitting. When you add a new predictor variable into the regression it may help you explain the response variable, but what if it is completely independent of the response? It turns out that this new variable can actually hurt you by adding variability to your classifier. For illustration, consider the following simulation.
End of explanation
def sep_lr(T, lr):
    """Return the separator line x2 = -(beta0 + beta1*T)/beta2 for a fitted logistic regression."""
    beta1 = lr.coef_[0,0]
    beta2 = lr.coef_[0,1]
    beta0 = lr.intercept_
    return -(beta0 + beta1*T) / beta2
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
mults=0.8
T = np.linspace(0,7,100)
x2hat = sep_lr(T,lr_sim)
X_other = X_sim.copy()
X_other[:,0] = 0.
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_other,y_sim)
x1hat = sep_lr(T,lr_sim)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x1hat,'--',c='k',label='1-var sep')
plt.plot(T,x2hat,c='k',label='2-var sep')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
Explanation: This dataset is generated such that only the second X variable (on the Y-axis in the plot) is influencing the probability of a point being positive or negative. There are also only 20 data points. Let's fit a logistic regression with both predictors and also only the relevant predictor, then plot the separator line.
End of explanation
def get_prec(lr,X,y,K):
    """Find the precision among the top K scored observations."""
lr_score = X @ lr.coef_[0,:]
sc_sorted_id = np.argsort(lr_score)[::-1]
return np.mean(y[sc_sorted_id[:K]] == 1)
def test_kbest(X_tr,y_tr,X_te,y_te,k,prec_perc = .115):
    """Train and test a logistic regression using only the k best variables."""
## Training
# Feature Selection
skb = feature_selection.SelectKBest(k=k)
skb.fit(X_tr,y_tr)
X_tr_kb = skb.transform(X_tr)
# Fitting
lr = linear_model.LogisticRegression()
lr.fit(X_tr_kb,y_tr)
yhat_tr = lr.predict(X_tr_kb)
prec_K = int(prec_perc * len(y_tr))
tr_prec = get_prec(lr,X_tr_kb,y_tr,prec_K)
tr_error = (yhat_tr != y_tr).mean()
## Testing
X_te_kb = skb.transform(X_te)
yhat_te = lr.predict(X_te_kb)
prec_K = int(prec_perc * len(y_te))
te_prec = get_prec(lr,X_te_kb,y_te,prec_K)
te_error = (yhat_te != y_te).mean()
return tr_error, te_error, tr_prec, te_prec
Explanation: We see that due to the randomness of the data points the classifier that uses both predictors is significantly perturbed and will result in a higher test error than the one that only uses the relevant variable. This is an example of how adding irrelevant variables can hurt your predictions, because the classifier begins to fit to noise in the data. This problem is pronounced when you have a small amount of data or many predictors.
One remedy to this is to order your predictor variables by some measure that you think indicates importance. Then you can select the best K variables and look at your test error to determine K. K is called a tuning parameter, and to select it you have to compare test errors, because otherwise you will not necessarily detect overfitting. In Scikit-learn several methods are available for selecting best K predictors, and we will focus on using the method feature_selection.SelectKBest. By default this considers a single predictor variable at a time, then performs an ANOVA on that predictor with the binary response variable as the independent variable (it flips the roles of the response and predictors). Then it computes the F score which when maximized gives the best predictor variable. Choosing the top K F scores gives you the best K variables. In the following code, we use the best K predictors to transform the training and test set. Like in other transformations, it is important to fit it only on the training data.
End of explanation
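The ANOVA F scores that SelectKBest ranks by can also be inspected directly; this is just an illustrative peek using the X_tr, y_tr and var_names defined above.
F, pval = feature_selection.f_classif(X_tr, y_tr)
order = np.argsort(F)[::-1]
# the ten variables with the largest F statistics
print(list(zip(var_names[order[:10]], np.round(F[order[:10]], 1))))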
errors = [test_kbest(X_tr,y_tr,X_te,y_te,k) for k in range(1,X_tr.shape[1]+1)]
train_error, test_error, tr_prec, te_prec = zip(*errors)
Explanation: In the above code, we return the 0-1 errors and the precision when we recommend the best 11.5% of the scores. This number is chosen because it is the proportion of positives in the training set.
End of explanation
plt.plot(train_error,label='training')
plt.plot(test_error,label='testing')
plt.legend()
Explanation: We can plot the 0-1 error and see how it responds to the tuning parameter K.
End of explanation
plt.plot(tr_prec,label='training')
plt.plot(te_prec,label='testing')
plt.legend()
Explanation: Unfortunately, as before, the 0-1 error is not very helpful as a metric, and does not deviate significantly from just predicting all negatives. Looking at precision is somewhat more useful, and we can see that it increases in a noisy way until it drops significantly as we increase K. This is consistent with an overfitting-underfitting tradeoff. At the beginning we are underfitting because we have not selected all of the important predictor variables, but then we start to overfit when K gets large.
End of explanation
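Because picking K by inspecting the test error reuses the test data, a common alternative (sketched here for illustration only, not part of the original analysis) is to choose K by cross-validation on the training set.
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
pipe = Pipeline([('kbest', feature_selection.SelectKBest()),
                 ('lr', linear_model.LogisticRegression())])
search = GridSearchCV(pipe, {'kbest__k': [5, 10, 25, 50]}, cv=5, scoring='average_precision')
search.fit(X_tr, y_tr)
print(search.best_params_)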
skb = feature_selection.SelectKBest(k=10)
skb.fit(X_tr,y_tr)
set(var_names[skb.get_support()])
skb = feature_selection.SelectKBest(k=25)
skb.fit(X_tr,y_tr)
set(var_names[skb.get_support()])
Explanation: Let's look at the variables that are selected in the best 10 and best 25.
End of explanation
X_tr_kb = skb.transform(X_tr)
lr = linear_model.LogisticRegression()
lr.fit(X_tr_kb,y_tr)
X_te_kb = skb.transform(X_te)
z_log = lr.predict_proba(X_te_kb)[:,1]
plt.figure(figsize=(6,6))
prec_skb, rec_skb, _ = metrics.precision_recall_curve(y_te,z_log)
plt.plot(rec_skb,prec_skb,label='25-best vars')
plt.plot(rec,prec,label='all vars')
plt.plot(rec_lr,prec_lr,label='3 vars')
plt.xlabel('recall')
plt.ylabel('precision')
plt.ylim([-.1,1.1])
plt.legend(loc=3)
_ = plt.plot()
Explanation: It seems that duration is included in both but many other variables are added in addition to age and balance. We can also compare the PR curve for the test error for different models.
End of explanation |
11,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import Python Packages
To install the kernel used by NERSC-metatlas users, copy the following text to $HOME/.ipython/kernels/mass_spec_cori/kernel.json
{
"argv"
Step1: 2. Set atlas, project and output directories from your nersc home directory
Create a project folder name for this analysis by replacing the PROJECTDIRECTORY string text in red below. Make sure to update the rest of the direcory to point to your home directory. The pwd block will print out the directory where this jupyter notebook is stored.
Create a subdirectory name for the output, each run through you may want to create a new output folder.
When you run the block the folders will be created in your home directory. If the directory already exists, the block will just set the path for use with future code blocks.
Step2: 3. Select groups and get QC files
Step3: 4. Get template QC atlas from database
Available templates in Database
Step4: 4b. Uncomment the block below to adjust RT window
Step5: 5. Create metatlas dataset from QC files and QC atlas
Step6: 5b Optional
Step7: 6. Summarize RT peak across files and make data frame
Step8: 7. Create Compound atlas RTs plot and choose file for prediction
Step9: 8. Create RT adjustment model - Linear & Polynomial Regression
Step10: 8. Plot actual vs predict RT values and fit a median coeff+intercept line
Step11: 9. Choose your model
Step12: 10. Save RT model (optional)
Step13: 11. Auto RT adjust Template atlases
Available templates in Database
Step14: OPTIONAL BLOCK FOR RT PREDICTION OF CUSTOM ATLAS | Python Code:
from IPython.core.display import Markdown, display, clear_output, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib notebook
%matplotlib inline
%env HDF5_USE_FILE_LOCKING=FALSE
import sys, os
#### add a path to your private code if not using production code ####
#print ('point path to metatlas repo')
sys.path.insert(0,"/global/homes/v/vrsingan/repos/metatlas") #where your private code is
######################################################################
from metatlas.plots import dill2plots as dp
from metatlas.io import metatlas_get_data_helper_fun as ma_data
from metatlas.plots import chromatograms_mp_plots as cp
from metatlas.plots import chromplotplus as cpp
from metatlas.datastructures import metatlas_objects as metob
import time
import numpy as np
import multiprocessing as mp
import pandas as pd
import operator
import matplotlib.pyplot as plt
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', 100)
def printmd(string):
display(Markdown(string))
Explanation: 1. Import Python Packages
To install the kernel used by NERSC-metatlas users, copy the following text to $HOME/.ipython/kernels/mass_spec_cori/kernel.json
{
"argv": [
"/global/common/software/m2650/python-cori/bin/python",
"-m",
"IPython.kernel",
"-f",
"{connection_file}"
],
"env": {
"PATH": "/global/common/software/m2650/python-cori/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
},
"display_name": "mass_spec_cori",
"language": "python"
}
End of explanation
project_directory='/global/homes/FIRST-INITIAL-OF-USERNAME/USERNAME/PROJECTDIRECTORY/' # <- edit this line, do not copy the path directly from NERSC (ex. the u1, or u2 directories)
output_subfolder='HILIC_POS_20190830/' # <- edit this as 'chromatography_polarity_yyyymmdd/'
output_dir = os.path.join(project_directory,output_subfolder)
output_data_qc = os.path.join(output_dir,'data_QC')
if not os.path.exists(project_directory):
os.makedirs(project_directory)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(output_data_qc):
os.makedirs(output_data_qc)
Explanation: 2. Set atlas, project and output directories from your nersc home directory
Create a project folder name for this analysis by replacing the PROJECTDIRECTORY string text in red below. Make sure to update the rest of the direcory to point to your home directory. The pwd block will print out the directory where this jupyter notebook is stored.
Create a subdirectory name for the output, each run through you may want to create a new output folder.
When you run the block the folders will be created in your home directory. If the directory already exists, the block will just set the path for use with future code blocks.
End of explanation
groups = dp.select_groups_for_analysis(name = '%20201106%505892%HILIC%KLv1%',
most_recent = True,
remove_empty = True,
include_list = ['QC'], exclude_list = ['NEG']) #['QC','Blank']
groups = sorted(groups, key=operator.attrgetter('name'))
file_df = pd.DataFrame(columns=['file','time','group'])
for g in groups:
for f in g.items:
if hasattr(f, 'acquisition_time'):
file_df = file_df.append({'file':f, 'time':f.acquisition_time,'group':g}, ignore_index=True)
else:
file_df = file_df.append({'file':f, 'time':0,'group':g}, ignore_index=True)
file_df = file_df.sort_values(by=['time'])
for file_data in file_df.iterrows():
print(file_data[1].file.name)
Explanation: 3. Select groups and get QC files
End of explanation
# DO NOT EDIT THIS BLOCK
pos_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_POS',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_POS',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS']
neg_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_NEG',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG']
#Atlas File Name
QC_template_filename = pos_templates[1]
atlases = metob.retrieve('Atlas',name=QC_template_filename,
username='vrsingan')
names = []
for i,a in enumerate(atlases):
print(i,a.name,pd.to_datetime(a.last_modified,unit='s'),len(a.compound_identifications))
# #Alternatively use this block to create QC atlas from spreadsheet
# import datetime
#dp = reload(dp)
# QC_template_filename = " " #<- Give the template filename to be used for storing in Database
#myAtlas = dp.make_atlas_from_spreadsheet('/global/project/projectdirs/metatlas/projects/1_TemplateAtlases/TemplateAtlas_HILICz150mm_Annotation20190824_QCv3_Unlabeled_Positive.csv',
# QC_template_filename,
# filetype='csv',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
#atlases = dp.get_metatlas_atlas(name=QC_template_filename,do_print = True,most_recent=True)
myAtlas = atlases[-1]
atlas_df = ma_data.make_atlas_df(myAtlas)
atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
print(myAtlas.name)
print(myAtlas.username)
Explanation: 4. Get template QC atlas from database
Available templates in Database:
Index Atlas_name(POS)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_POS\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_POS\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_POS\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_POS\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS
Index Atlas_name(NEG)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_NEG\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG
End of explanation
# rt_allowance = 1.5
# atlas_df['rt_min'] = atlas_df['rt_peak'].apply(lambda rt: rt-rt_allowance)
# atlas_df['rt_max'] = atlas_df['rt_peak'].apply(lambda rt: rt+rt_allowance)
# for compound in range(len(myAtlas.compound_identifications)):
# rt_peak = myAtlas.compound_identifications[compound].rt_references[0].rt_peak
# myAtlas.compound_identifications[compound].rt_references[0].rt_min = rt_peak - rt_allowance
# myAtlas.compound_identifications[compound].rt_references[0].rt_max = rt_peak + rt_allowance
Explanation: 4b. Uncomment the block below to adjust RT window
End of explanation
all_files = []
for file_data in file_df.iterrows():
all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
pool = mp.Pool(processes=min(4, len(all_files)))
t0 = time.time()
metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
pool.close()
pool.terminate()
#If your code crashes here, make sure to terminate any processes left open.
print(time.time() - t0)
Explanation: 5. Create metatlas dataset from QC files and QC atlas
End of explanation
# dp = reload(dp)
# num_data_points_passing = 3
# peak_height_passing = 1e4
# atlas_df_passing = dp.filter_atlas(atlas_df=atlas_df, input_dataset=metatlas_dataset, num_data_points_passing = num_data_points_passing, peak_height_passing = peak_height_passing)
# print("# Compounds in Atlas: "+str(len(atlas_df)))
# print("# Compounds passing filter: "+str(len(atlas_df_passing)))
# atlas_passing = myAtlas.name+'_filteredby-datapnts'+str(num_data_points_passing)+'-pkht'+str(peak_height_passing)
# myAtlas_passing = dp.make_atlas_from_spreadsheet(atlas_df_passing,
# atlas_passing,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
# atlases = dp.get_metatlas_atlas(name=atlas_passing,do_print = True, most_recent=True)
# myAtlas = atlases[-1]
# atlas_df = ma_data.make_atlas_df(myAtlas)
# atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# print(myAtlas.name)
# print(myAtlas.username)
# metob.to_dataframe([myAtlas])#
# all_files = []
# for file_data in file_df.iterrows():
# all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
# pool = mp.Pool(processes=min(4, len(all_files)))
# t0 = time.time()
# metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
# pool.close()
# pool.terminate()
# #If your code crashes here, make sure to terminate any processes left open.
# print(time.time() - t0)
Explanation: 5b Optional: Filter atlas for compounds with no or low signals
Uncomment the below 3 blocks to filter the atlas.
Please ensure that correct polarity is used for the atlases.
End of explanation
from importlib import reload
dp=reload(dp)
rts_df = dp.make_output_dataframe(input_dataset = metatlas_dataset, fieldname='rt_peak', use_labels=True, output_loc = output_data_qc, summarize=True)
rts_df.to_csv(os.path.join(output_data_qc,"QC_Measured_RTs.csv"))
rts_df
Explanation: 6. Summarize RT peak across files and make data frame
End of explanation
import itertools
import math
from __future__ import division
from matplotlib import gridspec
import matplotlib.ticker as mticker
rts_df['atlas RT peak'] = [compound['identification'].rt_references[0].rt_peak for compound in metatlas_dataset[0]]
# number of columns in rts_df that are not values from a specific input file
num_not_files = len(rts_df.columns) - len(metatlas_dataset)
rts_df_plot = rts_df.sort_values(by='standard deviation', ascending=False, na_position='last') \
.drop(['#NaNs'], axis=1) \
.dropna(axis=0, how='all', subset=rts_df.columns[:-num_not_files])
fontsize = 2
pad = 0.1
cols = 8
rows = int(math.ceil((rts_df.shape[0]+1)/8))
fig = plt.figure()
gs = gridspec.GridSpec(rows, cols, figure=fig, wspace=0.2, hspace=0.4)
for i, (index, row) in enumerate(rts_df_plot.iterrows()):
ax = fig.add_subplot(gs[i])
ax.tick_params(direction='in', length=1, pad=pad, width=0.1, labelsize=fontsize)
ax.scatter(range(rts_df_plot.shape[1]-num_not_files),row[:-num_not_files], s=0.2)
ticks_loc = np.arange(0,len(rts_df_plot.columns)-num_not_files , 1.0)
ax.axhline(y=row['atlas RT peak'], color='r', linestyle='-', linewidth=0.2)
ax.set_xlim(-0.5,len(rts_df_plot.columns)-num_not_files+0.5)
ax.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc))
range_columns = list(rts_df_plot.columns[:-num_not_files])+['atlas RT peak']
ax.set_ylim(np.nanmin(row.loc[range_columns])-0.12,
np.nanmax(row.loc[range_columns])+0.12)
[s.set_linewidth(0.1) for s in ax.spines.values()]
# truncate name so it fits above a single subplot
ax.set_title(row.name[:33], pad=pad, fontsize=fontsize)
ax.set_xlabel('Files', labelpad=pad, fontsize=fontsize)
ax.set_ylabel('Actual RTs', labelpad=pad, fontsize=fontsize)
plt.savefig(os.path.join(output_data_qc, 'Compound_Atlas_RTs.pdf'), bbox_inches="tight")
for i,a in enumerate(rts_df.columns):
print(i, a)
selected_column=9
Explanation: 7. Create Compound atlas RTs plot and choose file for prediction
End of explanation
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_absolute_error as mae
actual_rts, pred_rts, polyfit_rts = [],[],[]
current_actual_df = rts_df.loc[:,rts_df.columns[selected_column]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
ransac = RANSACRegressor(random_state=42)
rt_model_linear = ransac.fit(current_pred_df, current_actual_df)
coef_linear = rt_model_linear.estimator_.coef_[0]
intercept_linear = rt_model_linear.estimator_.intercept_
poly_reg = PolynomialFeatures(degree=2)
X_poly = poly_reg.fit_transform(current_pred_df)
rt_model_poly = LinearRegression().fit(X_poly, current_actual_df)
coef_poly = rt_model_poly.coef_
intercept_poly = rt_model_poly.intercept_
for i in range(rts_df.shape[1]-5):
current_actual_df = rts_df.loc[:,rts_df.columns[i]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
Explanation: 8. Create RT adjustment model - Linear & Polynomial Regression
End of explanation
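The mean_absolute_error imported above is not used in this excerpt; a small illustrative check like the following (a sketch only, reusing the variables defined above) quantifies how closely each candidate model fits the measured RTs of the selected QC file.
sel = rts_df.loc[:, rts_df.columns[selected_column]]
good = np.where(~np.isnan(sel))
sel_actual = sel.iloc[good]
sel_ref = atlas_df.iloc[good]['rt_peak'].values
print('linear MAE:', mae(sel_actual, coef_linear * sel_ref + intercept_linear))
print('polynomial MAE:', mae(sel_actual, coef_poly[1] * sel_ref + coef_poly[2] * sel_ref**2 + intercept_poly))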
#User can change to use particular qc file
import itertools
import math
from __future__ import division
from matplotlib import gridspec
x = list(itertools.chain(*pred_rts))
y = list(itertools.chain(*actual_rts))
rows = int(math.ceil((rts_df.shape[1]+1)/5))
cols = 5
fig = plt.figure(constrained_layout=False)
gs = gridspec.GridSpec(rows, cols, figure=fig)
plt.rc('font', size=6)
plt.rc('axes', labelsize=6)
plt.rc('xtick', labelsize=3)
plt.rc('ytick', labelsize=3)
for i in range(rts_df.shape[1]-5):
x = list(itertools.chain(*pred_rts[i]))
y = actual_rts[i]
ax = fig.add_subplot(gs[i])
ax.scatter(x, y, s=2)
ax.plot(np.linspace(0, max(x),100), coef_linear*np.linspace(0,max(x),100)+intercept_linear, linewidth=0.5,color='red')
ax.plot(np.linspace(0, max(x),100), (coef_poly[1]*np.linspace(0,max(x),100))+(coef_poly[2]*(np.linspace(0,max(x),100)**2))+intercept_poly, linewidth=0.5,color='green')
ax.set_title("File: "+str(i))
ax.set_xlabel('predicted RTs')
ax.set_ylabel('actual RTs')
fig_legend = "FileIndex FileName"
for i in range(rts_df.shape[1]-5):
fig_legend = fig_legend+"\n"+str(i)+" "+rts_df.columns[i]
fig.tight_layout(pad=0.5)
plt.text(0,-0.03*rts_df.shape[1], fig_legend, transform=plt.gcf().transFigure)
plt.savefig(os.path.join(output_data_qc, 'Actual_vs_Predicted_RTs.pdf'), bbox_inches="tight")
Explanation: 8. Plot actual vs predict RT values and fit a median coeff+intercept line
End of explanation
qc_df = rts_df[[rts_df.columns[selected_column]]]
qc_df = qc_df.copy()
print("Linear Parameters :", coef_linear, intercept_linear)
print("Polynomial Parameters :", coef_poly,intercept_poly)
qc_df.columns = ['RT Measured']
atlas_df.index = qc_df.index
qc_df['RT Reference'] = atlas_df['rt_peak']
qc_df['RT Linear Pred'] = qc_df['RT Reference'].apply(lambda rt: coef_linear*rt+intercept_linear)
qc_df['RT Polynomial Pred'] = qc_df['RT Reference'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
qc_df['RT Diff Linear'] = qc_df['RT Measured'] - qc_df['RT Linear Pred']
qc_df['RT Diff Polynomial'] = qc_df['RT Measured'] - qc_df['RT Polynomial Pred']
qc_df.to_csv(os.path.join(output_data_qc, "RT_Predicted_Model_Comparison.csv"))
qc_df
# CHOOSE YOUR MODEL HERE (linear / polynomial).
#model = 'linear'
model = 'polynomial'
Explanation: 9. Choose your model
End of explanation
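As an illustrative aid for this choice (not part of the original workflow), the mean absolute RT difference of each candidate model can be compared directly from the qc_df built above; the smaller value suggests the better-fitting model.
diff_summary = qc_df[['RT Diff Linear', 'RT Diff Polynomial']].abs().mean()
print(diff_summary)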
# Save model
with open(os.path.join(output_data_qc,'rt_model.txt'), 'w') as f:
if model == 'linear':
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_linear,
intercept_linear,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_linear.set_params()))
else:
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_poly,
intercept_poly,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_poly.set_params()))
Explanation: 10. Save RT model (optional)
End of explanation
pos_atlas_indices = [0,1,2,3,4]
neg_atlas_indices = [0,1,2,3,4]
free_text = '' # this will be appended to the end of the csv filename exported
save_to_db = False
for ix in pos_atlas_indices:
atlases = metob.retrieve('Atlas',name=pos_templates[ix], username='vrsingan')
prd_atlas_name = pos_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'positive',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
for ix in neg_atlas_indices:
atlases = metob.retrieve('Atlas',name=neg_templates[ix], username='vrsingan')
prd_atlas_name = neg_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'negative',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
Explanation: 11. Auto RT adjust Template atlases
Available templates in Database:
Index Atlas_name(POS)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_POS\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_POS\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_POS\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_POS\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS
Index Atlas_name(NEG)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_NEG\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG
End of explanation
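The positive- and negative-mode loops above are nearly identical; a small helper along the following lines (an illustrative sketch only, reusing the model coefficients and metatlas calls defined above) would avoid the duplication.
def rt_adjust_template(template_name, username='vrsingan'):
    # retrieve the template atlas and apply the chosen RT model to its rt_peak values
    atlas = metob.retrieve('Atlas', name=template_name, username=username)[-1]
    df = ma_data.make_atlas_df(atlas)
    df['label'] = [cid.name for cid in atlas.compound_identifications]
    if model == 'linear':
        df['rt_peak'] = df['rt_peak'].apply(lambda rt: coef_linear*rt + intercept_linear)
    else:
        df['rt_peak'] = df['rt_peak'].apply(lambda rt: coef_poly[1]*rt + coef_poly[2]*(rt**2) + intercept_poly)
    df['rt_min'] = df['rt_peak'] - .5
    df['rt_max'] = df['rt_peak'] + .5
    return df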
## Optional for custom template predictions
# atlas_name = '' #atlas name
# save_to_db = False
# atlases = metob.retrieve('Atlas',name=atlas_name, username='*')
# myAtlas = atlases[-1]
# PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
# PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# if model == 'linear':
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
# else:
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
# PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
# PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
# PRD_atlas_df.to_csv(os.path.join(output_data_qc, name=atlas_name.replace('TPL','PRD'), index=False)
# if save_to_db:
# dp.make_atlas_from_spreadsheet(PRD_atlas_df,
# PRD_atlas_name,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive', # NOTE - Please make sure you are choosing the correct polarity
# store=True,
# mz_tolerance = 12)
Explanation: OPTIONAL BLOCK FOR RT PREDICTION OF CUSTOM ATLAS
End of explanation |
11,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generation of tables and figures of MRIQC paper
This notebook is associated to the paper
Step1: Read some data (from mriqc package)
Step2: Figure 1
Step3: Figure 2
Step4: Figure 3
Step5: Figure 5
Step6: Evaluation on DS030
This section deals with the results obtained on DS030.
Table 4
Step7: Figure 6A
Step8: Figure 6B | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os.path as op
import numpy as np
import pandas as pd
from pkg_resources import resource_filename as pkgrf
from mriqc.viz import misc as mviz
from mriqc.classifier.data import read_dataset, combine_datasets
# Where the outputs should be saved
outputs_path = '../../mriqc-data/'
# Path to ABIDE's BIDS structure
abide_path = '/home/oesteban/Data/ABIDE/'
# Path to DS030's BIDS structure
ds030_path = '/home/oesteban/Data/ds030/'
Explanation: Generation of tables and figures of MRIQC paper
This notebook is associated to the paper:
Esteban O, Birman D, Schaer M, Koyejo OO, Poldrack RA, Gorgolewski KJ; MRIQC: Predicting Quality in Manual MRI Assessment Protocols Using No-Reference Image Quality Measures; bioRxiv 111294; doi:10.1101/111294.
End of explanation
x_path = pkgrf('mriqc', 'data/csv/x_abide.csv')
y_path = pkgrf('mriqc', 'data/csv/y_abide.csv')
ds030_x_path = pkgrf('mriqc', 'data/csv/x_ds030.csv')
ds030_y_path = pkgrf('mriqc', 'data/csv/y_ds030.csv')
rater_types = {'rater_1': float, 'rater_2': float, 'rater_3': float}
mdata = pd.read_csv(y_path, index_col=False, dtype=rater_types)
sites = list(sorted(list(set(mdata.site.values.ravel().tolist()))))
Explanation: Read some data (from mriqc package)
End of explanation
out_file = op.join(outputs_path, 'figures', 'fig01-artifacts.svg')
mviz.figure1(
op.join(abide_path, 'sub-50137', 'anat', 'sub-50137_T1w.nii.gz'),
op.join(abide_path, 'sub-50110', 'anat', 'sub-50110_T1w.nii.gz'),
out_file)
out_file_pdf = out_file[:-4] + '.pdf'
!rsvg-convert -f pdf -o $out_file_pdf $out_file
Explanation: Figure 1: artifacts in MRI
Shows a couple of subpar datasets from the ABIDE dataset
End of explanation
from mriqc.classifier.sklearn import preprocessing as mcsp
# Concatenate ABIDE & DS030
fulldata = combine_datasets([
(x_path, y_path, 'ABIDE'),
(ds030_x_path, ds030_y_path, 'DS030'),
])
# Names of all features
features =[
'cjv', 'cnr', 'efc', 'fber',
'fwhm_avg', 'fwhm_x', 'fwhm_y', 'fwhm_z',
'icvs_csf', 'icvs_gm', 'icvs_wm',
'inu_med', 'inu_range',
'qi_1', 'qi_2',
'rpve_csf', 'rpve_gm', 'rpve_wm',
'size_x', 'size_y', 'size_z',
'snr_csf', 'snr_gm', 'snr_total', 'snr_wm',
'snrd_csf', 'snrd_gm', 'snrd_total', 'snrd_wm',
'spacing_x', 'spacing_y', 'spacing_z',
'summary_bg_k', 'summary_bg_mad', 'summary_bg_mean', 'summary_bg_median', 'summary_bg_n', 'summary_bg_p05', 'summary_bg_p95', 'summary_bg_stdv',
'summary_csf_k', 'summary_csf_mad', 'summary_csf_mean', 'summary_csf_median', 'summary_csf_n', 'summary_csf_p05', 'summary_csf_p95', 'summary_csf_stdv',
'summary_gm_k', 'summary_gm_mad', 'summary_gm_mean', 'summary_gm_median', 'summary_gm_n', 'summary_gm_p05', 'summary_gm_p95', 'summary_gm_stdv',
'summary_wm_k', 'summary_wm_mad', 'summary_wm_mean', 'summary_wm_median', 'summary_wm_n', 'summary_wm_p05', 'summary_wm_p95', 'summary_wm_stdv',
'tpm_overlap_csf', 'tpm_overlap_gm', 'tpm_overlap_wm',
'wm2max'
]
# Names of features that can be normalized
coi = [
'cjv', 'cnr', 'efc', 'fber', 'fwhm_avg', 'fwhm_x', 'fwhm_y', 'fwhm_z',
'snr_csf', 'snr_gm', 'snr_total', 'snr_wm', 'snrd_csf', 'snrd_gm', 'snrd_total', 'snrd_wm',
'summary_csf_mad', 'summary_csf_mean', 'summary_csf_median', 'summary_csf_p05', 'summary_csf_p95', 'summary_csf_stdv', 'summary_gm_k', 'summary_gm_mad', 'summary_gm_mean', 'summary_gm_median', 'summary_gm_p05', 'summary_gm_p95', 'summary_gm_stdv', 'summary_wm_k', 'summary_wm_mad', 'summary_wm_mean', 'summary_wm_median', 'summary_wm_p05', 'summary_wm_p95', 'summary_wm_stdv'
]
# Plot batches
fig = mviz.plot_batches(fulldata, cols=list(reversed(coi)),
out_file=op.join(outputs_path, 'figures/fig02-batches-a.pdf'))
# Apply new site-wise scaler
scaler = mcsp.BatchRobustScaler(by='site', columns=coi)
scaled = scaler.fit_transform(fulldata)
fig = mviz.plot_batches(scaled, cols=coi, site_labels='right',
out_file=op.join(outputs_path, 'figures/fig02-batches-b.pdf'))
Explanation: Figure 2: batch effects
This code was used to generate the second figure.
End of explanation
from sklearn.metrics import cohen_kappa_score
overlap = mdata[np.all(~np.isnan(mdata[['rater_1', 'rater_2']]), axis=1)]
y1 = overlap.rater_1.values.ravel().tolist()
y2 = overlap.rater_2.values.ravel().tolist()
fig = mviz.inter_rater_variability(y1, y2, out_file=op.join(outputs_path, 'figures', 'fig02-irv.pdf'))
print("Cohen's Kappa %f" % cohen_kappa_score(y1, y2))
y1 = overlap.rater_1.values.ravel()
y1[y1 == 0] = 1
y2 = overlap.rater_2.values.ravel()
y2[y2 == 0] = 1
print("Cohen's Kappa (binarized): %f" % cohen_kappa_score(y1, y2))
Explanation: Figure 3: Inter-rater variability
In this figure we evaluate the inter-observer agreement between both raters on the 100 overlapping data points of ABIDE. The Cohen's Kappa is also computed.
End of explanation
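Since the ratings are ordinal, a weighted kappa can also be informative; the following one-liner is an illustrative addition, not part of the original analysis.
print("Cohen's Kappa (linear weights): %f" % cohen_kappa_score(
    overlap.rater_1.values.ravel().tolist(), overlap.rater_2.values.ravel().tolist(), weights='linear'))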
import matplotlib.pyplot as plt
import seaborn as sn
rfc_acc=[0.842, 0.815, 0.648, 0.609, 0.789, 0.761, 0.893, 0.833, 0.842, 0.767, 0.806, 0.850, 0.878, 0.798, 0.559, 0.881, 0.375]
svc_lin_acc=[0.947, 0.667, 0.870, 0.734, 0.754, 0.701, 0.750, 0.639, 0.877, 0.767, 0.500, 0.475, 0.837, 0.768, 0.717, 0.050, 0.429]
svc_rbf_acc=[0.947, 0.852, 0.500, 0.578, 0.772, 0.712, 0.821, 0.583, 0.912, 0.767, 0.500, 0.450, 0.837, 0.778, 0.441, 0.950, 0.339]
df = pd.DataFrame({
'site': list(range(len(sites))) * 3,
'accuracy': rfc_acc + svc_lin_acc + svc_rbf_acc,
'Model': ['RFC'] * len(sites) + ['SVC_lin'] * len(sites) + ['SVC_rbf'] * len(sites)
})
x = np.arange(len(sites))
data = list(zip(rfc_acc, svc_lin_acc, svc_rbf_acc))
dim = len(data[0])
w = 0.81
dimw = w / dim
colors = ['dodgerblue', 'orange', 'darkorange']
allvals = [rfc_acc, svc_lin_acc, svc_rbf_acc]
fig = plt.figure(figsize=(10, 3))
ax2 = plt.subplot2grid((1, 4), (0, 3))
plot = sn.violinplot(data=df, x='Model', y="accuracy", ax=ax2, palette=colors, bw=.1, linewidth=.7)
for i in range(dim):
ax2.axhline(np.average(allvals[i]), ls='--', color=colors[i], lw=.8)
# ax2.axhline(np.percentile(allvals[i], 50), ls='--', color=colors[i], lw=.8)
# sn.swarmplot(x="model", y="accuracy", data=df, color="w", alpha=.5, ax=ax2);
ax2.yaxis.tick_right()
ax2.set_ylabel('')
ax2.set_xticklabels(ax2.get_xticklabels(), rotation=40)
ax2.set_ylim([0.0, 1.0])
ax1 = plt.subplot2grid((1, 4), (0, 0), colspan=3)
for i in range(dim):
y = [d[i] for d in data]
b = ax1.bar(x + i * dimw, y, dimw, bottom=0.001, color=colors[i], alpha=.6)
print(np.average(allvals[i]), np.std(allvals[i]))
ax1.axhline(np.average(allvals[i]), ls='--', color=colors[i], lw=.8)
plt.xlim([-0.2, 16.75])
plt.grid(False)
_ = plt.xticks(np.arange(0, 17) + 0.33, sites, rotation='vertical')
ax1.set_ylim([0.0, 1.0])
ax1.set_ylabel('Accuracy (ACC)')
fig.savefig(op.join(outputs_path, 'figures/fig05-acc.pdf'), bbox_inches='tight', dpi=300)
rfc_roc_auc=[0.597, 0.380, 0.857, 0.610, 0.698, 0.692, 0.963, 0.898, 0.772, 0.596, 0.873, 0.729, 0.784, 0.860, 0.751, 0.900, 0.489]
svc_lin_roc_auc=[0.583, 0.304, 0.943, 0.668, 0.691, 0.754, 1.000, 0.778, 0.847, 0.590, 0.857, 0.604, 0.604, 0.838, 0.447, 0.650, 0.501]
svc_rbf_roc_auc=[0.681, 0.217, 0.827, 0.553, 0.738, 0.616, 0.889, 0.813, 0.845, 0.658, 0.779, 0.493, 0.726, 0.510, 0.544, 0.500, 0.447]
df = pd.DataFrame({
'site': list(range(len(sites))) * 3,
'auc': rfc_roc_auc + svc_lin_roc_auc + svc_rbf_roc_auc,
'Model': ['RFC'] * len(sites) + ['SVC_lin'] * len(sites) + ['SVC_rbf'] * len(sites)
})
x = np.arange(len(sites))
data = list(zip(rfc_roc_auc, svc_lin_roc_auc, svc_rbf_roc_auc))
dim = len(data[0])
w = 0.81
dimw = w / dim
colors = ['dodgerblue', 'orange', 'darkorange']
allvals = [rfc_roc_auc, svc_lin_roc_auc, svc_rbf_roc_auc]
fig = plt.figure(figsize=(10, 3))
ax2 = plt.subplot2grid((1, 4), (0, 3))
plot = sn.violinplot(data=df, x='Model', y="auc", ax=ax2, palette=colors, bw=.1, linewidth=.7)
for i in range(dim):
ax2.axhline(np.average(allvals[i]), ls='--', color=colors[i], lw=.8)
ax2.yaxis.tick_right()
ax2.set_ylabel('')
ax2.set_xticklabels(ax2.get_xticklabels(), rotation=40)
ax2.set_ylim([0.0, 1.0])
ax1 = plt.subplot2grid((1, 4), (0, 0), colspan=3)
for i in range(dim):
y = [d[i] for d in data]
b = ax1.bar(x + i * dimw, y, dimw, bottom=0.001, color=colors[i], alpha=.6)
print(np.average(allvals[i]), np.std(allvals[i]))
ax1.axhline(np.average(allvals[i]), ls='--', color=colors[i], lw=.8)
plt.xlim([-0.2, 16.75])
plt.grid(False)
_ = plt.xticks(np.arange(0, 17) + 0.33, sites, rotation='vertical')
ax1.set_ylim([0.0, 1.0])
ax1.set_ylabel('Area under the curve (AUC)')
fig.savefig(op.join(outputs_path, 'figures/fig05-auc.pdf'), bbox_inches='tight', dpi=300)
Explanation: Figure 5: Model selection
End of explanation
from sklearn.metrics import confusion_matrix
pred_file = op.abspath(op.join(
'..', 'mriqc/data/csv',
'mclf_run-20170724-191452_mod-rfc_ver-0.9.7-rc8_class-2_cv-loso_data-test_pred.csv'))
pred_y = pd.read_csv(pred_file)
true_y = pd.read_csv(ds030_y_path)
true_y.rater_1 *= -1
true_y.rater_1[true_y.rater_1 < 0] = 0
print(confusion_matrix(true_y.rater_1.tolist(), pred_y.pred_y.values.ravel().tolist(), labels=[0, 1]))
Explanation: Evaluation on DS030
This section deals with the results obtained on DS030.
Table 4: Confusion matrix
End of explanation
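Summary rates follow directly from the confusion matrix; the lines below are an illustrative addition using the variables defined in the cell above (fn_ carries a trailing underscore to avoid clashing with the fn list used later).
tn, fp, fn_, tp = confusion_matrix(true_y.rater_1.tolist(),
                                   pred_y.pred_y.values.ravel().tolist(), labels=[0, 1]).ravel()
acc = (tp + tn) / (tp + tn + fp + fn_)
sens = tp / (tp + fn_)
spec = tn / (tn + fp)
print('accuracy %.3f, sensitivity %.3f, specificity %.3f' % (acc, sens, spec))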
import seaborn as sn
from sklearn.externals.joblib import load as loadpkl
sn.set_style("white")
# Get the RFC
estimator = loadpkl(pkgrf('mriqc', 'data/mclf_run-20170724-191452_mod-rfc_ver-0.9.7-rc8_class-2_cv-loso_data-train_estimator.pklz'))
forest = estimator.named_steps['rfc']
# Features selected in cross-validation
features = [
"cjv", "cnr", "efc", "fber", "fwhm_avg", "fwhm_x", "fwhm_y", "fwhm_z", "icvs_csf", "icvs_gm", "icvs_wm",
"qi_1", "qi_2", "rpve_csf", "rpve_gm", "rpve_wm", "snr_csf", "snr_gm", "snr_total", "snr_wm", "snrd_csf",
"snrd_gm", "snrd_total", "snrd_wm", "summary_bg_k", "summary_bg_stdv", "summary_csf_k", "summary_csf_mad",
"summary_csf_mean", "summary_csf_median", "summary_csf_p05", "summary_csf_p95", "summary_csf_stdv",
"summary_gm_k", "summary_gm_mad", "summary_gm_mean", "summary_gm_median", "summary_gm_p05", "summary_gm_p95",
"summary_gm_stdv", "summary_wm_k", "summary_wm_mad", "summary_wm_mean", "summary_wm_median", "summary_wm_p05",
"summary_wm_p95", "summary_wm_stdv", "tpm_overlap_csf", "tpm_overlap_gm", "tpm_overlap_wm"]
nft = len(features)
forest = estimator.named_steps['rfc']
importances = np.median([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
# importances = np.median(, axis=0)
indices = np.argsort(importances)[::-1]
df = {'Feature': [], 'Importance': []}
for tree in forest.estimators_:
for i in indices:
df['Feature'] += [features[i]]
df['Importance'] += [tree.feature_importances_[i]]
fig = plt.figure(figsize=(20, 6))
# plt.title("Feature importance plot")
sn.boxplot(x='Feature', y='Importance', data=pd.DataFrame(df), linewidth=1, notch=True)
plt.xlabel('Features selected (%d)' % len(features))
# plt.bar(range(nft), importances[indices],
# color="r", yerr=std[indices], align="center")
plt.xticks(range(nft))
plt.gca().set_xticklabels([features[i] for i in indices], rotation=90)
plt.xlim([-1, nft])
plt.show()
fig.savefig(op.join(outputs_path, 'figures', 'fig06-exp2-fi.pdf'),
bbox_inches='tight', pad_inches=0, dpi=300)
Explanation: Figure 6A: Feature importances
End of explanation
fn = ['10225', '10235', '10316', '10339', '10365', '10376',
'10429', '10460', '10506', '10527', '10530', '10624',
'10696', '10891', '10948', '10968', '10977', '11050',
'11052', '11142', '11143', '11149', '50004', '50005',
'50008', '50010', '50016', '50027', '50029', '50033',
'50034', '50036', '50043', '50047', '50049', '50053',
'50054', '50055', '50085', '60006', '60010', '60012',
'60014', '60016', '60021', '60046', '60052', '60072',
'60073', '60084', '60087', '70051', '70060', '70072']
fp = ['10280', '10455', '10523', '11112', '50020', '50048',
'50052', '50061', '50073', '60077']
fn_clear = [
('10316', 98),
('10968', 122),
('11050', 110),
('11149', 111)
]
import matplotlib.pyplot as plt
from mriqc.viz.utils import plot_slice
import nibabel as nb
for im, z in fn_clear:
image_path = op.join(ds030_path, 'sub-%s' % im, 'anat', 'sub-%s_T1w.nii.gz' % im)
imdata = nb.load(image_path).get_data()
fig, ax = plt.subplots()
plot_slice(imdata[..., z], annotate=True)
fig.savefig(op.join(outputs_path, 'figures', 'fig-06_sub-%s_slice-%03d.svg' % (im, z)),
dpi=300, bbox_inches='tight')
plt.clf()
plt.close()
fp_clear = [
('10455', 140),
('50073', 162),
]
for im, z in fp_clear:
image_path = op.join(ds030_path, 'sub-%s' % im, 'anat', 'sub-%s_T1w.nii.gz' % im)
imdata = nb.load(image_path).get_data()
fig, ax = plt.subplots()
plot_slice(imdata[..., z], annotate=True)
fig.savefig(op.join(outputs_path, 'figures', 'fig-06_sub-%s_slice-%03d.svg' % (im, z)),
dpi=300, bbox_inches='tight')
plt.clf()
plt.close()
Explanation: Figure 6B: Misclassified images of DS030
End of explanation |
11,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align
Step12: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step13: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step15: run your solution on all test_images and make copies into the test_images directory).
Step16: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step17: Let's try the one with the solid white lane on the right first ...
Step19: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step21: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step23: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimesions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
# test image for all unit tests
test_image = (mpimg.imread('test_images/solidYellowLeft.jpg'))
def grayscale(img):
    """Applies the Grayscale transform.
    This will return an image with only one color channel,
    but NOTE: to see the returned image as grayscale
    you should call plt.imshow(gray, cmap='gray').
    """
    # gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    # Use BGR2GRAY if you read an image with cv2.imread()
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
############## UNIT TEST ##############
gray = grayscale(test_image)
#this should be a single channel image shaped like(image.shape[0], image.shape[1])
print("gray image shape: {}".format(gray.shape))
plt.imshow(gray, cmap='gray');
############################
def gaussian_blur(img, kernel_size=7):
    """Converts the image to grayscale and applies a Gaussian smoothing kernel."""
    gray_image = grayscale(img)
    return cv2.GaussianBlur(gray_image, (kernel_size, kernel_size), 0)
############## UNIT TEST ##############
gaussian_blur_test = gaussian_blur(test_image)
# this should still be a single channel image
print("gaussian_blur_test shape: {}".format(gaussian_blur_test.shape))
plt.imshow(gaussian_blur_test, cmap='gray');
######################
def canny(img, low_threshold=70, high_threshold=210):
    """Applies the Canny edge-detection transform."""
    return cv2.Canny(img, low_threshold, high_threshold)
############## UNIT TEST ##############
test_edges = canny(test_image)
print("canny image shape".format(test_edges.shape))
# this should still be a singel channel image.
plt.imshow(test_edges, cmap='gray')
######################
def region_of_interest(edges):
    """Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
#defining a blank mask to start with
#Create a masked edges image
mask = np.zeros_like(edges)
ignore_mask_color = 255
# Define a four sided polygon to mask.
# numpy.array returns a tuple of number of rows, columns and channels.
imshape = edges.shape
vertices = np.array([[(50,imshape[0]),(380, 350), (580, 350), (900,imshape[0])]], dtype=np.int32)
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_edges = cv2.bitwise_and(edges, mask)
return masked_edges
############## UNIT TEST ##############
test_edges = canny(test_image)
masked_edges = region_of_interest(test_edges)
print("masked_edges shape {}".format(masked_edges.shape))
# again a single channel image
plt.imshow(masked_edges, cmap='gray')
######################
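# The project template also uses an overlay helper along these lines to blend the drawn
# lane lines onto the original frame; this version is an illustrative sketch (it is not
# defined elsewhere in this excerpt) built on cv2.addWeighted.
def weighted_img(line_img, initial_img, alpha=0.8, beta=1.0, gamma=0.0):
    """Blend the lines image onto the original: initial_img*alpha + line_img*beta + gamma."""
    return cv2.addWeighted(initial_img, alpha, line_img, beta, gamma)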
# After you separate them, calculate the average slope of the segments per lane.
# With that slope, decide on two Y coordinates where you want the lane lines to start and end
# (for example, use the bottom of the image as a Y_bottom point, and Y_top = 300 or something
# like that – where the horizon is). Now that you have your Y coordinates, calculate the
# X coordinates per lane line.
# Helper variables for comparing and averaging values
prev_left_top_x = prev_right_top_x = prev_right_bottom_x = prev_left_bottom_x = 0
all_left_top_x = all_right_top_x = all_right_bottom_x = all_left_bottom_x = [0]
all_left_top_x = np.array(all_left_top_x)
all_left_bottom_x = np.array(all_left_bottom_x)
all_right_top_x = np.array(all_right_top_x)
all_right_bottom_x = np.array(all_right_bottom_x)
def cruise_control(previous, current, factor):
    """Helper function for comparing current and previous values.
    Uncomment the print line below to watch the value differences, it's kind of neat!
    """
# print (previous, current, previous - current)
difference = int(abs(previous) - abs(current))
#print(difference)
if difference <= factor:
return current
else:
return previous
def get_point_horizontal(vx, vy, x1, y1, y_ref):
    """Helper function for draw_lines.
    Calculates the 'x' that matches two points on a line, its slope, and a given 'y' coordinate.
    """
m = vy / vx
b = y1 - ( m * x1 )
x = ( y_ref - b ) / m
return x
def draw_lines(line_img, lines, color=[255, 0, 0], thickness=6):
    """Average/extrapolate the detected line segments to map out the full extent of each lane line."""
right_segment_points = []
left_segment_points = []
top_y = 350
bot_y = line_img.shape[0]
    smoothie = 6  # lower number = more discarded frames
for line in lines:
for x1,y1,x2,y2 in line:
# 1, find slope
slope = float((y2-y1)/(x2-x1))
# print (slope)
max_slope_thresh = .85
min_slope_thresh = .2
            # 2, use slope to split lanes into left and right.
            # theory: a negative slope will be the right lane
if max_slope_thresh >= slope >= min_slope_thresh:
# print (slope)
# append all points to points array
right_segment_points.append([x1,y1])
right_segment_points.append([x2,y2])
# declare numpy array
# fit a line with those points
# TODO explore other options besides DIST_12
# TODO compare to polyfit implementation
right_segment = np.array(right_segment_points)
[r_vx, r_vy, r_cx, r_cy] = cv2.fitLine(right_segment, cv2.DIST_L12, 0, 0.01, 0.01)
# define 2 x points for right lane line
right_top_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, top_y )
right_bottom_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, bot_y )
elif -max_slope_thresh <= slope <= -min_slope_thresh:
# print (slope)
# append all points to points array
left_segment_points.append([x1,y1])
left_segment_points.append([x2,y2])
# declare numpy array
# fit a line with those points
# TODO add something to test if segment points not blank
left_segment = np.array(left_segment_points)
[r_vx, r_vy, r_cx, r_cy] = cv2.fitLine(left_segment, cv2.DIST_L12, 0, 0.01, 0.01)
# define 2 x points for left lane line
left_top_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, top_y )
left_bottom_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, bot_y )
#TODO split into lists to avoid so much repeat
    These global functions accomplish two things:
    a) Averaging and weighting point values
    b) Discarding frames that are too far out of "normal", as defined by the smoothie variable.
    Smoothie compares the absolute difference, in pixels, between the current and previous frame,
    and if the current frame differs by more than the smoothie variable, the previous frame is used instead.
global prev_left_top_x, all_left_top_x
left_top_x_corrected = np.mean(all_left_top_x) * 2 + (cruise_control(prev_left_top_x, left_top_x, smoothie) * 2)/2
np.append(all_left_top_x, left_top_x)
prev_left_top_x = left_top_x
global prev_left_bottom_x, all_left_bottom_x
left_bottom_x_corrected = (np.mean(all_left_bottom_x) * 2) + (cruise_control(prev_left_bottom_x, left_bottom_x, smoothie) * 2)/2
np.append(all_left_bottom_x, left_bottom_x)
prev_left_bottom_x = left_bottom_x
global prev_right_top_x, all_right_top_x
right_top_x_corrected = (np.mean(all_right_top_x) * 2) + (cruise_control(prev_right_top_x, right_top_x, smoothie) * 2)/2
np.append(all_right_top_x, right_top_x)
prev_right_top_x = right_top_x
global prev_right_bottom_x, all_right_bottom_x
right_bottom_x_corrected = (np.mean(all_right_bottom_x) * 2) + (cruise_control(prev_right_bottom_x, right_bottom_x, smoothie) * 2)/2
np.append(all_right_bottom_x, right_bottom_x)
prev_right_bottom_x = right_bottom_x
# Print two lines based on above
cv2.line(line_img, (int(left_bottom_x_corrected), bot_y), (int(left_top_x_corrected), top_y), color, thickness)
cv2.line(line_img, (int(right_bottom_x_corrected), bot_y), (int(right_top_x_corrected), top_y), color, thickness)
def hough_lines(img, rho=1, theta=np.pi/180, threshold=20, min_line_len=40, max_line_gap=45):
Run Hough on edge detected image
Output "lines" is an array containing endpoints of detected line segments
edges = canny(img)
masked_edges = region_of_interest(edges)
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
############## UNIT TEST ##############
test_hough = hough_lines(test_image)
print("masked_edges shape {}".format(test_hough.shape))
plt.imshow(test_hough)
######################
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=1, β=1, λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
############## UNIT TEST ##############
test_hough = hough_lines(test_image)
test_weighted = weighted_img(test_hough, test_image)
print("masked_edges shape {}".format(test_weighted.shape))
plt.imshow(test_weighted)
######################
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
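# Optional sketch of the cv2.inRange() color selection mentioned above (not part of the
# graded pipeline): keep only near-white and yellow-ish pixels before edge detection.
# The thresholds below are illustrative guesses and would need tuning per video.
def color_select(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    white = cv2.inRange(img, np.array([200, 200, 200]), np.array([255, 255, 255]))
    yellow = cv2.inRange(hsv, np.array([15, 80, 120]), np.array([35, 255, 255]))
    return cv2.bitwise_and(img, img, mask=cv2.bitwise_or(white, yellow))
plt.imshow(color_select(test_image))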
import os
#os.listdir("test_images/")
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
Collects test images, creates folder to put them in, runs pipeline, and saves images.
import os
import shutil
test_images = os.listdir("test_images/")
try:
processed_images = os.listdir("test_images/processed_images/")
except FileNotFoundError:
print("File not found")
if processed_images:
shutil.rmtree("test_images/processed_images/", ignore_errors=True)
#Create New Folder for Processing
create_success = os.mkdir("test_images/processed_images/")
for img in test_images:
if '.jpg' in img:
image = mpimg.imread("test_images/%(filename)s" % {"filename": img})
hough = hough_lines(image)
processed_image = weighted_img(hough, image)
color_fix = cv2.cvtColor(processed_image, cv2.COLOR_BGR2RGB)
cv2.imwrite("test_images/processed_images/%(filename)s_processed.jpg" %
{"filename": img.replace(".jpg","")}, color_fix)
Explanation: run your solution on all test_images and make copies into the test_images directory.
End of explanation
import imageio
imageio.plugins.ffmpeg.download()
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
hough = hough_lines(image)
result = weighted_img(hough, image)
return result
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
def process_image2(image, rho=1, theta=np.pi/180, threshold=100, min_line_len=0, max_line_gap=0):
hough = hough_lines(image)
result = weighted_img(hough, image)
return result
challenge_output = 'extra1.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image2)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
11,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Characteristic times in real networks
Step1: Model Chassagnole2002
Create a new network object and load the informations from the model Chassagnole2002.
In the original model, the concentration of the cofactors depends explicitly on time. To obtain a steady state it is necessary to get rid of this explicit dependence; instead, the concentrations of those cofactors are defined as constants.
Also, the Phosphotransferase system reaction has an unbalanced stoichiometry different from its actual stoichiometry. In the studied model the stoichiometry is rectified, but to keep the rate at the author's choice, $r_{max}^{PTS}$ is scaled by a factor of 65.
The file Chassagnole2002_info.csv contains the information about the number of carbons constituting each metabolite and it tells us which metabolites exchange labelled carbons. A metabolite that does not exchange labelled carbons behaves as a sink; it is an exit for the system.
Step2: A Network object contains other objects stored in arrays
Step3: The following calls are required before generating the jacobians for the tracer and concentration perturbation.
- chassagnole.generateDerivatives()
Generate the derivative function accessible at chassagnole.derivatives
- chassagnole.generateRates()
Generate the rate function accessible at chassagnole.rates
- chassagnole.testCarbonBalance()
Compute the carbon balance of each reaction. Accessible at chassagnole.reactions[i].carbonBalance
Jacobians ###
Step4: To find the jacobian that accounts for the tracer dynamics, the algorithm first searches for the steady state of the model. At steady state the probability for a labelled molecule $A^t$ to be transformed through a reaction $v^+$ is proportional to the fraction of $A$ that is labelled. The tracer reaction releases labelled carbons that are shared between the substrates of the reaction proportionally to their stoichiometry and to the number of carbons they contain.
Step5: Model Teusink 2000
Step6: Model Mosca 2012
Step7: Model Curto 1998 | Python Code:
from imp import reload
import re
import numpy as np
from scipy.integrate import ode
import NetworkComponents
Explanation: Characteristic times in real networks
End of explanation
chassagnole = NetworkComponents.Network("chassagnole2002")
chassagnole.readSBML("./published_models/Chassagnole2002.xml")
chassagnole.readInformations("./published_models/Chassagnole2002_info.csv")
Explanation: Model Chassagnole2002
Create a new network object and load the information from the model Chassagnole2002.
In the original model, the concentration of the cofactors depends explicitly on time. To obtain a steady state it is necessary to get rid of this explicit dependence; instead, the concentrations of those cofactors are defined as constants.
Also, the Phosphotransferase system reaction has an unbalanced stoichiometry different from its actual stoichiometry. In the studied model the stoichiometry is rectified, but to keep the rate at the author's choice, $r_{max}^{PTS}$ is scaled by a factor of 65.
The file Chassagnole2002_info.csv contains the information about the number of carbons constituting each metabolite and it tells us which metabolites exchange labelled carbons. A metabolite that does not exchange labelled carbons behaves as a sink; it is an exit for the system.
End of explanation
chassagnole.separateForwardBackwardFluxes()
chassagnole.updateNetwork()
chassagnole.generateDerivatives()
chassagnole.generateRates()
chassagnole.testCarbonBalance()
Explanation: A Network object contains other objects stored in arrays:
- chassagnole.compartments contains the Compartment objects
- chassagnole.metabolites contains the Metabolite objects
- chassagnole.reactions contains the Reaction objects
- chassagnole.parameters contains the Parameters objects
- chassagnole.functionDefinitions contains the FunctionDefinitions objects
Separate the forward and backward fluxes
To derive the tracer dynamics, one needs to know the forward and the backward values of the reactions. The function separateForwardBackwardFluxes performs this separation of a rate law from the original model into two new rate laws; one accounts for the forward rate and the second accounts for the backward rate.
The function updateNetwork compiles the network to assign an index and a formula to every reaction and species. After this step it is possible to create the derivative function for the concentration vector.
End of explanation
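# Illustrative sketch (not part of NetworkComponents) of what "separating" a reversible
# rate law means: a net rate v = k_plus*A - k_minus*B is split into a forward rate
# v_f = k_plus*A and a backward rate v_b = k_minus*B, so that v = v_f - v_b.
k_plus, k_minus, A, B = 2.0, 0.5, 1.0, 3.0   # assumed toy values
v_forward, v_backward = k_plus * A, k_minus * B
print("net rate:", v_forward - v_backward, "forward:", v_forward, "backward:", v_backward)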
Jtracer = chassagnole.generateTracerJacobian()
Explanation: The following calls are required before generating the jacobians for the tracer and concentration perturbation.
- chassagnole.generateDerivatives()
Generate the derivative function accessible at chassagnole.derivatives
- chassagnole.generateRates()
Generate the rate function accessible at chassagnole.rates
- chassagnole.testCarbonBalance()
Compute the carbon balance of each reaction. Accessible at chassagnole.reactions[i].carbonBalance
Jacobians ###
End of explanation
Jperturbation = chassagnole.generatePerturbationJacobian()
tauc,Tc = chassagnole.computeCharacteristicTimes("perturbation",method="integration")
taut,Tt = chassagnole.computeCharacteristicTimes("tracer",method="inverseJacobian")
print("tau_c = %f s"%(tauc))
print("tau_t = %f s"%(taut))
print("T_c = %f s"%(Tc))
print("T_t = %f s"%(Tt))
Explanation: To find the jacobian that accounts for the tracer dynamics, the algorithm first searches for the steady state of the model. At steady state the probability for a labelled molecule $A^t$ to be transformed through a reaction $v^+$ is proportional to the fraction of $A$ that is labelled. The tracer reaction releases labelled carbons that are shared between the substrates of the reaction proportionally to their stoichiometry and to the number of carbons they contain.
End of explanation
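# Toy illustration (independent of NetworkComponents) of the labelling rule above:
# at steady state the labelled flux through a unidirectional reaction is the total
# flux scaled by the labelled fraction of its substrate. Values here are made up.
v_plus = 1.2                      # assumed steady-state forward flux
A_total, A_labelled = 2.0, 0.5    # assumed total and labelled concentrations of A
labelled_flux = v_plus * (A_labelled / A_total)
print("labelled flux through v+:", labelled_flux)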
teusink = NetworkComponents.Network("Teusink2000")
teusink.readSBML("./published_models/Teusink2000.xml")
teusink.readInformations("./published_models/Teusink2000_info.csv")
teusink.separateForwardBackwardFluxes()
teusink.updateNetwork()
teusink.generateDerivatives()
teusink.generateRates()
teusink.testCarbonBalance()
Jtracer = teusink.generateTracerJacobian()
Jperturbation = teusink.generatePerturbationJacobian()
tauc,Tc = teusink.computeCharacteristicTimes("perturbation",method="integration")
taut,Tt = teusink.computeCharacteristicTimes("tracer",method="integration")
print("tau_c = %f s"%(tauc*60))
print("tau_t = %f s"%(taut*60))
print("T_c = %f s"%(Tc*60))
print("T_t = %f s"%(Tt*60))
Explanation: Model Teusink 2000
End of explanation
mosca = NetworkComponents.Network("Mosca2012")
mosca.readSBML("./published_models/Mosca2012.xml")
mosca.readInformations("./published_models/Mosca2012_info.csv")
mosca.separateForwardBackwardFluxes()
mosca.updateNetwork()
mosca.generateDerivatives()
mosca.generateRates()
mosca.testCarbonBalance()
Jtracer = mosca.generateTracerJacobian()
Jperturbation = mosca.generatePerturbationJacobian()
tauc,Tc = mosca.computeCharacteristicTimes("perturbation",method="integration")
taut,Tt = mosca.computeCharacteristicTimes("tracer",method="inverseJacobian")
print("tau_c = %f s"%(tauc*60))
print("tau_t = %f s"%(taut*60))
print("T_c = %f s"%(Tc*60))
print("T_t = %f s"%(Tt*60))
Explanation: Model Mosca 2012
End of explanation
curto = NetworkComponents.Network("Curto1998")
curto.readSBML("./published_models/Curto1998.xml")
curto.readInformations("./published_models/Curto1998_info.csv")
curto.separateForwardBackwardFluxes()
curto.updateNetwork()
curto.generateDerivatives()
curto.generateRates()
curto.testCarbonBalance()
Jtracer = curto.generateTracerJacobian()
Jperturbation = curto.generatePerturbationJacobian()
tauc,Tc = curto.computeCharacteristicTimes("perturbation",method="inverseJacobian")
taut,Tt = curto.computeCharacteristicTimes("tracer",method="inverseJacobian")
print("tau_c = %f s"%(tauc*60))
print("tau_t = %f s"%(taut*60))
print("T_c = %f s"%(Tc*60))
print("T_t = %f s"%(Tt*60))
Explanation: Model Curto 1998
End of explanation |
11,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 8 Review & Transfer Learning with word2vec
Import various modules that we need for this notebook (now using Keras 1.0.0)
Step1: I. Problem Set 8, Part 1
Let's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.
Step2: And construct a flattened version of it, for the linear model case
Step3: (1) neural network
We now build and evaluate a neural network.
Step4: (2) support vector machine
And now, a basic linear support vector machine.
Step5: (3) penalized logistc model
And finally, an L1 penalized model
Step6: II. Problem Set 8, Part 2
Now, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.
Step7: Now, built a neural network for the model
Step8: III. Transfer Learning IMDB Sentiment analysis
Now, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.
Step9: I'll fit a significantly larger vocabular this time, as the embeddings are basically given for us. | Python Code:
%pylab inline
import copy
import numpy as np
import pandas as pd
import sys
import os
import re
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
from keras.layers.normalization import BatchNormalization
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from gensim.models import word2vec
Explanation: Problem Set 8 Review & Transfer Learning with word2vec
Import various modules that we need for this notebook (now using Keras 1.0.0)
End of explanation
dir_in = "../../../class_data/stl10/"
X_train = np.genfromtxt(dir_in + 'X_train_new.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'X_test_new.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'Y_test.csv', delimiter=',')
Explanation: I. Problem Set 8, Part 1
Let's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.
End of explanation
Y_train_flat = np.zeros(Y_train.shape[0])
Y_test_flat = np.zeros(Y_test.shape[0])
for i in range(10):
Y_train_flat[Y_train[:,i] == 1] = i
Y_test_flat[Y_test[:,i] == 1] = i
Explanation: And construct a flattened version of it, for the linear model case:
End of explanation
model = Sequential()
model.add(Dense(1024, input_shape = (X_train.shape[1],)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
Explanation: (1) neural network
We now build and evaluate a neural network.
End of explanation
svc_obj = SVC(kernel='linear', C=1)
svc_obj.fit(X_train, Y_train_flat)
pred = svc_obj.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
Explanation: (2) support vector machine
And now, a basic linear support vector machine.
End of explanation
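# Optional aside (a sketch, not part of the assignment solution): for data this size,
# sklearn's LinearSVC (liblinear) is usually much faster than SVC(kernel='linear')
# while giving a comparable linear decision boundary.
from sklearn.svm import LinearSVC
lsvc = LinearSVC(C=1)
lsvc.fit(X_train, Y_train_flat)
print("LinearSVC test classification rate %0.05f" % np.mean(lsvc.predict(X_test) == Y_test_flat))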
lr = LogisticRegression(penalty = 'l1')
lr.fit(X_train, Y_train_flat)
pred = lr.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
Explanation: (3) penalized logistc model
And finally, an L1 penalized model:
End of explanation
dir_in = "../../../class_data/chi_python/"
X_train = np.genfromtxt(dir_in + 'chiCrimeMat_X_train.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'chiCrimeMat_Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'chiCrimeMat_X_test.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'chiCrimeMat_Y_test.csv', delimiter=',')
Explanation: II. Problem Set 8, Part 2
Now, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.
End of explanation
model = Sequential()
model.add(Dense(1024, input_shape = (434,)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(5))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
# downsample, if need be:
num_sample = X_train.shape[0]
model.fit(X_train[:num_sample], Y_train[:num_sample], batch_size=32,
nb_epoch=10, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
Explanation: Now, built a neural network for the model
End of explanation
path = "../../../class_data/aclImdb/"
ff = [path + "train/pos/" + x for x in os.listdir(path + "train/pos")] + \
[path + "train/neg/" + x for x in os.listdir(path + "train/neg")] + \
[path + "test/pos/" + x for x in os.listdir(path + "test/pos")] + \
[path + "test/neg/" + x for x in os.listdir(path + "test/neg")]
TAG_RE = re.compile(r'<[^>]+>')
def remove_tags(text):
return TAG_RE.sub('', text)
input_label = ([1] * 12500 + [0] * 12500) * 2
input_text = []
for f in ff:
with open(f) as fin:
        input_text += [remove_tags(" ".join(fin.readlines()))]
Explanation: III. Transfer Learning IMDB Sentiment analysis
Now, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.
End of explanation
num_words = 5000
max_len = 400
tok = Tokenizer(num_words)
tok.fit_on_texts(input_text[:25000])
X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]
X_train = sequence.pad_sequences(X_train, maxlen=max_len)
X_test = sequence.pad_sequences(X_test, maxlen=max_len)
words = []
for iter in range(num_words):
words += [key for key,value in tok.word_index.items() if value==iter+1]
loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin"
w2v = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)
weights = np.zeros((num_words,300))
for idx, w in enumerate(words):
try:
weights[idx,:] = w2v[w]
except KeyError as e:
pass
model = Sequential()
model.add(Embedding(num_words, 300, input_length=max_len))
model.add(Dropout(0.5))
model.add(GRU(16,activation='relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.layers[0].set_weights([weights])
model.layers[0].trainable = False
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=5, verbose=1,
validation_data=(X_test, y_test))
Explanation: I'll fit a significantly larger vocabulary this time, as the embeddings are basically given for us.
End of explanation |
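# Optional follow-up (my own sketch, not from the original notebook): once the
# frozen-embedding model converges, the embedding layer can be unfrozen and the whole
# network fine-tuned briefly at a smaller learning rate. Recompiling is required for
# the trainable flag to take effect in Keras 1.x.
model.layers[0].trainable = True
model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=1e-4), metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=1, verbose=1,
          validation_data=(X_test, y_test))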
11,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
# TODO: Implement Function
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_paren||',
')':'||right_paren||',
'--':'||dash||',
'\n':'||return||'}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
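# Quick illustration (not required by the project) of how the lookup above is applied:
# each punctuation symbol is replaced by its token surrounded by spaces before splitting.
sample = "Moe! Get me a beer, now."
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())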
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
# TODO: Implement Function
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
num_layers = 1
basic_cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)]
cell = tf.contrib.rnn.MultiRNNCell(basic_cells)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
rnn, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(inputs=rnn, \
num_outputs=vocab_size, \
activation_fn=None,\
weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),\
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = len(int_text) // (batch_size * seq_length)
total_size = n_batches * batch_size * seq_length
x = np.array(int_text[:total_size])
y = np.roll(x, -1)
input_batch = np.split(x.reshape(batch_size, -1),
n_batches, 1)
target_batch = np.split(y.reshape(batch_size, -1),
n_batches, 1)
batches = np.array(list(zip(input_batch, target_batch)))
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
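# Quick sanity check (optional) reproducing the worked example above.
example = get_batches(list(range(1, 21)), 3, 2)
print(example.shape)   # expected: (3, 2, 3, 2)
print(example[0])      # first batch of inputs/targets as documented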
# Number of Epochs
num_epochs = 500
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 400
# Sequence Length
seq_length = 14
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 25
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_0 = loaded_graph.get_tensor_by_name("input:0")
initial_state_0 = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_0 = loaded_graph.get_tensor_by_name("final_state:0")
probs_0 = loaded_graph.get_tensor_by_name("probs:0")
return input_0, initial_state_0, final_state_0, probs_0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
predicted_word = int_to_vocab[np.argmax(probabilities)]
return predicted_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
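# Optional variation (a sketch, not the graded solution): sample the next word from the
# probability distribution instead of always taking the argmax, which reduces repetition
# in the generated script.
def pick_word_sampled(probabilities, int_to_vocab):
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]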
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
11,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook describes setting up a model inspired by the Maxout Network (Goodfellow et al.) which they ran out the CIFAR-10 dataset.
The yaml file was modified as little as possible, substituting variables for settings and dataset paths, getting rid of their data pre-processing, and changing the number of channels.
Step1: Before we can start training the model, we need to create a dictionary with all preprocessing settings and poimt to the yaml file corresponding to the model
Step2: To set up the path for settings, utils and os must be imported.
Step3: Now the settings can be saved.
Step4: Now we can start training the model with
Step5: After trying to run the model, it broke with an error that partialSum does not divide numModules. Turns out that partialSum is a parameter of a convolutional layer that affects the performance of the weight gradient computation and it has to divide the area of the output grid in this layer, which is given by numModules. Conveniently, the error gave the values of numModules (which are not specified in the yaml file) so we just changed partialSum in each layer to a factor of the corresponding numModules. | Python Code:
!obj:pylearn2.train.Train {
dataset: &train !obj:neukrill_net.dense_dataset.DensePNGDataset {
settings_path: %(settings_path)s,
run_settings: %(run_settings_path)s,
training_set_mode: "train"
},
model: !obj:pylearn2.models.mlp.MLP {
batch_size: &batch_size 128,
layers: [
!obj:pylearn2.models.maxout.MaxoutConvC01B {
layer_name: 'h0',
pad: 4,
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 96,
num_pieces: 2,
kernel_shape: [8, 8],
pool_shape: [4, 4],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: .9,
partial_sum: 33,
},
!obj:pylearn2.models.maxout.MaxoutConvC01B {
layer_name: 'h1',
pad: 3,
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 192,
num_pieces: 2,
kernel_shape: [8, 8],
pool_shape: [4, 4],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: 1.9365,
partial_sum: 15,
},
!obj:pylearn2.models.maxout.MaxoutConvC01B {
pad: 3,
layer_name: 'h2',
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 192,
num_pieces: 2,
kernel_shape: [5, 5],
pool_shape: [2, 2],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: 1.9365,
},
!obj:pylearn2.models.maxout.Maxout {
layer_name: 'h3',
irange: .005,
num_units: 500,
num_pieces: 5,
max_col_norm: 1.9
},
!obj:pylearn2.models.mlp.Softmax {
max_col_norm: 1.9365,
layer_name: 'y',
n_classes: %(n_classes)i,
irange: .005
}
],
input_space: !obj:pylearn2.space.Conv2DSpace {
shape: &window_shape [32, 32],
num_channels: 3,
axes: ['c', 0, 1, 'b'],
},
},
algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
learning_rate: .17,
learning_rule: !obj:pylearn2.training_algorithms.learning_rule.Momentum {
init_momentum: .5
},
train_iteration_mode: 'even_shuffled_sequential',
monitor_iteration_mode: 'even_sequential',
monitoring_dataset:
{
'test' : !obj:neukrill_net.dense_dataset.DensePNGDataset {
settings_path: %(settings_path)s,
run_settings: %(run_settings_path)s,
training_set_mode: "test"
},
},
cost: !obj:pylearn2.costs.mlp.dropout.Dropout {
input_include_probs: { 'h0' : .8 },
input_scales: { 'h0' : 1. }
},
termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 474
},
},
extensions: [
!obj:pylearn2.training_algorithms.learning_rule.MomentumAdjustor {
start: 1,
saturate: 250,
final_momentum: .65
},
!obj:pylearn2.training_algorithms.sgd.LinearDecayOverEpoch {
start: 1,
saturate: 500,
decay_factor: .01
},
!obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
channel_name: test_y_misclass,
save_path: '%(save_path)s'
},
],
}
Explanation: This notebook describes setting up a model inspired by the Maxout Network (Goodfellow et al.), which they ran on the CIFAR-10 dataset.
The yaml file was modified as little as possible, substituting variables for settings and dataset paths, getting rid of their data pre-processing, and changing the number of channels.
End of explanation
run_settings = {
"model type":"pylearn2",
"yaml file": "cifar10.yaml",
"preprocessing":{"resize":[48,48]},
"final_shape":[48,48],
"augmentation_factor":1,
"train_split": 0.8
}
Explanation: Before we can start training the model, we need to create a dictionary with all preprocessing settings and point to the yaml file corresponding to the model: cifar10.yaml.
End of explanation
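# Sketch of how the %(...)s placeholders in the yaml above are presumably filled in:
# train.py formats the yaml string with values derived from the run settings before
# handing it to pylearn2's yaml parser. The path and class count below are placeholders.
yaml_template = "settings_path: %(settings_path)s, n_classes: %(n_classes)i"
print(yaml_template % {"settings_path": "settings.json", "n_classes": 10})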
import neukrill_net.utils
import os
reload(neukrill_net.utils)
cd ..
run_settings["run_settings_path"] = os.path.abspath("run_settings/cifar10_based.json")
run_settings
Explanation: To set up the path for settings, utils and os must be imported.
End of explanation
neukrill_net.utils.save_run_settings(run_settings)
!cat run_settings/cifar10_based.json
Explanation: Now the settings can be saved.
End of explanation
python train.py run_settings/cifar10_based.json
Explanation: Now we can start training the model with:
End of explanation
!obj:pylearn2.train.Train {
dataset: &train !obj:neukrill_net.dense_dataset.DensePNGDataset {
settings_path: %(settings_path)s,
run_settings: %(run_settings_path)s,
training_set_mode: "train"
},
model: !obj:pylearn2.models.mlp.MLP {
batch_size: &batch_size 128,
layers: [
!obj:pylearn2.models.maxout.MaxoutConvC01B {
layer_name: 'h0',
pad: 4,
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 96,
num_pieces: 2,
kernel_shape: [8, 8],
pool_shape: [4, 4],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: .9,
partial_sum: 49,
},
!obj:pylearn2.models.maxout.MaxoutConvC01B {
layer_name: 'h1',
pad: 3,
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 192,
num_pieces: 2,
kernel_shape: [8, 8],
pool_shape: [4, 4],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: 1.9365,
partial_sum: 23,
},
!obj:pylearn2.models.maxout.MaxoutConvC01B {
pad: 3,
layer_name: 'h2',
tied_b: 1,
W_lr_scale: .05,
b_lr_scale: .05,
num_channels: 192,
num_pieces: 2,
kernel_shape: [5, 5],
pool_shape: [2, 2],
pool_stride: [2, 2],
irange: .005,
max_kernel_norm: 1.9365,
},
!obj:pylearn2.models.maxout.Maxout {
layer_name: 'h3',
irange: .005,
num_units: 500,
num_pieces: 5,
max_col_norm: 1.9
},
!obj:pylearn2.models.mlp.Softmax {
max_col_norm: 1.9365,
layer_name: 'y',
n_classes: %(n_classes)i,
irange: .005
}
],
input_space: !obj:pylearn2.space.Conv2DSpace {
shape: &window_shape [32, 32],
num_channels: 3,
axes: ['c', 0, 1, 'b'],
},
},
algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
learning_rate: .17,
learning_rule: !obj:pylearn2.training_algorithms.learning_rule.Momentum {
init_momentum: .5
},
train_iteration_mode: 'even_shuffled_sequential',
monitor_iteration_mode: 'even_sequential',
monitoring_dataset:
{
'test' : !obj:neukrill_net.dense_dataset.DensePNGDataset {
settings_path: %(settings_path)s,
run_settings: %(run_settings_path)s,
training_set_mode: "test"
},
},
cost: !obj:pylearn2.costs.mlp.dropout.Dropout {
input_include_probs: { 'h0' : .8 },
input_scales: { 'h0' : 1. }
},
termination_criterion: !obj:pylearn2.termination_criteria.EpochCounter {
max_epochs: 474
},
},
extensions: [
!obj:pylearn2.training_algorithms.learning_rule.MomentumAdjustor {
start: 1,
saturate: 250,
final_momentum: .65
},
!obj:pylearn2.training_algorithms.sgd.LinearDecayOverEpoch {
start: 1,
saturate: 500,
decay_factor: .01
},
!obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
channel_name: test_y_misclass,
save_path: '%(save_path)s'
},
],
}
Explanation: After trying to run the model, it broke with an error that partialSum does not divide numModules. Turns out that partialSum is a parameter of a convolutional layer that affects the performance of the weight gradient computation and it has to divide the area of the output grid in this layer, which is given by numModules. Conveniently, the error gave the values of numModules (which are not specified in the yaml file) so we just changed partialSum in each layer to a factor of the corresponding numModules.
End of explanation |
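# Rough divisibility check (my own sketch, assuming stride-1 convolutions and the
# cuda-convnet output-grid convention): modulesX = image_size + 2*pad - kernel + 1 per
# side, and partial_sum must divide numModules = modulesX**2. With the 48x48 resize from
# run_settings, the first layer gives modulesX = 49, which is why partial_sum = 49 divides.
def check_partial_sum(image_size, pad, kernel, partial_sum):
    modules_x = image_size + 2 * pad - kernel + 1
    num_modules = modules_x ** 2
    return num_modules, num_modules % partial_sum == 0
print(check_partial_sum(48, 4, 8, 49))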
11,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DV360 Report Emailed To BigQuery
Pulls a DV360 Report from a gMail email into BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter DV360 Report Emailed To BigQuery Recipe Parameters
The person executing this recipe must be the recipient of the email.
Schedule a DV360 report to be sent to an email like ****.
Or set up a redirect rule to forward a report you already receive.
The report can be sent as an attachment or a link.
Ensure this recipe runs after the report is emailed daily.
Give a regular expression to match the email subject.
Configure the destination in BigQuery to write the data.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute DV360 Report Emailed To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: DV360 Report Emailed To BigQuery
Pulls a DV360 Report from a gMail email into BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'email':'', # Email address report was sent to.
'subject':'.*', # Regular expression to match subject. Double escape backslashes.
'dataset':'', # Existing dataset in BigQuery.
'table':'', # Name of table to be written to.
'dbm_schema':'[]', # Schema provided in JSON list format or empty list.
'is_incremental_load':False, # Append report data to table based on date column, de-duplicates.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter DV360 Report Emailed To BigQuery Recipe Parameters
The person executing this recipe must be the recipient of the email.
Schedule a DV360 report to be sent to an email like ****.
Or set up a redirect rule to forward a report you already receive.
The report can be sent as an attachment or a link.
Ensure this recipe runs after the report is emailed daily.
Give a regular expression to match the email subject.
Configure the destination in BigQuery to write the data.
Modify the values below for your use case, can be done multiple times, then click play.
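For example (illustrative values only, not from the original recipe), the subject field could be filled in with a double-escaped regular expression before running the recipe:
# Hypothetical subject pattern for a daily report; backslashes are double escaped.
FIELDS['subject'] = 'DV360 Daily Report \\d{4}-\\d{2}-\\d{2}.*'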
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'email':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'read':{
'from':'[email protected]',
'to':{'field':{'name':'email','kind':'string','order':1,'default':'','description':'Email address report was sent to.'}},
'subject':{'field':{'name':'subject','kind':'string','order':2,'default':'.*','description':'Regular expression to match subject. Double escape backslashes.'}},
'link':'https://storage.googleapis.com/.*',
'attachment':'.*'
},
'write':{
'bigquery':{
'dataset':{'field':{'name':'dataset','kind':'string','order':3,'default':'','description':'Existing dataset in BigQuery.'}},
'table':{'field':{'name':'table','kind':'string','order':4,'default':'','description':'Name of table to be written to.'}},
'schema':{'field':{'name':'dbm_schema','kind':'json','order':5,'default':'[]','description':'Schema provided in JSON list format or empty list.'}},
'header':True,
'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':6,'default':False,'description':'Append report data to table based on date column, de-duplicates.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute DV360 Report Emailed To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
11,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NATURAL LANGUAGE PROCESSING
This notebook covers chapters 22 and 23 from the book Artificial Intelligence: A Modern Approach, 3rd Edition.
Step1: CONTENTS
Overview
Languages
HITS
Question Answering
CYK Parse
Chart Parsing
OVERVIEW
Natural Language Processing (NLP) is a field of AI concerned with understanding, analyzing and using natural languages. This field is considered a difficult yet intriguing field of study, since it is connected to how humans and their languages work.
Applications of the field include translation, speech recognition, topic segmentation, information extraction and retrieval, and a lot more.
Below we take a look at some algorithms in the field. Before we get right into it though, we will take a look at a very useful form of language, context-free languages. Even though they are a bit restrictive, they have been used a lot in research in natural language processing.
LANGUAGES
Languages can be represented by a set of grammar rules over a lexicon of words. Different languages can be represented by different types of grammar, but in Natural Language Processing we are mainly interested in context-free grammars.
Context-Free Grammars
A lot of natural and programming languages can be represented by a Context-Free Grammar (CFG). A CFG is a grammar that has a single non-terminal symbol on the left-hand side. That means a non-terminal can be replaced by the right-hand side of the rule regardless of context. An example of a CFG
Step2: Let's build a lexicon and a grammar for the above language
Step3: Both the functions return a dictionary with keys the left-hand side of the rules. For the lexicon, the values are the terminals for each left-hand side non-terminal, while for the rules the values are the right-hand sides as lists.
We can now use the variables lexicon and rules to build a grammar. After we've done so, we can find the transformations of a non-terminal (the Noun, Verb and the other basic classes do not count as proper non-terminals in the implementation). We can also check if a word is in a particular class.
Step4: If the grammar is in Chomsky Normal Form, we can call the class function cnf_rules to get all the rules in the form of (X, Y, Z) for each X -> Y Z rule. Since the above grammar is not in CNF though, we have to create a new one.
Step5: Finally, we can generate random phrases using our grammar. Most of them will be complete gibberish, falling under the overgenerated phrases of the grammar. That goes to show that in the grammar the valid phrases are much fewer than the overgenerated ones.
Step6: Probabilistic
The probabilistic grammars follow the same approach. They take as input a string, are assembled from a grammar and a lexicon and can generate random sentences (giving the probability of the sentence). The main difference is that in the lexicon we have tuples (terminal, probability) instead of strings and for the rules we have a list of tuples (list of non-terminals, probability) instead of list of lists of non-terminals.
Execute the cells to read the code
Step7: Let's build a lexicon and rules for the probabilistic grammar
Step8: Let's use the above to assemble our probabilistic grammar and run some simple queries
Step9: If we have a grammar in CNF, we can get a list of all the rules. Let's create a grammar in the form and print the CNF rules
Step10: Lastly, we can generate random sentences from this grammar. The function prob_generation returns a tuple (sentence, probability).
Step11: As with the non-probabilistic grammars, this one mostly overgenerates. You can also see that the probability is very, very low, which means there are a ton of generateable sentences (in this case infinite, since we have recursion; notice how VP can produce another VP, for example).
HITS
Overview
Hyperlink-Induced Topic Search (or HITS for short) is an algorithm for information retrieval and page ranking. You can read more on information retrieval in the text notebook. Essentially, given a collection of documents and a user's query, such systems return to the user the documents most relevant to what the user needs. The HITS algorithm differs from a lot of other similar ranking algorithms (like Google's Pagerank) as the page ratings in this algorithm are dependent on the given query. This means that for each new query the result pages must be computed anew. This cost might be prohibitive for many modern search engines, so many steer away from this approach.
HITS first finds a list of relevant pages to the query and then adds pages that link to or are linked from these pages. Once the set is built, we define two values for each page. Authority on the query, the degree of pages from the relevant set linking to it and hub of the query, the degree that it points to authoritative pages in the set. Since we do not want to simply count the number of links from a page to other pages, but we also want to take into account the quality of the linked pages, we update the hub and authority values of a page in the following manner, until convergence
Step13: First we compile the collection of pages as mentioned above. Then, we initialize the authority and hub scores for each page and finally we update and normalize the values until convergence.
A quick overview of the helper functions we use
Step14: We can now run the HITS algorithm. Our query will be 'mammals' (note that while the content of the HTML doesn't matter, it should include the query words or else no page will be picked at the first step).
Step15: Let's see how the pages were scored
Step16: The top score is 0.82 by "C". This is the most relevant page according to the algorithm. You can see that the pages it links to, "A" and "D", have the two highest authority scores (therefore "C" has a high hub score) and the pages it is linked from, "B" and "E", have the highest hub scores (so "C" has a high authority score). By combining these two facts, we get that "C" is the most relevant page. It is worth noting that it does not matter if the given page contains the query words, just that it links and is linked from high-quality pages.
QUESTION ANSWERING
Question Answering is a type of Information Retrieval system, where we have a question instead of a query and instead of relevant documents we want the computer to return a short sentence, phrase or word that answers our question. To better understand the concept of question answering systems, you can first read the "Text Models" and "Information Retrieval" section from the text notebook.
A typical example of such a system is AskMSR (Banko et al., 2002), a system for question answering that performed admirably against more sophisticated algorithms. The basic idea behind it is that a lot of questions have already been answered in the web numerous times. The system doesn't know a lot about verbs, or concepts or even what a noun is. It knows about 15 different types of questions and how they can be written as queries. It can rewrite [Who was George Washington's second in command?] as the query [* was George Washington's second in command] or [George Washington's second in command was *].
After rewriting the questions, it issues these queries and retrieves the short text around the query terms. It then breaks the result into 1, 2 or 3-grams. Filters are also applied to increase the chances of a correct answer. If the query starts with "who", we filter for names, if it starts with "how many" we filter for numbers and so on. We can also filter out the words appearing in the query. For the above query, the answer "George Washington" is wrong, even though it is quite possible the 2-gram would appear a lot around the query terms.
Finally, the different results are weighted by the generality of the queries. The result from the general boolean query [George Washington OR second in command] weighs less than the more specific query [George Washington's second in command was *]. As an answer we return the most highly-ranked n-gram.
CYK PARSE
Overview
Syntactic analysis (or parsing) of a sentence is the process of uncovering the phrase structure of the sentence according to the rules of a grammar. There are two main approaches to parsing. Top-down, where we start with the starting symbol and build a parse tree with the given words as its leaves, and bottom-up, where we start from the given words and build a tree that has the starting symbol as its root. Both approaches involve "guessing" ahead, so it can take a long time to parse a sentence (wrong guesses mean a lot of backtracking). Thankfully, a lot of effort is spent in analyzing already analyzed substrings, so we can follow a dynamic programming approach to store and reuse these parses instead of recomputing them. The CYK Parsing Algorithm (named after its inventors, Cocke, Younger and Kasami) utilizes this technique to parse sentences of a grammar in Chomsky Normal Form.
The CYK algorithm returns an M x N x N array (named P), where N is the number of words in the sentence and M the number of non-terminal symbols in the grammar. Each element in this array shows the probability of a substring being transformed from a particular non-terminal. To find the most probable parse of the sentence, a search in the resulting array is required. Search heuristic algorithms work well in this space, and we can derive the heuristics from the properties of the grammar.
The algorithm in short works like this
Step17: When updating the probability of a substring, we pick the max of its current one and the probability of the substring broken into two parts
Step18: Now let's see the probabilities table for the sentence "the robot is good"
Step19: A defaultdict object is returned (defaultdict is basically a dictionary but with a default value/type). Keys are tuples in the form mentioned above and the values are the corresponding probabilities. Most of the items/parses have a probability of 0. Let's filter those out to take a better look at the parses that matter.
Step20: The item ('Article', 0, 1)
Step21: Example
We will use the grammar E0 to parse the sentence "the stench is in 2 2".
First we need to build a Chart object
Step22: And then we simply call the parses function
Step23: You can see which edges get added by setting the optional initialization argument trace to true.
Step24: Let's try and parse a sentence that is not recognized by the grammar | Python Code:
import nlp
from nlp import Page, HITS
from nlp import Lexicon, Rules, Grammar, ProbLexicon, ProbRules, ProbGrammar
from nlp import CYK_parse, Chart
from notebook import psource
Explanation: NATURAL LANGUAGE PROCESSING
This notebook covers chapters 22 and 23 from the book Artificial Intelligence: A Modern Approach, 3rd Edition. The implementations of the algorithms can be found in nlp.py.
Run the below cell to import the code from the module and get started!
End of explanation
psource(Lexicon, Rules, Grammar)
Explanation: CONTENTS
Overview
Languages
HITS
Question Answering
CYK Parse
Chart Parsing
OVERVIEW
Natural Language Processing (NLP) is a field of AI concerned with understanding, analyzing and using natural languages. This field is considered a difficult yet intriguing field of study, since it is connected to how humans and their languages work.
Applications of the field include translation, speech recognition, topic segmentation, information extraction and retrieval, and a lot more.
Below we take a look at some algorithms in the field. Before we get right into it though, we will take a look at a very useful form of language, context-free languages. Even though they are a bit restrictive, they have been used a lot in research in natural language processing.
LANGUAGES
Languages can be represented by a set of grammar rules over a lexicon of words. Different languages can be represented by different types of grammar, but in Natural Language Processing we are mainly interested in context-free grammars.
Context-Free Grammars
A lot of natural and programming languages can be represented by a Context-Free Grammar (CFG). A CFG is a grammar that has a single non-terminal symbol on the left-hand side. That means a non-terminal can be replaced by the right-hand side of the rule regardless of context. An example of a CFG:
S -> aSb | ε
That means S can be replaced by either aSb or ε (with ε we denote the empty string). The lexicon of the language consists of the terminals a and b, while with S we denote the non-terminal symbol. In general, non-terminals are capitalized while terminals are not, and we usually name the starting non-terminal S. The language generated by the above grammar is the language a<sup>n</sup>b<sup>n</sup> for n greater than or equal to 0 (the ε rule allows the empty string).
Probabilistic Context-Free Grammar
While a simple CFG can be very useful, we might want to know the chance of each rule occurring. Above, we do not know if S is more likely to be replaced by aSb or ε. Probabilistic Context-Free Grammars (PCFG) are built to fill exactly that need. Each rule has a probability, given in brackets, and the probabilities of all the alternatives for a given left-hand side sum up to 1:
S -> aSb [0.7] | ε [0.3]
Now we know it is more likely for S to be replaced by aSb than by ε.
An issue with PCFGs is how we will assign the various probabilities to the rules. We could use our knowledge as humans to assign the probabilities, but that is a laborious and error-prone task. Instead, we can learn the probabilities from data. Data is categorized as labeled (with correctly parsed sentences, usually called a treebank) or unlabeled (given only lexical and syntactic category names).
With labeled data, we can simply count the occurrences. For the above grammar, if we have 100 S rules and 30 of them are of the form S -> ε, we assign a probability of 0.3 to the transformation.
With unlabeled data we have to learn both the grammar rules and the probability of each rule. We can go with many approaches, one of them being the inside-outside algorithm. It uses a dynamic programming approach that first finds the probability of a substring being generated by each rule, and then estimates the probability of each rule.
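A minimal sketch (not from nlp.py) of the counting approach for labeled data, using the 100-parse example above:
# Sketch: estimate a rule's probability from a toy treebank of S expansions.
from collections import Counter
observed_S_rules = ['aSb'] * 70 + ['e'] * 30   # the rule used to expand S in 100 parses
counts = Counter(observed_S_rules)
probabilities = {rule: count / len(observed_S_rules) for rule, count in counts.items()}
print(probabilities)   # {'aSb': 0.7, 'e': 0.3}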
Chomsky Normal Form
A grammar is in Chomsky Normal Form (or CNF, not to be confused with Conjunctive Normal Form) if its rules are one of the three:
X -> Y Z
A -> a
S -> ε
Where X, Y, Z, A are non-terminals, a is a terminal, ε is the empty string and S is the start symbol (the start symbol should not appear on the right-hand side of rules). Note that there can be multiple rules for each left-hand side non-terminal, as long as they follow the above. For example, a rule for X might be: X -> Y Z | A B | a | b.
Of course, we can also have a CNF with probabilities.
This type of grammar may seem restrictive, but it can be proven that any context-free grammar can be converted to CNF.
Lexicon
The lexicon of a language is defined as a list of allowable words. These words are grouped into the usual classes: verbs, nouns, adjectives, adverbs, pronouns, names, articles, prepositions and conjunctions. For the first five classes it is impossible to list all words, since words are continuously being added to these classes. Recently "google" was added to the list of verbs, and words like that will continue to pop up and get added to the lists. For that reason, these first five categories are called open classes. The rest of the categories have much fewer words and much less development. Words like "thou" were commonly used in the past but have declined almost completely in usage; still, most changes take many decades or centuries to manifest, so we can safely assume these categories will remain static for the foreseeable future. Thus, they are called closed classes.
An example lexicon for a PCFG (note that other classes can also be used according to the language, like digits, or RelPro for relative pronoun):
Verb -> is [0.3] | say [0.1] | are [0.1] | ...
Noun -> robot [0.1] | sheep [0.05] | fence [0.05] | ...
Adjective -> good [0.1] | new [0.1] | sad [0.05] | ...
Adverb -> here [0.1] | lightly [0.05] | now [0.05] | ...
Pronoun -> me [0.1] | you [0.1] | he [0.05] | ...
RelPro -> that [0.4] | who [0.2] | which [0.2] | ...
Name -> john [0.05] | mary [0.05] | peter [0.01] | ...
Article -> the [0.35] | a [0.25] | an [0.025] | ...
Preposition -> to [0.25] | in [0.2] | at [0.1] | ...
Conjuction -> and [0.5] | or [0.2] | but [0.2] | ...
Digit -> 1 [0.3] | 2 [0.2] | 0 [0.2] | ...
Grammar
With grammars we combine words from the lexicon into valid phrases. A grammar is comprised of grammar rules. Each rule transforms the left-hand side of the rule into the right-hand side. For example, A -> B means that A transforms into B. Let's build a grammar for the language we started building with the lexicon. We will use a PCFG.
```
S -> NP VP [0.9] | S Conjuction S [0.1]
NP -> Pronoun [0.3] | Name [0.1] | Noun [0.1] | Article Noun [0.25] |
Article Adjs Noun [0.05] | Digit [0.05] | NP PP [0.1] |
NP RelClause [0.05]
VP -> Verb [0.4] | VP NP [0.35] | VP Adjective [0.05] | VP PP [0.1]
VP Adverb [0.1]
Adjs -> Adjective [0.8] | Adjective Adjs [0.2]
PP -> Preposition NP [1.0]
RelClause -> RelPro VP [1.0]
```
Some valid phrases the grammar produces: "mary is sad", "you are a robot" and "she likes mary and a good fence".
What if we wanted to check if the phrase "mary is sad" is actually a valid sentence? We can use a parse tree to constructively prove that a string of words is a valid phrase in the given language and even calculate the probability of the generation of the sentence.
The probability of the whole tree can be calculated by multiplying the probabilities of each individual rule transformation: 0.9 * 0.1 * 0.05 * 0.05 * 0.4 * 0.05 * 0.3 = 0.00000135.
To conserve space, we can also write the tree in linear form:
[S [NP [Name mary]] [VP [VP [Verb is]] [Adjective sad]]]
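As a quick arithmetic check (not part of the original notebook), multiplying the rule probabilities reproduces that value:
# Quick check of the product above.
from functools import reduce
rule_probs = [0.9, 0.1, 0.05, 0.05, 0.4, 0.05, 0.3]
print(reduce(lambda a, b: a * b, rule_probs))   # ~1.35e-06, i.e. 0.00000135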
Unfortunately, the current grammar overgenerates, that is, it creates sentences that are not grammatically correct (according to the English language), like "the fence are john which say". It also undergenerates, which means there are valid sentences it does not generate, like "he believes mary is sad".
Implementation
In the module we have implementations for both probabilistic and non-probabilistic grammars. Both of these implementations follow the same format. There are functions for the lexicon and the rules which can be combined to create a grammar object.
Non-Probabilistic
Execute the cell below to view the implementations:
End of explanation
lexicon = Lexicon(
Verb = "is | say | are",
Noun = "robot | sheep | fence",
Adjective = "good | new | sad",
Adverb = "here | lightly | now",
Pronoun = "me | you | he",
RelPro = "that | who | which",
Name = "john | mary | peter",
Article = "the | a | an",
Preposition = "to | in | at",
Conjuction = "and | or | but",
Digit = "1 | 2 | 0"
)
print("Lexicon", lexicon)
rules = Rules(
S = "NP VP | S Conjuction S",
NP = "Pronoun | Name | Noun | Article Noun \
| Article Adjs Noun | Digit | NP PP | NP RelClause",
VP = "Verb | VP NP | VP Adjective | VP PP | VP Adverb",
Adjs = "Adjective | Adjective Adjs",
PP = "Preposition NP",
RelClause = "RelPro VP"
)
print("\nRules:", rules)
Explanation: Let's build a lexicon and a grammar for the above language:
End of explanation
grammar = Grammar("A Simple Grammar", rules, lexicon)
print("How can we rewrite 'VP'?", grammar.rewrites_for('VP'))
print("Is 'the' an article?", grammar.isa('the', 'Article'))
print("Is 'here' a noun?", grammar.isa('here', 'Noun'))
Explanation: Both the functions return a dictionary with keys the left-hand side of the rules. For the lexicon, the values are the terminals for each left-hand side non-terminal, while for the rules the values are the right-hand sides as lists.
We can now use the variables lexicon and rules to build a grammar. After we've done so, we can find the transformations of a non-terminal (the Noun, Verb and the other basic classes do not count as proper non-terminals in the implementation). We can also check if a word is in a particular class.
End of explanation
E_Chomsky = Grammar("E_Prob_Chomsky", # A Grammar in Chomsky Normal Form
Rules(
S = "NP VP",
NP = "Article Noun | Adjective Noun",
VP = "Verb NP | Verb Adjective",
),
Lexicon(
Article = "the | a | an",
Noun = "robot | sheep | fence",
Adjective = "good | new | sad",
Verb = "is | say | are"
))
print(E_Chomsky.cnf_rules())
Explanation: If the grammar is in Chomsky Normal Form, we can call the class function cnf_rules to get all the rules in the form of (X, Y, Z) for each X -> Y Z rule. Since the above grammar is not in CNF though, we have to create a new one.
End of explanation
grammar.generate_random('S')
Explanation: Finally, we can generate random phrases using our grammar. Most of them will be complete gibberish, falling under the overgenerated phrases of the grammar. That goes to show that in the grammar the valid phrases are much fewer than the overgenerated ones.
End of explanation
psource(ProbLexicon, ProbRules, ProbGrammar)
Explanation: Probabilistic
The probabilistic grammars follow the same approach. They take as input a string, are assembled from a grammar and a lexicon and can generate random sentences (giving the probability of the sentence). The main difference is that in the lexicon we have tuples (terminal, probability) instead of strings and for the rules we have a list of tuples (list of non-terminals, probability) instead of list of lists of non-terminals.
Execute the cells to read the code:
End of explanation
lexicon = ProbLexicon(
Verb = "is [0.5] | say [0.3] | are [0.2]",
Noun = "robot [0.4] | sheep [0.4] | fence [0.2]",
Adjective = "good [0.5] | new [0.2] | sad [0.3]",
Adverb = "here [0.6] | lightly [0.1] | now [0.3]",
Pronoun = "me [0.3] | you [0.4] | he [0.3]",
RelPro = "that [0.5] | who [0.3] | which [0.2]",
Name = "john [0.4] | mary [0.4] | peter [0.2]",
Article = "the [0.5] | a [0.25] | an [0.25]",
Preposition = "to [0.4] | in [0.3] | at [0.3]",
Conjuction = "and [0.5] | or [0.2] | but [0.3]",
Digit = "0 [0.35] | 1 [0.35] | 2 [0.3]"
)
print("Lexicon", lexicon)
rules = ProbRules(
S = "NP VP [0.6] | S Conjuction S [0.4]",
NP = "Pronoun [0.2] | Name [0.05] | Noun [0.2] | Article Noun [0.15] \
| Article Adjs Noun [0.1] | Digit [0.05] | NP PP [0.15] | NP RelClause [0.1]",
VP = "Verb [0.3] | VP NP [0.2] | VP Adjective [0.25] | VP PP [0.15] | VP Adverb [0.1]",
Adjs = "Adjective [0.5] | Adjective Adjs [0.5]",
PP = "Preposition NP [1]",
RelClause = "RelPro VP [1]"
)
print("\nRules:", rules)
Explanation: Let's build a lexicon and rules for the probabilistic grammar:
End of explanation
grammar = ProbGrammar("A Simple Probabilistic Grammar", rules, lexicon)
print("How can we rewrite 'VP'?", grammar.rewrites_for('VP'))
print("Is 'the' an article?", grammar.isa('the', 'Article'))
print("Is 'here' a noun?", grammar.isa('here', 'Noun'))
Explanation: Let's use the above to assemble our probabilistic grammar and run some simple queries:
End of explanation
E_Prob_Chomsky = ProbGrammar("E_Prob_Chomsky", # A Probabilistic Grammar in CNF
ProbRules(
S = "NP VP [1]",
NP = "Article Noun [0.6] | Adjective Noun [0.4]",
VP = "Verb NP [0.5] | Verb Adjective [0.5]",
),
ProbLexicon(
Article = "the [0.5] | a [0.25] | an [0.25]",
Noun = "robot [0.4] | sheep [0.4] | fence [0.2]",
Adjective = "good [0.5] | new [0.2] | sad [0.3]",
Verb = "is [0.5] | say [0.3] | are [0.2]"
))
print(E_Prob_Chomsky.cnf_rules())
Explanation: If we have a grammar in CNF, we can get a list of all the rules. Let's create a grammar in the form and print the CNF rules:
End of explanation
sentence, prob = grammar.generate_random('S')
print(sentence)
print(prob)
Explanation: Lastly, we can generate random sentences from this grammar. The function generate_random returns a tuple (sentence, probability).
End of explanation
psource(HITS)
Explanation: As with the non-probabilistic grammars, this one mostly overgenerates. You can also see that the probability is very, very low, which means there are a ton of generateable sentences (in this case infinite, since we have recursion; notice how VP can produce another VP, for example).
HITS
Overview
Hyperlink-Induced Topic Search (or HITS for short) is an algorithm for information retrieval and page ranking. You can read more on information retrieval in the text notebook. Essentially, given a collection of documents and a user's query, such systems return to the user the documents most relevant to what the user needs. The HITS algorithm differs from a lot of other similar ranking algorithms (like Google's Pagerank) as the page ratings in this algorithm are dependent on the given query. This means that for each new query the result pages must be computed anew. This cost might be prohibitive for many modern search engines, so many steer away from this approach.
HITS first finds a list of relevant pages to the query and then adds pages that link to or are linked from these pages. Once the set is built, we define two values for each page. Authority on the query, the degree of pages from the relevant set linking to it and hub of the query, the degree that it points to authoritative pages in the set. Since we do not want to simply count the number of links from a page to other pages, but we also want to take into account the quality of the linked pages, we update the hub and authority values of a page in the following manner, until convergence:
Hub score = The sum of the authority scores of the pages it links to.
Authority score = The sum of hub scores of the pages it is linked from.
So the higher quality the pages a page is linked to and from, the higher its scores.
We then normalize the scores by dividing each score by the square root of the sum of the squares of the respective scores of all pages. When the values converge, we return the top-valued pages. Note that because we normalize the values, the algorithm is guaranteed to converge.
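A simplified sketch of one update-and-normalization step, for intuition only (the module's HITS function is the actual implementation):
import math

# pages: dict of name -> {'hub': float, 'auth': float}
# in_links / out_links: dict of name -> list of page names
def hits_step(pages, in_links, out_links):
    new_auth = {p: sum(pages[q]['hub'] for q in in_links[p]) for p in pages}
    new_hub = {p: sum(pages[q]['auth'] for q in out_links[p]) for p in pages}
    auth_norm = math.sqrt(sum(v * v for v in new_auth.values())) or 1.0
    hub_norm = math.sqrt(sum(v * v for v in new_hub.values())) or 1.0
    for p in pages:
        pages[p]['auth'] = new_auth[p] / auth_norm
        pages[p]['hub'] = new_hub[p] / hub_norm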
Implementation
The source code for the algorithm is given below:
End of explanation
testHTML = Like most other male mammals, a man inherits an
X from his mom and a Y from his dad.
testHTML2 = "a mom and a dad"
pA = Page('A', ['B', 'C', 'E'], ['D'])
pB = Page('B', ['E'], ['A', 'C', 'D'])
pC = Page('C', ['B', 'E'], ['A', 'D'])
pD = Page('D', ['A', 'B', 'C', 'E'], [])
pE = Page('E', [], ['A', 'B', 'C', 'D', 'F'])
pF = Page('F', ['E'], [])
nlp.pageDict = {pA.address: pA, pB.address: pB, pC.address: pC,
pD.address: pD, pE.address: pE, pF.address: pF}
nlp.pagesIndex = nlp.pageDict
nlp.pagesContent ={pA.address: testHTML, pB.address: testHTML2,
pC.address: testHTML, pD.address: testHTML2,
pE.address: testHTML, pF.address: testHTML2}
Explanation: First we compile the collection of pages as mentioned above. Then, we initialize the authority and hub scores for each page and finally we update and normalize the values until convergence.
A quick overview of the helper functions we use:
relevant_pages: Returns relevant pages from pagesIndex given a query.
expand_pages: Adds to the collection pages linked to and from the given pages.
normalize: Normalizes authority and hub scores.
ConvergenceDetector: A class that checks for convergence, by keeping a history of the pages' scores and checking if they change or not.
Page: The template for pages. Stores the address, authority/hub scores and in-links/out-links.
Example
Before we begin we need to define a list of sample pages to work on. The pages are pA, pB and so on and their text is given by testHTML and testHTML2. The Page class takes as arguments the in-links and out-links as lists. For page "A", the in-links are "B", "C" and "E" while the sole out-link is "D".
We also need to set the nlp global variables pageDict, pagesIndex and pagesContent.
End of explanation
HITS('mammals')
page_list = ['A', 'B', 'C', 'D', 'E', 'F']
auth_list = [pA.authority, pB.authority, pC.authority, pD.authority, pE.authority, pF.authority]
hub_list = [pA.hub, pB.hub, pC.hub, pD.hub, pE.hub, pF.hub]
Explanation: We can now run the HITS algorithm. Our query will be 'mammals' (note that while the content of the HTML doesn't matter, it should include the query words or else no page will be picked at the first step).
End of explanation
for i in range(6):
p = page_list[i]
a = auth_list[i]
h = hub_list[i]
print("{}: total={}, auth={}, hub={}".format(p, a + h, a, h))
Explanation: Let's see how the pages were scored:
End of explanation
psource(CYK_parse)
Explanation: The top score is 0.82 by "C". This is the most relevant page according to the algorithm. You can see that the pages it links to, "A" and "D", have the two highest authority scores (therefore "C" has a high hub score) and the pages it is linked from, "B" and "E", have the highest hub scores (so "C" has a high authority score). By combining these two facts, we get that "C" is the most relevant page. It is worth noting that it does not matter if the given page contains the query words, just that it links and is linked from high-quality pages.
QUESTION ANSWERING
Question Answering is a type of Information Retrieval system, where we have a question instead of a query and instead of relevant documents we want the computer to return a short sentence, phrase or word that answers our question. To better understand the concept of question answering systems, you can first read the "Text Models" and "Information Retrieval" section from the text notebook.
A typical example of such a system is AskMSR (Banko et al., 2002), a system for question answering that performed admirably against more sophisticated algorithms. The basic idea behind it is that a lot of questions have already been answered in the web numerous times. The system doesn't know a lot about verbs, or concepts or even what a noun is. It knows about 15 different types of questions and how they can be written as queries. It can rewrite [Who was George Washington's second in command?] as the query [* was George Washington's second in command] or [George Washington's second in command was *].
After rewriting the questions, it issues these queries and retrieves the short text around the query terms. It then breaks the result into 1, 2 or 3-grams. Filters are also applied to increase the chances of a correct answer. If the query starts with "who", we filter for names, if it starts with "how many" we filter for numbers and so on. We can also filter out the words appearing in the query. For the above query, the answer "George Washington" is wrong, even though it is quite possible the 2-gram would appear a lot around the query terms.
Finally, the different results are weighted by the generality of the queries. The result from the general boolean query [George Washington OR second in command] weighs less than the more specific query [George Washington's second in command was *]. As an answer we return the most highly-ranked n-gram.
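A toy sketch of the rewrite step for one question type (purely illustrative, not the actual AskMSR system):
def rewrite_who_question(question):
    # "Who was X?" -> statement patterns with a wildcard where the answer should appear
    assert question.lower().startswith('who was ') and question.endswith('?')
    x = question[len('Who was '):-1]
    return ['* was ' + x, x + ' was *']

print(rewrite_who_question("Who was George Washington's second in command?"))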
CYK PARSE
Overview
Syntactic analysis (or parsing) of a sentence is the process of uncovering the phrase structure of the sentence according to the rules of a grammar. There are two main approaches to parsing. Top-down, where we start with the starting symbol and build a parse tree with the given words as its leaves, and bottom-up, where we start from the given words and build a tree that has the starting symbol as its root. Both approaches involve "guessing" ahead, so it can take a long time to parse a sentence (wrong guesses mean a lot of backtracking). Thankfully, a lot of effort is spent in analyzing already analyzed substrings, so we can follow a dynamic programming approach to store and reuse these parses instead of recomputing them. The CYK Parsing Algorithm (named after its inventors, Cocke, Younger and Kasami) utilizes this technique to parse sentences of a grammar in Chomsky Normal Form.
The CYK algorithm returns an M x N x N array (named P), where N is the number of words in the sentence and M the number of non-terminal symbols in the grammar. Each element in this array shows the probability of a substring being transformed from a particular non-terminal. To find the most probable parse of the sentence, a search in the resulting array is required. Search heuristic algorithms work well in this space, and we can derive the heuristics from the properties of the grammar.
The algorithm in short works like this: There is an external loop that determines the length of the substring. Then the algorithm loops through the words in the sentence. For each word, it again loops through all the words to its right up to the first-loop length. The substring it will work on in this iteration is the words from the second-loop word with first-loop length. Finally, it loops through all the rules in the grammar and updates the substring's probability for each right-hand side non-terminal.
Implementation
The implementation takes as input a list of words and a probabilistic grammar (from the ProbGrammar class detailed above) in CNF and returns the table/dictionary P. An item's key in P is a tuple in the form (Non-terminal, start of substring, length of substring), and the value is a probability. For example, for the sentence "the monkey is dancing" and the substring "the monkey" an item can be ('NP', 0, 2): 0.5, which means the first two words (the substring from index 0 and length 2) have a 0.5 probability of coming from the NP non-terminal.
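For example (sketch), once P has been computed for a list of words named words, the probability of the best derivation of the whole sentence from S can be read off directly:
whole_sentence_prob = P[('S', 0, len(words))]   # e.g. 0.015 for "the robot is good"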
Before we continue, you can take a look at the source code by running the cell below:
End of explanation
E_Prob_Chomsky = ProbGrammar("E_Prob_Chomsky", # A Probabilistic Grammar in CNF
ProbRules(
S = "NP VP [1]",
NP = "Article Noun [0.6] | Adjective Noun [0.4]",
VP = "Verb NP [0.5] | Verb Adjective [0.5]",
),
ProbLexicon(
Article = "the [0.5] | a [0.25] | an [0.25]",
Noun = "robot [0.4] | sheep [0.4] | fence [0.2]",
Adjective = "good [0.5] | new [0.2] | sad [0.3]",
Verb = "is [0.5] | say [0.3] | are [0.2]"
))
Explanation: When updating the probability of a substring, we pick the max of its current one and the probability of the substring broken into two parts: one from the second-loop word with third-loop length, and the other from the first part's end to the remainder of the first-loop length.
Example
Let's build a probabilistic grammar in CNF:
End of explanation
words = ['the', 'robot', 'is', 'good']
grammar = E_Prob_Chomsky
P = CYK_parse(words, grammar)
print(P)
Explanation: Now let's see the probabilities table for the sentence "the robot is good":
End of explanation
parses = {k: p for k, p in P.items() if p >0}
print(parses)
Explanation: A defaultdict object is returned (defaultdict is basically a dictionary but with a default value/type). Keys are tuples in the form mentioned above and the values are the corresponding probabilities. Most of the items/parses have a probability of 0. Let's filter those out to take a better look at the parses that matter.
End of explanation
psource(Chart)
Explanation: The item ('Article', 0, 1): 0.5 means that the first item came from the Article non-terminal with a chance of 0.5. A more complicated item, one with two words, is ('NP', 0, 2): 0.12 which covers the first two words. The probability of the substring "the robot" coming from the NP non-terminal is 0.12. Let's try and follow the transformations from NP to the given words (top-down) to make sure this is indeed the case:
The probability of NP transforming to Article Noun is 0.6.
The probability of Article transforming to "the" is 0.5 (total probability = 0.6*0.5 = 0.3).
The probability of Noun transforming to "robot" is 0.4 (total = 0.3*0.4 = 0.12).
Thus, the total probability of the transformation is 0.12.
Notice how the probability for the whole string (given by the key ('S', 0, 4)) is 0.015. This means the most probable parsing of the sentence has a probability of 0.015.
CHART PARSING
Overview
Let's now take a look at a more general chart parsing algorithm. Given a non-probabilistic grammar and a sentence, this algorithm builds a parse tree in a top-down manner, with the words of the sentence as the leaves. It works with a dynamic programming approach, building a chart to store parses for substrings so that it doesn't have to analyze them again (just like the CYK algorithm). Each non-terminal, starting from S, gets replaced by its right-hand side rules in the chart, until we end up with the correct parses.
Implementation
A parse is in the form [start, end, non-terminal, sub-tree, expected-transformation], where sub-tree is a tree with the corresponding non-terminal as its root and expected-transformation is a right-hand side rule of the non-terminal.
The chart parsing is implemented in a class, Chart. It is initialized with a grammar and can return the list of all the parses of a sentence with the parses function.
The chart is a list of lists. The lists correspond to the lengths of substrings (including the empty string), from start to finish. When we say 'a point in the chart', we refer to a list of a certain length.
A quick rundown of the class functions:
parses: Returns a list of parses for a given sentence. If the sentence can't be parsed, it will return an empty list. Initializes the process by calling parse from the starting symbol.
parse: Parses the list of words and builds the chart.
add_edge: Adds another edge to the chart at a given point. Also, examines whether the edge extends or predicts another edge. If the edge itself is not expecting a transformation, it will extend other edges and it will predict edges otherwise.
scanner: Given a word and a point in the chart, it extends edges that were expecting a transformation that can result in the given word. For example, if the word 'the' is an 'Article' and we are examining two edges at a chart's point, with one expecting an 'Article' and the other a 'Verb', the first one will be extended while the second one will not.
predictor: If an edge can't extend other edges (because it is expecting a transformation itself), we will add to the chart rules/transformations that can help extend the edge. The new edges come from the right-hand side of the expected transformation's rules. For example, if an edge is expecting the transformation 'Adjective Noun', we will add to the chart an edge for each right-hand side rule of the non-terminal 'Adjective'.
extender: Extends edges given an edge (called E). If E's non-terminal is the same as the expected transformation of another edge (let's call it A), add to the chart a new edge with the non-terminal of A and the transformations of A minus the non-terminal that matched with E's non-terminal. For example, if an edge E has 'Article' as its non-terminal and is expecting no transformation, we need to see what edges it can extend. Let's examine the edge N. This expects a transformation of 'Noun Verb'. 'Noun' does not match with 'Article', so we move on. Another edge, A, expects a transformation of 'Article Noun' and has a non-terminal of 'NP'. We have a match! A new edge will be added with 'NP' as its non-terminal (the non-terminal of A) and 'Noun' as the expected transformation (the rest of the expected transformation of A).
You can view the source code by running the cell below:
End of explanation
chart = Chart(nlp.E0)
Explanation: Example
We will use the grammar E0 to parse the sentence "the stench is in 2 2".
First we need to build a Chart object:
End of explanation
print(chart.parses('the stench is in 2 2'))
Explanation: And then we simply call the parses function:
End of explanation
chart_trace = Chart(nlp.E0, trace=True)
chart_trace.parses('the stench is in 2 2')
Explanation: You can see which edges get added by setting the optional initialization argument trace to true.
End of explanation
print(chart.parses('the stench 2 2'))
Explanation: Let's try and parse a sentence that is not recognized by the grammar:
End of explanation |
11,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Pandas</h1>
Step1: <h2>Imports</h2>
Step2: <h2>The structure of a dataframe</h2>
Step3: <h3>Accessing columns and rows</h3>
Step4: <h3>Getting column data</h3>
Step5: <h3>Getting row data</h3>
Step6: <h3>Getting a row by row number</h3>
Step7: <h3>Getting multiple columns</h3>
Step8: <h3>Getting a specific cell</h3>
Step9: <h3>Slicing</h3>
Step10: <h2>Pandas datareader</h2>
<li>Access data from html tables on any web page</li>
<li>Get data from google finance</li>
<li>Get data from the federal reserve</li>
<h3>HTML Tables</h3>
<li>Pandas datareader can read a table in an html page into a dataframe
<li>the read_html function returns a list of all dataframes with one dataframe for each html table on the page
<h4>Example
Step11: <h4>The page contains only one table so the read_html function returns a list of one element</h4>
Step12: <h4>Note that the read_html function has automatically detected the header columns</h4>
<h4>If an index is necessary, we need to explicitly specify it</h4>
Step13: <h4>Now we can use .loc to extract specific currency rates</h4>
Step14: <h3>Working with views and copies</h3>
<h4>Chained indexing creates a copy and changes to the copy won't be reflected in the original dataframe</h4>
Step15: <h2>Getting historical stock prices from Google finance</h2>
Usage
Step16: <h2>Datareader documentation</h2>
http
Step17: <h3>Get summary statistics</h3>
<li>The "describe" function returns a dataframe containing summary stats for all numerical columns
<li>Columns containing non-numerical data are ignored
Step18: <h4>Calculate the percentage of days that the stock has closed higher than its open</h4>
Step19: <h4>Calculate percent changes</h4>
<li>The function pct_change computes a percent change between successive rows (times in timeseries data)
<li>Defaults to a single time delta
<li>With an argument, the time delta can be changed
Step20: <h3>NaN support</h3>
Pandas functions can ignore NaNs
Step21: <h3>Rolling windows</h3>
<li>"rolling" function extracts rolling windows
<li>For example, the 21 period rolling window of the 13 period percent change
Step22: <h4>Calculate something on the rolling windows</h4>
<h4>Example
Step23: <h4>Calculate several moving averages and graph them</h4>
Step24: <h2>Linear regression with pandas</h2>
<h4>Example
Step25: <h4>Let's calculate returns (the 1 day percent change)</h4>
Step26: <h4>Let's visualize the relationship between each stock and the ETF</h4>
Step27: <h4>The correlation matrix</h4>
Step28: <h3>Basic risk analysis</h3>
<h4>We'll plot the mean and std of returns for each ticker to get a sense of the risk return profile</h4>
Step29: <h2>Regressions</h2>
http
Step30: <h4>Finally plot the fitted line with the actual y values | Python Code:
#installing pandas libraries
!pip install pandas-datareader
!pip install --upgrade html5lib==1.0b8
#There is a bug in the latest version of html5lib so install an earlier version
#Restart kernel after installing html5lib
Explanation: <h1>Pandas</h1>
End of explanation
import pandas as pd #pandas library
from pandas_datareader import data #data readers (google, html, etc.)
#The following line ensures that graphs are rendered in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt #Plotting library
import datetime as dt #datetime for timeseries support
Explanation: <h2>Imports</h2>
End of explanation
pd.DataFrame([[1,2,3],[1,2,3]],columns=['A','B','C'])
Explanation: <h2>The structure of a dataframe</h2>
End of explanation
df = pd.DataFrame([['r1','00','01','02'],['r2','10','11','12'],['r3','20','21','22']],columns=['row_label','A','B','C'])
print(id(df))
df.set_index('row_label',inplace=True)
print(id(df))
df
data = {'nationality': ['UK', 'China', 'US', 'UK', 'Japan', 'China', 'UK', 'UK', 'Japan', 'US'],
'age': [25, 30, 15, np.nan, 25, 22, np.nan,45 ,18, 33],
'type': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'diabetes': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df=pd.DataFrame(data=data,index=labels)
#print(df[df['age'].between(20,30)])
#print(df.groupby('nationality').mean()['age'])
#print(df.sort_values(by=['age','type'],ascending=[False,True]))
df['nationality'] = df['nationality'].replace('US','United States')
print(df)
df.iloc[1] #.ix is deprecated in newer pandas; .iloc gives positional (row number) access
Explanation: <h3>Accessing columns and rows</h3>
End of explanation
df['B']
Explanation: <h3>Getting column data</h3>
End of explanation
df.loc['r1']
Explanation: <h3>Getting row data</h3>
End of explanation
df.iloc[0]
Explanation: <h3>Getting a row by row number</h3>
End of explanation
df[['B','A']] #Note that the column identifiers are in a list
Explanation: <h3>Getting multiple columns</h3>
End of explanation
df.loc['r2','B']
df.loc['r2']['A']
Explanation: <h3>Getting a specific cell</h3>
End of explanation
print(df)
print(df.loc['r1':'r2'])
df.loc['r1':'r2','B':'C']
Explanation: <h3>Slicing</h3>
End of explanation
#df_list = pd.read_html('http://www.bloomberg.com/markets/currencies/major')
df_list = pd.read_html('http://www.waihuipaijia.cn/'
, encoding='utf-8')
print(len(df_list))
Explanation: <h2>Pandas datareader</h2>
<li>Access data from html tables on any web page</li>
<li>Get data from google finance</li>
<li>Get data from the federal reserve</li>
<h3>HTML Tables</h3>
<li>Pandas datareader can read a table in an html page into a dataframe
<li>the read_html function returns a list of all dataframes with one dataframe for each html table on the page
<h4>Example: Read the tables from a currency exchange-rate page (the original example used the Google Finance currencies page)</h4>
End of explanation
df = df_list[0]
print(df)
Explanation: <h4>The page contains only one table so the read_html function returns a list of one element</h4>
End of explanation
df.set_index('Currency',inplace=True)
print(df)
Explanation: <h4>Note that the read_html function has automatically detected the header columns</h4>
<h4>If an index is necessary, we need to explicitly specify it</h4>
End of explanation
df.loc['EUR-CHF','Value']
Explanation: <h4>Now we can use .loc to extract specific currency rates</h4>
End of explanation
eur_usd = df.loc['EUR-USD']['Change'] #This is chained indexing
df.loc['EUR-USD']['Change'] = 1.0 #Here we are changing a value in a copy of the dataframe
print(eur_usd)
print(df.loc['EUR-USD']['Change']) #Neither eur_usd, nor the dataframe are changed
eur_usd = df.loc['EUR-USD','Change'] #eur_usd points to the value inside the dataframe
df.loc['EUR-USD','Change'] = 1.0 #Change the value in the view
print(eur_usd) #eur_usd is changed (because it points to the view)
print(df.loc['EUR-USD']['Change']) #The dataframe has been correctly updated
Explanation: <h3>Working with views and copies</h3>
<h4>Chained indexing creates a copy and changes to the copy won't be reflected in the original dataframe</h4>
End of explanation
from pandas_datareader import data
import datetime as dt
start=dt.datetime(2017, 1, 1)
end=dt.datetime.today()
print(start,end)
df = data.DataReader('IBM', 'google', start, end)
df
Explanation: <h2>Getting historical stock prices from Google finance</h2>
Usage: DataReader(ticker,source,startdate,enddate)<br>
Unfortunately, the Yahoo finance datareader has stopped working because of a change to Yahoo's website
End of explanation
df['UP']=np.where(df['Close']>df['Open'],1,0)
df
Explanation: <h2>Datareader documentation</h2>
http://pandas-datareader.readthedocs.io/en/latest/
<h3>Working with a timeseries data frame</h3>
<li>The data is organized by time with the index serving as the timeline
<h4>Creating new columns</h4>
<li>Add a column to a dataframe
<li>Base the elements of the column on some combination of data in the existing columns
<h4>Example: Number of Days that the stock closed higher than it opened
<li>We'll create a new column with the header "UP"
<li>And use np.where to decide what to put in the column
End of explanation
df.describe()
Explanation: <h3>Get summary statistics</h3>
<li>The "describe" function returns a dataframe containing summary stats for all numerical columns
<li>Columns containing non-numerical data are ignored
End of explanation
df['UP'].sum()/df['UP'].count()
Explanation: <h4>Calculate the percentage of days that the stock has closed higher than its open</h4>
End of explanation
df['Close'].pct_change() #One timeperiod percent change
n=2
df['Close'].pct_change(n) #n timeperiods percent change
Explanation: <h4>Calculate percent changes</h4>
<li>The function pct_change computes a percent change between successive rows (times in timeseries data)
<li>Defaults to a single time delta
<li>With an argument, the time delta can be changed
End of explanation
n=13
df['Close'].pct_change(n).mean()
Explanation: <h3>NaN support</h3>
Pandas functions can ignore NaNs
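For example, with the df used above, most reductions skip NaNs by default and can be forced to include them:
print(df['Close'].pct_change(13).mean())               # NaNs produced by pct_change are skipped
print(df['Close'].pct_change(13).mean(skipna=False))   # nan, because NaNs are now included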
End of explanation
df['Close'].pct_change(n).rolling(21)
Explanation: <h3>Rolling windows</h3>
<li>"rolling" function extracts rolling windows
<li>For example, the 21 period rolling window of the 13 period percent change
End of explanation
n=13
df['Close'].pct_change(n).rolling(21).mean()
Explanation: <h4>Calculate something on the rolling windows</h4>
<h4>Example: mean (the 21 day moving average of the 13 day percent change)
End of explanation
ma_8 = df['Close'].pct_change(n).rolling(window=8).mean()
ma_13= df['Close'].pct_change(n).rolling(window=13).mean()
ma_21= df['Close'].pct_change(n).rolling(window=21).mean()
ma_34= df['Close'].pct_change(n).rolling(window=34).mean()
ma_55= df['Close'].pct_change(n).rolling(window=55).mean()
ma_8.plot()
ma_34.plot()
Explanation: <h4>Calculate several moving averages and graph them</h4>
End of explanation
import datetime
import pandas_datareader as data
start = datetime.datetime(2015,7,1)
end = datetime.datetime(2016,6,1)
solar_df = data.DataReader(['FSLR', 'TAN','RGSE','SCTY'],'google', start=start,end=end)['Close']
solar_df
Explanation: <h2>Linear regression with pandas</h2>
<h4>Example: TAN is the ticker for a solar ETF. FSLR, RGSE, and SCTY are tickers of companies that build or lease solar panels. Each has a different business model. We'll use pandas to study the risk reward tradeoff between the 4 investments and also see how correlated they are</h4>
End of explanation
rets = solar_df.pct_change()
print(rets)
Explanation: <h4>Let's calculate returns (the 1 day percent change)</h4>
End of explanation
import matplotlib.pyplot as plt
plt.scatter(rets.FSLR,rets.TAN)
plt.scatter(rets.RGSE,rets.TAN)
plt.scatter(rets.SCTY,rets.TAN)
Explanation: <h4>Let's visualize the relationship between each stock and the ETF</h4>
End of explanation
solar_corr = rets.corr()
print(solar_corr)
Explanation: <h4>The correlation matrix</h4>
End of explanation
plt.scatter(rets.mean(), rets.std())
plt.xlabel('Expected returns')
plt.ylabel('Standard deviations')
for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
plt.annotate(
label,
xy = (x, y), xytext = (20, -20),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
plt.show()
Explanation: <h3>Basic risk analysis</h3>
<h4>We'll plot the mean and std of returns for each ticker to get a sense of the risk return profile</h4>
End of explanation
import numpy as np
import statsmodels.api as sm
X=solar_df[['FSLR','RGSE','SCTY']]
X = sm.add_constant(X)
y=solar_df['TAN']
model = sm.OLS(y,X,missing='drop')
result = model.fit()
print(result.summary())
Explanation: <h2>Regressions</h2>
http://statsmodels.sourceforge.net/
<h3>Steps for regression</h3>
<li>Construct y (dependent variable series)
<li>Construct matrix (dataframe) of X (independent variable series)
<li>Add intercept
<li>Model the regression
<li>Get the results
<h3>The statsmodels library contains various regression packages. We'll use the OLS (Ordinary Least Squares) model
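For example (sketch), a few useful attributes of the fitted result object, which the next cell uses for plotting:
print(result.params)          # intercept (const) plus one coefficient per ticker
print(result.rsquared)        # fraction of TAN's variance explained by the three stocks
fitted = result.fittedvalues  # the model's estimate of TAN for each date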
End of explanation
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(y)
ax.plot(result.fittedvalues)
Explanation: <h4>Finally plot the fitted line with the actual y values
End of explanation |
11,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NNabla Python API Demonstration Tutorial
Let us import nnabla first, and some additional useful tools.
Step1: NdArray
NdArray is a data container of a multi-dimensional array. NdArray is device (e.g. CPU, CUDA) and type (e.g. uint8, float32) agnostic, in which both type and device are implicitly casted or transferred when it is used. Below, you create a NdArray with a shape of (2, 3, 4).
Step2: You can see the values held inside a by the following. The values are not initialized, and are created as float32 by default.
Step3: The accessor .data returns a reference to the values of NdArray as numpy.ndarray. You can modify these by using the Numpy API as follows.
Step4: Note that the above operation is all done in the host device (CPU). NdArray provides more efficient functions in case you want to fill all values with a constant, .zero and .fill. They are lazily evaluated when the data is requested (when neural network computation requests the data, or when numpy array is requested by Python) The filling operation is executed within a specific device (e.g. CUDA GPU), and more efficient if you specify the device setting, which we explain later.
Step5: You can create an NdArray instance directly from a Numpy array object.
Step6: NdArray is used in Variable class, as well as NNabla's imperative computation of neural networks. We describe them in the later sections.
Variable
Variable class is used when you construct a neural network. The neural network can be described as a graph in which an edge represents a function (a.k.a operator and layer) which defines operation of a minimum unit of computation, and a node represents a variable which holds input/output values of a function (Function class is explained later). The graph is called "Computation Graph".
In NNabla, a Variable, a node of a computation graph, holds two NdArrays, one for storing the input or output values of a function during forward propagation (executing computation graph in the forward order), while another for storing the backward error signal (gradient) during backward propagation (executing computation graph in backward order to propagate error signals down to parameters (weights) of neural networks). The first one is called data, the second is grad in NNabla.
The following line creates a Variable instance with a shape of (2, 3, 4). It has data and grad as NdArray. The flag need_grad is used to omit unnecessary gradient computation during backprop if set to False.
Step7: You can get the shape by
Step8: Since both data and grad are NdArray, you can get a reference to its values as NdArray with the .data accessor, but also it can be referred by .d or .g property for data and grad respectively.
Step9: Like NdArray, a Variable can also be created from Numpy array(s).
Step10: Besides storing values of a computation graph, pointing a parent edge (function) to trace the computation graph is an important role. Here x doesn't have any connection. Therefore, the .parent property returns None.
Step11: Function
A function defines an operation block of a computation graph as we described above. The module nnabla.functions offers various functions (e.g. Convolution, Affine and ReLU). You can see the list of functions available in the API reference guide.
Step12: As an example, here you will defines a computation graph that computes the element-wise Sigmoid function outputs for the input variable and sums up all values into a scalar. (This is simple enough to explain how it behaves but a meaningless example in the context of neural network training. We will show you a neural network example later.)
Step13: The function API in nnabla.functions takes one (or several) Variable(s) and arguments (if any), and returns one (or several) output Variable(s). The .parent points to the function instance which created it.
Note that no computation occurs at this time since we just define the graph. (This is the default behavior of NNabla computation graph API. You can also fire actual computation during graph definition which we call "Dynamic mode" (explained later)).
Step14: The .forward() at a leaf Variable executes the forward pass computation in the computation graph.
Step15: The .backward() does the backward propagation through the graph. Here we initialize the grad values as zero before backprop since the NNabla backprop algorithm always accumulates the gradient in the root variables.
Step16: NNabla is developed by mainly focused on neural network training and inference. Neural networks have parameters to be learned associated with computation blocks such as Convolution, Affine (a.k.a. fully connected, dense etc.). In NNabla, the learnable parameters are also represented as Variable objects. Just like input variables, those parameter variables are also used by passing into Functions. For example, Affine function takes input, weights and biases as inputs.
Step17: The above example takes an input with B=5 (batchsize) and D=2 (dimensions) and maps it to D'=3 outputs, i.e. (B, D') output.
You may also notice that here you set need_grad=True only for parameter variables (w and b). The x is a non-parameter variable and the root of computation graph. Therefore, it doesn't require gradient computation. In this configuration, the gradient computation for x is not executed in the first affine, which will omit the computation of unnecessary backpropagation.
The next block sets data and initializes grad, then applies forward and backward computation.
Step18: You can see that affine_out holds an output of Affine.
Step19: The resulting gradients of weights and biases are as follows.
Step20: The gradient of x is not changed because need_grad is set as False.
Step21: Parametric Function
Considering parameters as inputs of Function enhances expressiveness and flexibility of computation graphs.
However, to define all parameters for each learnable function is annoying for users to define a neural network.
In NNabla, trainable models are usually created by composing functions that have optimizable parameters.
These functions are called "Parametric Functions".
The Parametric Function API provides various parametric functions and an interface for composing trainable models.
To use parametric functions, import
Step22: The function with optimizable parameter can be created as below.
Step23: The first line creates a parameter scope. The second line then applies PF.affine - an affine transform - to x, and creates a variable c1 holding that result. The parameters are created and initialized randomly at function call, and registered by a name "affine1" using parameter_scope context. The function nnabla.get_parameters() allows to get the registered parameters.
Step24: The name= argument of any PF function creates the equivalent parameter space to the above definition of PF.affine transformation as below. It could save the space of your Python code. The nnabla.parameter_scope is more useful when you group multiple parametric functions such as Convolution-BatchNormalization found in a typical unit of CNNs.
Step25: It is worth noting that the shapes of both outputs and parameter variables (as you can see above) are automatically determined by only providing the output size of affine transformation(in the example above the output size is 3). This helps to create a graph in an easy way.
Step26: Parameter scope can be nested as follows (although a meaningless example).
Step27: This creates the following.
Step28: Also, get_parameters() can be used in parameter_scope. For example
Step29: nnabla.clear_parameters() can be used to delete registered parameters under the scope.
Step30: MLP Example For Explanation
The following block creates a computation graph to predict one dimensional output from two dimensional inputs by a 2 layer fully connected neural network (multi-layer perceptron).
Step31: This will create the following parameter variables.
Step32: As described above, you can execute the forward pass by calling forward method at the terminal variable.
Step33: Training a neural networks needs a loss value to be minimized by gradient descent with backprop. In NNabla, loss function is also a just function, and packaged in the functions module.
Step34: As you've seen above, NNabla backward accumulates the gradients at the root variables. You have to initialize the grad of the parameter variables before backprop (We will show you the easiest way with Solver API).
Step35: Imperative Mode
After performing backprop, gradients are held in parameter variable grads. The next block will update the parameters with vanilla gradient descent.
Step36: The above computation is an example of NNabla's "Imperative Mode" for executing neural networks. Normally, NNabla functions (instances of nnabla.functions) take Variables as their input. When at least one NdArray is provided as an input for NNabla functions (instead of Variables), the function computation will be fired immediately, and returns an NdArray as the output, instead of returning a Variable. In the above example, the NNabla functions F.mul_scalar and F.sub2 are called by the overridden operators * and -=, respectively.
In other words, NNabla's "Imperative mode" doesn't create a computation graph, and can be used like NumPy. If device acceleration such as CUDA is enabled, it can be used like NumPy empowered with device acceleration. Parametric functions can also be used with NdArray input(s). The following block demonstrates a simple imperative execution example.
Step37: Note that in-place substitution from the rhs to the lhs cannot be done by the = operator. For example, when x is an NdArray, writing x = x + 1 will not increment all values of x - instead, the expression on the rhs will create a new NdArray object that is different from the one originally bound by x, and binds the new NdArray object to the Python variable x on the lhs.
For in-place editing of NdArrays, the in-place assignment operators +=, -=, *=, and /= can be used. The copy_from method can also be used to copy values of an existing NdArray to another. For example, incrementing 1 to x, an NdArray, can be done by x.copy_from(x+1). The copy is performed with device acceleration if a device context is specified by using nnabla.set_default_context or nnabla.context_scope.
Step38: Solver
NNabla provides stochastic gradient descent algorithms to optimize parameters listed in the nnabla.solvers module. The parameter updates demonstrated above can be replaced with this Solver API, which is easier and usually faster.
Step39: Just call the the following solver method to fill zero grad region, then backprop
Step40: The following block updates parameters with the Vanilla Sgd rule (equivalent to the imperative example above).
Step41: Toy Problem To Demonstrate Training
The following function defines a regression problem which computes the norm of a vector.
Step43: We visualize this mapping with the contour plot by matplotlib as follows.
Step44: We define a deep prediction neural network.
Step45: We created a 5 layers deep MLP using for-loop. Note that only 3 lines of the code potentially create infinitely deep neural networks. The next block adds helper functions to visualize the learned function.
Step46: Next we instantiate a solver object as follows. We use Adam optimizer which is one of the most popular SGD algorithm used in the literature.
Step47: The following function generates data from the true system infinitely.
Step48: In the next block, we run 2000 training steps (SGD updates).
Step49: Memory usage optimization
Step50: We can confirm the prediction performs fairly well by looking at the following visualization of the ground truth and prediction function.
Step51: You can save learned parameters by nnabla.save_parameters and load by nnabla.load_parameters.
Step52: Both save and load functions can also be used in a parameter scope. | Python Code:
!pip install nnabla-ext-cuda100
!git clone https://github.com/sony/nnabla.git
%cd nnabla/tutorial
import nnabla as nn # Abbreviate as nn for convenience.
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: NNabla Python API Demonstration Tutorial
Let us import nnabla first, and some additional useful tools.
End of explanation
a = nn.NdArray((2, 3, 4))
Explanation: NdArray
NdArray is a data container of a multi-dimensional array. NdArray is device (e.g. CPU, CUDA) and type (e.g. uint8, float32) agnostic, in which both type and device are implicitly casted or transferred when it is used. Below, you create a NdArray with a shape of (2, 3, 4).
End of explanation
print(a.data)
Explanation: You can see the values held inside a by the following. The values are not initialized, and are created as float32 by default.
End of explanation
print('[Substituting random values]')
a.data = np.random.randn(*a.shape)
print(a.data)
print('[Slicing]')
a.data[0, :, ::2] = 0
print(a.data)
Explanation: The accessor .data returns a reference to the values of NdArray as numpy.ndarray. You can modify these by using the Numpy API as follows.
End of explanation
a.fill(1) # Filling all values with one.
print(a.data)
Explanation: Note that the above operation is all done in the host device (CPU). NdArray provides more efficient functions in case you want to fill all values with a constant, .zero and .fill. They are lazily evaluated when the data is requested (when neural network computation requests the data, or when numpy array is requested by Python) The filling operation is executed within a specific device (e.g. CUDA GPU), and more efficient if you specify the device setting, which we explain later.
End of explanation
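The .zero() method works in the same lazy way; a small sketch reusing the a created above:
a.zero()       # marked to be filled with zeros (lazy)
print(a.data)  # the fill is actually evaluated here, when the data is requested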
b = nn.NdArray.from_numpy_array(np.ones(a.shape))
print(b.data)
Explanation: You can create an NdArray instance directly from a Numpy array object.
End of explanation
x = nn.Variable([2, 3, 4], need_grad=True)
print('x.data:', x.data)
print('x.grad:', x.grad)
Explanation: NdArray is used in Variable class, as well as NNabla's imperative computation of neural networks. We describe them in the later sections.
Variable
Variable class is used when you construct a neural network. The neural network can be described as a graph in which an edge represents a function (a.k.a operator and layer) which defines operation of a minimum unit of computation, and a node represents a variable which holds input/output values of a function (Function class is explained later). The graph is called "Computation Graph".
In NNabla, a Variable, a node of a computation graph, holds two NdArrays, one for storing the input or output values of a function during forward propagation (executing computation graph in the forward order), while another for storing the backward error signal (gradient) during backward propagation (executing computation graph in backward order to propagate error signals down to parameters (weights) of neural networks). The first one is called data, the second is grad in NNabla.
The following line creates a Variable instance with a shape of (2, 3, 4). It has data and grad as NdArray. The flag need_grad is used to omit unnecessary gradient computation during backprop if set to False.
End of explanation
x.shape
Explanation: You can get the shape by:
End of explanation
print('x.data')
print(x.d)
x.d = 1.2345 # To avoid NaN
assert np.all(x.d == x.data.data), 'd: {} != {}'.format(x.d, x.data.data)
print('x.grad')
print(x.g)
x.g = 1.2345 # To avoid NaN
assert np.all(x.g == x.grad.data), 'g: {} != {}'.format(x.g, x.grad.data)
# Zeroing grad values
x.grad.zero()
print('x.grad (after `.zero()`)')
print(x.g)
Explanation: Since both data and grad are NdArray, you can get a reference to its values as NdArray with the .data accessor, but also it can be referred by .d or .g property for data and grad respectively.
End of explanation
x2 = nn.Variable.from_numpy_array(np.ones((3,)), need_grad=True)
print(x2)
print(x2.d)
x3 = nn.Variable.from_numpy_array(np.ones((3,)), np.zeros((3,)), need_grad=True)
print(x3)
print(x3.d)
print(x3.g)
Explanation: Like NdArray, a Variable can also be created from Numpy array(s).
End of explanation
print(x.parent)
Explanation: Besides storing values of a computation graph, pointing a parent edge (function) to trace the computation graph is an important role. Here x doesn't have any connection. Therefore, the .parent property returns None.
End of explanation
import nnabla.functions as F
Explanation: Function
A function defines an operation block of a computation graph as we described above. The module nnabla.functions offers various functions (e.g. Convolution, Affine and ReLU). You can see the list of functions available in the API reference guide.
End of explanation
sigmoid_output = F.sigmoid(x)
sum_output = F.reduce_sum(sigmoid_output)
Explanation: As an example, here you will define a computation graph that computes the element-wise Sigmoid function outputs for the input variable and sums up all values into a scalar. (This is simple enough to explain how it behaves, but it is a meaningless example in the context of neural network training. We will show you a neural network example later.)
End of explanation
print("sigmoid_output.parent.name:", sigmoid_output.parent.name)
print("x:", x)
print("sigmoid_output.parent.inputs refers to x:", sigmoid_output.parent.inputs)
print("sum_output.parent.name:", sum_output.parent.name)
print("sigmoid_output:", sigmoid_output)
print("sum_output.parent.inputs refers to sigmoid_output:", sum_output.parent.inputs)
Explanation: The function API in nnabla.functions takes one (or several) Variable(s) and arguments (if any), and returns one (or several) output Variable(s). The .parent points to the function instance which created it.
Note that no computation occurs at this time since we just define the graph. (This is the default behavior of NNabla computation graph API. You can also fire actual computation during graph definition which we call "Dynamic mode" (explained later)).
End of explanation
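As a sketch of that dynamic mode (hedged: this assumes the auto_forward context manager available in recent NNabla releases, which is not used elsewhere in this tutorial):
# with nn.auto_forward():
#     dynamic_out = F.sigmoid(x)  # computed immediately at definition time, no separate forward() call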
sum_output.forward()
print("CG output:", sum_output.d)
print("Reference:", np.sum(1.0 / (1.0 + np.exp(-x.d))))
Explanation: The .forward() at a leaf Variable executes the forward pass computation in the computation graph.
End of explanation
x.grad.zero()
sum_output.backward()
print("d sum_o / d sigmoid_o:")
print(sigmoid_output.g)
print("d sum_o / d x:")
print(x.g)
Explanation: The .backward() does the backward propagation through the graph. Here we initialize the grad values as zero before backprop since the NNabla backprop algorithm always accumulates the gradient in the root variables.
End of explanation
x = nn.Variable([5, 2]) # Input
w = nn.Variable([2, 3], need_grad=True) # Weights
b = nn.Variable([3], need_grad=True) # Biases
affine_out = F.affine(x, w, b) # Create a graph including only affine
Explanation: NNabla is developed with a main focus on neural network training and inference. Neural networks have parameters to be learned, associated with computation blocks such as Convolution and Affine (a.k.a. fully connected, dense, etc.). In NNabla, the learnable parameters are also represented as Variable objects. Just like input variables, those parameter variables are used by passing them into Functions. For example, the Affine function takes input, weights and biases as inputs.
End of explanation
# Set random input and parameters
x.d = np.random.randn(*x.shape)
w.d = np.random.randn(*w.shape)
b.d = np.random.randn(*b.shape)
# Initialize grad
x.grad.zero() # Just for showing gradients are not computed when need_grad=False (default).
w.grad.zero()
b.grad.zero()
# Forward and backward
affine_out.forward()
affine_out.backward()
# Note: Calling backward on a non-scalar Variable propagates 1 as the error signal from every element of the output.
Explanation: The above example takes an input with B=5 (batchsize) and D=2 (dimensions) and maps it to D'=3 outputs, i.e. (B, D') output.
You may also notice that here you set need_grad=True only for parameter variables (w and b). The x is a non-parameter variable and the root of computation graph. Therefore, it doesn't require gradient computation. In this configuration, the gradient computation for x is not executed in the first affine, which will omit the computation of unnecessary backpropagation.
The next block sets data and initializes grad, then applies forward and backward computation.
End of explanation
print('F.affine')
print(affine_out.d)
print('Reference')
print(np.dot(x.d, w.d) + b.d)
Explanation: You can see that affine_out holds an output of Affine.
End of explanation
print("dw")
print(w.g)
print("db")
print(b.g)
Explanation: The resulting gradients of weights and biases are as follows.
End of explanation
print(x.g)
Explanation: The gradient of x is not changed because need_grad is set as False.
End of explanation
import nnabla.parametric_functions as PF
Explanation: Parametric Function
Considering parameters as inputs of Function enhances expressiveness and flexibility of computation graphs.
However, to define all parameters for each learnable function is annoying for users to define a neural network.
In NNabla, trainable models are usually created by composing functions that have optimizable parameters.
These functions are called "Parametric Functions".
The Parametric Function API provides various parametric functions and an interface for composing trainable models.
To use parametric functions, import:
End of explanation
with nn.parameter_scope("affine1"):
c1 = PF.affine(x, 3)
Explanation: The function with optimizable parameter can be created as below.
End of explanation
nn.get_parameters()
Explanation: The first line creates a parameter scope. The second line then applies PF.affine - an affine transform - to x, and creates a variable c1 holding that result. The parameters are created and initialized randomly at function call, and registered by a name "affine1" using parameter_scope context. The function nnabla.get_parameters() allows to get the registered parameters.
End of explanation
c1 = PF.affine(x, 3, name='affine1')
nn.get_parameters()
Explanation: The name= argument of any PF function creates the equivalent parameter space to the above definition of PF.affine transformation as below. It could save the space of your Python code. The nnabla.parameter_scope is more useful when you group multiple parametric functions such as Convolution-BatchNormalization found in a typical unit of CNNs.
End of explanation
c1.shape
Explanation: It is worth noting that the shapes of both outputs and parameter variables (as you can see above) are automatically determined by only providing the output size of affine transformation(in the example above the output size is 3). This helps to create a graph in an easy way.
End of explanation
with nn.parameter_scope('foo'):
h = PF.affine(x, 3)
with nn.parameter_scope('bar'):
h = PF.affine(h, 4)
Explanation: Parameter scope can be nested as follows (although a meaningless example).
End of explanation
nn.get_parameters()
Explanation: This creates the following.
End of explanation
with nn.parameter_scope("foo"):
print(nn.get_parameters())
Explanation: Also, get_parameters() can be used in parameter_scope. For example:
End of explanation
with nn.parameter_scope("foo"):
nn.clear_parameters()
print(nn.get_parameters())
Explanation: nnabla.clear_parameters() can be used to delete registered parameters under the scope.
End of explanation
nn.clear_parameters()
batchsize = 16
x = nn.Variable([batchsize, 2])
with nn.parameter_scope("fc1"):
h = F.tanh(PF.affine(x, 512))
with nn.parameter_scope("fc2"):
y = PF.affine(h, 1)
print("Shapes:", h.shape, y.shape)
Explanation: MLP Example For Explanation
The following block creates a computation graph to predict one dimensional output from two dimensional inputs by a 2 layer fully connected neural network (multi-layer perceptron).
End of explanation
nn.get_parameters()
Explanation: This will create the following parameter variables.
End of explanation
x.d = np.random.randn(*x.shape) # Set random input
y.forward()
print(y.d)
Explanation: As described above, you can execute the forward pass by calling forward method at the terminal variable.
End of explanation
# Variable for label
label = nn.Variable([batchsize, 1])
# Set loss
loss = F.reduce_mean(F.squared_error(y, label))
# Execute forward pass.
label.d = np.random.randn(*label.shape) # Randomly generate labels
loss.forward()
print(loss.d)
Explanation: Training a neural network needs a loss value to be minimized by gradient descent with backprop. In NNabla, a loss function is also just a function, and is packaged in the functions module.
End of explanation
# Collect all parameter variables and init grad.
for name, param in nn.get_parameters().items():
param.grad.zero()
# Gradients are accumulated to grad of params.
loss.backward()
Explanation: As you've seen above, NNabla backward accumulates the gradients at the root variables. You have to initialize the grad of the parameter variables before backprop (We will show you the easiest way with Solver API).
End of explanation
for name, param in nn.get_parameters().items():
param.data -= param.grad * 0.001 # 0.001 as learning rate
Explanation: Imperative Mode
After performing backprop, gradients are held in parameter variable grads. The next block will update the parameters with vanilla gradient descent.
End of explanation
# A simple example of imperative mode.
xi = nn.NdArray.from_numpy_array(np.arange(4).reshape(2, 2))
yi = F.relu(xi - 1)
print(xi.data)
print(yi.data)
Explanation: The above computation is an example of NNabla's "Imperative Mode" for executing neural networks. Normally, NNabla functions (instances of nnabla.functions) take Variables as their input. When at least one NdArray is provided as an input for NNabla functions (instead of Variables), the function computation will be fired immediately, and returns an NdArray as the output, instead of returning a Variable. In the above example, the NNabla functions F.mul_scalar and F.sub2 are called by the overridden operators * and -=, respectively.
In other words, NNabla's "Imperative mode" doesn't create a computation graph, and can be used like NumPy. If device acceleration such as CUDA is enabled, it can be used like NumPy empowered with device acceleration. Parametric functions can also be used with NdArray input(s). The following block demonstrates a simple imperative execution example.
End of explanation
# The following doesn't perform substitution but assigns a new NdArray object to `xi`.
# xi = xi + 1
# The following copies the result of `xi + 1` to `xi`.
xi.copy_from(xi + 1)
assert np.all(xi.data == (np.arange(4).reshape(2, 2) + 1))
# Inplace operations like `+=`, `*=` can also be used (more efficient).
xi += 1
assert np.all(xi.data == (np.arange(4).reshape(2, 2) + 2))
Explanation: Note that in-place substitution from the rhs to the lhs cannot be done by the = operator. For example, when x is an NdArray, writing x = x + 1 will not increment all values of x - instead, the expression on the rhs will create a new NdArray object that is different from the one originally bound by x, and binds the new NdArray object to the Python variable x on the lhs.
For in-place editing of NdArrays, the in-place assignment operators +=, -=, *=, and /= can be used. The copy_from method can also be used to copy values of an existing NdArray to another. For example, incrementing 1 to x, an NdArray, can be done by x.copy_from(x+1). The copy is performed with device acceleration if a device context is specified by using nnabla.set_default_context or nnabla.context_scope.
End of explanation
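As a sketch of the device setting mentioned above (kept commented out; it assumes the CUDA extension installed at the top of this notebook is importable on your machine):
# from nnabla.ext_utils import get_extension_context
# ctx = get_extension_context('cudnn', device_id='0')
# nn.set_default_context(ctx)  # subsequent NdArray/Variable computation then runs on the GPU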
from nnabla import solvers as S
solver = S.Sgd(lr=0.00001)
solver.set_parameters(nn.get_parameters())
# Set random data
x.d = np.random.randn(*x.shape)
label.d = np.random.randn(*label.shape)
# Forward
loss.forward()
Explanation: Solver
NNabla provides stochastic gradient descent algorithms to optimize parameters listed in the nnabla.solvers module. The parameter updates demonstrated above can be replaced with this Solver API, which is easier and usually faster.
End of explanation
solver.zero_grad()
loss.backward()
Explanation: Just call the following solver method to zero-fill the grad regions, then run backprop
End of explanation
solver.update()
Explanation: The following block updates parameters with the Vanilla Sgd rule (equivalent to the imperative example above).
End of explanation
def vector2length(x):
# x : [B, 2] where B is number of samples.
return np.sqrt(np.sum(x ** 2, axis=1, keepdims=True))
Explanation: Toy Problem To Demonstrate Training
The following function defines a regression problem which computes the norm of a vector.
End of explanation
# Data for plotting contour on a grid data.
xs = np.linspace(-1, 1, 100)
ys = np.linspace(-1, 1, 100)
grid = np.meshgrid(xs, ys)
X = grid[0].flatten()
Y = grid[1].flatten()
def plot_true():
Plotting contour of true mapping from a grid data created above.
plt.contourf(xs, ys, vector2length(np.hstack([X[:, None], Y[:, None]])).reshape(100, 100))
plt.axis('equal')
plt.colorbar()
plot_true()
Explanation: We visualize this mapping with the contour plot by matplotlib as follows.
End of explanation
def length_mlp(x):
h = x
for i, hnum in enumerate([4, 8, 4, 2]):
h = F.tanh(PF.affine(h, hnum, name="fc{}".format(i)))
y = PF.affine(h, 1, name='fc')
return y
nn.clear_parameters()
batchsize = 100
x = nn.Variable([batchsize, 2])
y = length_mlp(x)
label = nn.Variable([batchsize, 1])
loss = F.reduce_mean(F.squared_error(y, label))
Explanation: We define a deep prediction neural network.
End of explanation
def predict(inp):
ret = []
for i in range(0, inp.shape[0], x.shape[0]):
xx = inp[i:i + x.shape[0]]
# Imperative execution
xi = nn.NdArray.from_numpy_array(xx)
yi = length_mlp(xi)
ret.append(yi.data.copy())
return np.vstack(ret)
def plot_prediction():
plt.contourf(xs, ys, predict(np.hstack([X[:, None], Y[:, None]])).reshape(100, 100))
plt.colorbar()
plt.axis('equal')
Explanation: We created a 5-layer-deep MLP using a for-loop. Note that only 3 lines of the code can potentially create arbitrarily deep neural networks. The next block adds helper functions to visualize the learned function.
End of explanation
from nnabla import solvers as S
solver = S.Adam(alpha=0.01)
solver.set_parameters(nn.get_parameters())
Explanation: Next we instantiate a solver object as follows. We use Adam optimizer which is one of the most popular SGD algorithm used in the literature.
End of explanation
def random_data_provider(n):
x = np.random.uniform(-1, 1, size=(n, 2))
y = vector2length(x)
return x, y
Explanation: The following function generates data from the true system infinitely.
End of explanation
num_iter = 2000
for i in range(num_iter):
# Sample data and set them to input variables of training.
xx, ll = random_data_provider(batchsize)
x.d = xx
label.d = ll
# Forward propagation given inputs.
loss.forward(clear_no_need_grad=True)
# Parameter gradients initialization and gradients computation by backprop.
solver.zero_grad()
loss.backward(clear_buffer=True)
# Apply weight decay and update by Adam rule.
solver.weight_decay(1e-6)
solver.update()
# Just print progress.
if i % 100 == 0 or i == num_iter - 1:
print("Loss@{:4d}: {}".format(i, loss.d))
Explanation: In the next block, we run 2000 training steps (SGD updates).
End of explanation
loss.forward(clear_buffer=True)
print("The prediction `y` is cleared because it's an intermediate variable.")
print(y.d.flatten()[:4]) # to save space show only 4 values
y.persistent = True
loss.forward(clear_buffer=True)
print("The prediction `y` is kept by the persistent flag.")
print(y.d.flatten()[:4]) # to save space show only 4 value
Explanation: Memory usage optimization: You may notice that, in the above updates, .forward() is called with the clear_no_need_grad= option, and .backward() is called with the clear_buffer= option. Training a neural network in more realistic scenarios usually consumes a huge amount of memory due to the nature of the backpropagation algorithm, in which all of the forward variable buffers should be kept in order to compute the gradient of a function. In a naive implementation, we keep all the variable data and grad alive until the NdArray objects are no longer referenced (i.e. the graph is deleted). The clear_* options in .forward() and .backward() save memory by clearing (erasing) the memory of data and grad when it is not referenced by any subsequent computation. (More precisely, it doesn't actually free the memory; we use our memory pool engine by default to avoid memory alloc/free overhead.) The unreferenced buffers can be re-used in subsequent computation. See the documentation of Variable for more details. Note that the following loss.forward(clear_buffer=True) clears the data of any intermediate variables. If you are interested in intermediate variables for some purpose (e.g. debugging or logging), you can use the .persistent flag to prevent clearing the buffer of a specific Variable, as shown below.
End of explanation
plt.subplot(121)
plt.title("Ground truth")
plot_true()
plt.subplot(122)
plt.title("Prediction")
plot_prediction()
Explanation: We can confirm the prediction performs fairly well by looking at the following visualization of the ground truth and prediction function.
End of explanation
path_param = "param-vector2length.h5"
nn.save_parameters(path_param)
# Remove all once
nn.clear_parameters()
nn.get_parameters()
# Load again
nn.load_parameters(path_param)
print('\n'.join(map(str, nn.get_parameters().items())))
Explanation: You can save learned parameters by nnabla.save_parameters and load by nnabla.load_parameters.
End of explanation
with nn.parameter_scope('foo'):
nn.load_parameters(path_param)
print('\n'.join(map(str, nn.get_parameters().items())))
!rm {path_param} # Clean ups
Explanation: Both save and load functions can also be used in a parameter scope.
End of explanation |
11,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note For this to work, you will need the lsst.sims stack to be installed.
- opsimsummary uses healpy which is installed with the sims stack, but also available from pip/conda
- snsims uses the lsst.sims stack.
Step1: This section pertains to how to write a new Tiling class
```
noTile = snsims.Tiling()
TypeError Traceback (most recent call last)
<ipython-input-9-5f6f8a94508e> in <module>()
----> 1 noTile = snsims.Tiling()
TypeError
Step2: ```
Step4: Using the class HealpixTiles
Currently there is only concrete tiling class that has been implemented. This is the snsims.HealpixTiles class.
This shows how to use the HealpixTiles Class | Python Code:
import os  # needed below for os.path.join
import numpy as np  # needed below for np.arange, np.radians, np.ones, etc.
import opsimsummary as oss
from opsimsummary import Tiling, HealpixTiles
# import snsims
import healpy as hp
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Note For this to work, you will need the lsst.sims stack to be installed.
- opsimsummary uses healpy which is installed with the sims stack, but also available from pip/conda
- snsims uses the lsst.sims stack.
End of explanation
class NoTile(Tiling):
pass
noTile = NoTile()
Explanation: This section pertains to how to write a new Tiling class
```
noTile = snsims.Tiling()
TypeError Traceback (most recent call last)
<ipython-input-9-5f6f8a94508e> in <module>()
----> 1 noTile = snsims.Tiling()
TypeError: Can't instantiate abstract class Tiling with abstract methods init, area, pointingSequenceForTile, tileIDSequence, tileIDsForSN
```
The class snsims.Tiling is an abstract Base class. Therefore, this cannot be instantiated. It must be subclassed, and the set of methods outlined have to be implemented for this to work.
End of explanation
class MyTile(Tiling):
def __init__(self):
pass
@property
def tileIDSequence(self):
return np.arange(100)
def tileIDsForSN(self, ra, dec):
x = ra + dec
y = np.remainder(x, 100.)
return np.floor(y)
def area(self, tileID):
return 1.
def pointingSequenceForTile(self, tileID, pointings):
return None
def positions(self):
pass
myTile = MyTile()
Explanation: ```
noTile = NoTile()
TypeError Traceback (most recent call last)
<ipython-input-4-8ddedac7fb97> in <module>()
----> 1 noTile = NoTile()
TypeError: Can't instantiate abstract class NoTile with abstract methods init, area, pointingSequenceForTile, positions, tileIDSequence, tileIDsForSN
```
The above fails because the methods are not implemented. Below is a stupid (ie. not useful) but minimalist class that would work:
End of explanation
issubclass(HealpixTiles, Tiling)
help(HealpixTiles)
datadir = os.path.join(oss.__path__[0], 'example_data')
opsimdb = os.path.join(datadir, 'enigma_1189_micro.db')
NSIDE = 4
hpOpSim = oss.HealPixelizedOpSim.fromOpSimDB(opsimdb, NSIDE=NSIDE)
NSIDE
hpTileshpOpSim = HealpixTiles(healpixelizedOpSim=hpOpSim, nside=NSIDE)
hpTileshpOpSim.pointingSequenceForTile(1, allPointings=None)
phi, theta = hpTileshpOpSim.positions(1, 10000)
mapvals = np.ones(hp.nside2npix(NSIDE)) * hp.UNSEEN
mapvals[1] = 100
hp.ang2pix(NSIDE, np.radians(theta), np.radians(phi), nest=True)
theta_c, phi_c = hp.pix2ang(4, 1, nest=True)
hp.mollview(mapvals, nest=True)
hp.projscatter(np.radians(theta), np.radians(phi), **dict(s=0.0002))
hp.projscatter(theta_c, phi_c, **dict(s=8., c='r'))
%timeit hpTileshpOpSim.pointingSequenceForTile(33, allPointings=None)
preCompMap = os.path.join(oss.__path__[0], 'example_data', 'healpixels_micro.db')
hpTilesMap = HealpixTiles(nside=1, preComputedMap=preCompMap)
hpTilesMap.pointingSequenceForTile(10, allPointings=None)
%timeit hpOpSim.obsHistIdsForTile(34)
hpTiles = HealpixTiles(healpixelizedOpSim=hpOpSim)
hpTiles.pointingSequenceForTile(34, allPointings=None)
Explanation: Using the class HealpixTiles
Currently there is only concrete tiling class that has been implemented. This is the snsims.HealpixTiles class.
This shows how to use the HealpixTiles Class
End of explanation |
11,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
More fun with pandas
Let's use pandas to dive into some more complicated data.
The data
We're going to be working with FDA import refusal data from 2014 to September 2017. From the source
Step1: Convert the date field to native datetime
We'll use the to_datetime() method to convert the REFUSAL_DATE column from string to datetime. (Via this S/O answer)
Why? Later on we might want to do some time-based analysis. | Python Code:
# to avoid errors with the FDA files, we're going to specify the encoding
# as latin_1, which is common with gov't data
# so it's a decent educated guess to start with
# main dataframe
# country code lookup dataframe
# refusal code lookup dataframe
# specify that the 'ASC_ID' column comes in as a string
# because later we're going to join on it
# run `.head()` to check the output
Explanation: More fun with pandas
Let's use pandas to dive into some more complicated data.
The data
We're going to be working with FDA import refusal data from 2014 to September 2017. From the source:
The Food, Drug, and Cosmetic Act (the Act) authorizes FDA to detain a regulated product that appears to be out of compliance with the Act. The FDA district office will then issue a "Notice of FDA Action" specifying the nature of the violation to the owner or consignee. The owner or consignee is entitled to an informal hearing in order to provide testimony regarding the admissibility of the product. If the owner fails to submit evidence that the product is in compliance or fails to submit a plan to bring the product into compliance, FDA will issue another "Notice of FDA Action" refusing admission to the product. The product then has to be exported or destroyed within 90 days.
Here's the layout for the main file:
Column | Description
------ | -----------
MFG_FIRM_FEI_NUM |
LGL_NAME | Name of the Declared Manufacturer
LINE1_ADRS | Manufacturer Address
LINE2_ADRS | Manufacturer Address
CITY_NAME | Manufacturer City
PROVINCE_STATE | Manufacturer Province or State
ISO_CNTRY_CODE | 2 Letter ISO country code
PRODUCT_CODE | 5-7 Character product code
REFUSAL_DATE |
DISTRICT | FDA District where entry was made
ENTRY_NUM | CBP Entry Number
RFRNC_DOC_ID | CBP Line Number
LINE_NUM | FDA Line number
LINE_SFX_ID | FDA Line Suffix
FDA_SAMPLE_ANALYSIS | Y if there are FDA Analytical Results
PRIVATE_LAB_ANALYSIS | Y if there was a Private Lab package
REFUSAL_CHARGES | asc_id’s (1 to many) of the charges for which product was refused. If there are multiple they will be separated by a comma e.g. 320, 328, 321, 482, 218, 3320
PROD_CODE_DESC_TEXT | FDA's or Corrected Description
Come up with a list of questions to ask
As with any tool, your analysis is only as good as your questions. We'll start with a couple easy ones:
In this time period, which country had the most imports refused? (ISO_CNTRY_CODE)
Which company had the most? (MFG_FIRM_FEI_NUM)
What was the most common reason for refusing a product? (REFUSAL_CHARGES)
Let's get started!
Import pandas
Load the data into data frames
We'll use the read_csv() method to read in the data files:
data/import-refusal.csv => the main data file
data/import-refusal-charge-codes.csv => refusal code lookup file
data/country-codes.csv => country code lookup file (via)
End of explanation
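A minimal sketch of what those load calls could look like (hypothetical variable names; it assumes latin_1 really is the right encoding and that the ASC_ID column lives in the charge-codes file):
import pandas as pd
df = pd.read_csv('data/import-refusal.csv', encoding='latin_1')  # main dataframe
country_codes = pd.read_csv('data/country-codes.csv', encoding='latin_1')  # country code lookup
refusal_codes = pd.read_csv('data/import-refusal-charge-codes.csv',
                            encoding='latin_1', dtype={'ASC_ID': str})  # keep ASC_ID as string for the later join
df.head()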
# convert the date strings to datetime
# make sure the conversion actually happened
# run `.head()` to check the country code output
# run `.head()` to check the output
Explanation: Convert the date field to native datetime
We'll use the to_datetime() method to convert the REFUSAL_DATE column from string to datetime. (Via this S/O answer)
Why? Later on we might want to do some time-based analysis.
End of explanation |
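One possible sketch of that conversion (assuming the main dataframe is named df as above):
df['REFUSAL_DATE'] = pd.to_datetime(df['REFUSAL_DATE'])  # convert the date strings to datetime
df.dtypes  # make sure the conversion actually happened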
11,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this exercise, you'll apply target encoding to features in the Ames dataset.
Run this cell to set everything up!
Step1: First you'll need to choose which features you want to apply a target encoding to. Categorical features with a large number of categories are often good candidates. Run this cell to see how many categories each categorical feature in the Ames dataset has.
Step2: We talked about how the M-estimate encoding uses smoothing to improve estimates for rare categories. To see how many times a category occurs in the dataset, you can use the value_counts method. This cell shows the counts for SaleType, but you might want to consider others as well.
Step3: 1) Choose Features for Encoding
Which features did you identify for target encoding? After you've thought about your answer, run the next cell for some discussion.
Step4: Now you'll apply a target encoding to your choice of feature. As we discussed in the tutorial, to avoid overfitting, we need to fit the encoder on data heldout from the training set. Run this cell to create the encoding and training splits
Step5: 2) Apply M-Estimate Encoding
Apply a target encoding to your choice of categorical features. Also choose a value for the smoothing parameter m (any value is okay for a correct answer).
Step6: If you'd like to see how the encoded feature compares to the target, you can run this cell
Step7: From the distribution plots, does it seem like the encoding is informative?
And this cell will show you the score of the encoded set compared to the original set
Step8: Do you think that target encoding was worthwhile in this case? Depending on which feature or features you chose, you may have ended up with a score significantly worse than the baseline. In that case, it's likely the extra information gained by the encoding couldn't make up for the loss of data used for the encoding.
In this question, you'll explore the problem of overfitting with target encodings. This will illustrate the importance of fitting target encoders on data held out from the training set.
So let's see what happens when we fit the encoder and the model on the same dataset. To emphasize how dramatic the overfitting can be, we'll mean-encode a feature that should have no relationship with SalePrice, a count
Step9: Almost a perfect score!
Step10: And the distributions are almost exactly the same, too.
3) Overfitting with Target Encoders
Based on your understanding of how mean-encoding works, can you explain how XGBoost was able to get an almost perfect fit after mean-encoding the count feature?
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering_new.ex6 import *
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
from category_encoders import MEstimateEncoder
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
warnings.filterwarnings('ignore')
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
for colname in X.select_dtypes(["category", "object"]):
X[colname], _ = X[colname].factorize()
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_squared_log_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
df = pd.read_csv("../input/fe-course-data/ames.csv")
Explanation: Introduction
In this exercise, you'll apply target encoding to features in the Ames dataset.
Run this cell to set everything up!
End of explanation
df.select_dtypes(["object"]).nunique()
Explanation: First you'll need to choose which features you want to apply a target encoding to. Categorical features with a large number of categories are often good candidates. Run this cell to see how many categories each categorical feature in the Ames dataset has.
End of explanation
df["SaleType"].value_counts()
Explanation: We talked about how the M-estimate encoding uses smoothing to improve estimates for rare categories. To see how many times a category occurs in the dataset, you can use the value_counts method. This cell shows the counts for SaleType, but you might want to consider others as well.
End of explanation
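As a reminder of the smoothing idea (a rough sketch of the usual m-estimate blend; the exact formula inside MEstimateEncoder may differ in details): for a category seen n times with in-category target mean y_cat and overall target mean y_all, the encoding is roughly (n * y_cat + m * y_all) / (n + m), so a larger m pulls rare categories harder toward the overall mean.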
# View the solution (Run this cell to receive credit!)
q_1.check()
Explanation: 1) Choose Features for Encoding
Which features did you identify for target encoding? After you've thought about your answer, run the next cell for some discussion.
End of explanation
# Encoding split
X_encode = df.sample(frac=0.20, random_state=0)
y_encode = X_encode.pop("SalePrice")
# Training split
X_pretrain = df.drop(X_encode.index)
y_train = X_pretrain.pop("SalePrice")
Explanation: Now you'll apply a target encoding to your choice of feature. As we discussed in the tutorial, to avoid overfitting, we need to fit the encoder on data heldout from the training set. Run this cell to create the encoding and training splits:
End of explanation
# YOUR CODE HERE: Create the MEstimateEncoder
# Choose a set of features to encode and a value for m
encoder = ____
# Fit the encoder on the encoding split
____
# Encode the training split
#_UNCOMMENT_IF(PROD)_
#X_train = encoder.transform(X_pretrain, y_train)
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
encoder = MEstimateEncoder(
cols=["Neighborhood"],
m=1.0,
)
# Fit the encoder on the encoding split
encoder.fit(X_encode, y_encode)
# Encode the training split
X_train = encoder.transform(X_pretrain, y_train)
q_2.assert_check_passed()
#%%RM_IF(PROD)%%
encoder = MEstimateEncoder(
cols=["MSSubClass"],
m=3.0,
)
# Fit the encoder on the encoding split
encoder.fit(X_encode, y_encode)
# Encode the training split
X_train = encoder.transform(X_pretrain, y_train)
q_2.assert_check_passed()
Explanation: 2) Apply M-Estimate Encoding
Apply a target encoding to your choice of categorical features. Also choose a value for the smoothing parameter m (any value is okay for a correct answer).
End of explanation
feature = encoder.cols
plt.figure(dpi=90)
ax = sns.distplot(y_train, kde=True, hist=False)
ax = sns.distplot(X_train[feature], color='r', ax=ax, hist=True, kde=False, norm_hist=True)
ax.set_xlabel("SalePrice");
Explanation: If you'd like to see how the encoded feature compares to the target, you can run this cell:
End of explanation
X = df.copy()
y = X.pop("SalePrice")
score_base = score_dataset(X, y)
score_new = score_dataset(X_train, y_train)
print(f"Baseline Score: {score_base:.4f} RMSLE")
print(f"Score with Encoding: {score_new:.4f} RMSLE")
Explanation: From the distribution plots, does it seem like the encoding is informative?
And this cell will show you the score of the encoded set compared to the original set:
End of explanation
# Try experimenting with the smoothing parameter m
# Try 0, 1, 5, 50
m = 0
X = df.copy()
y = X.pop('SalePrice')
# Create an uninformative feature
X["Count"] = range(len(X))
X["Count"][1] = 0 # actually need one duplicate value to circumvent error-checking in MEstimateEncoder
# fit and transform on the same dataset
encoder = MEstimateEncoder(cols="Count", m=m)
X = encoder.fit_transform(X, y)
# Results
score = score_dataset(X, y)
print(f"Score: {score:.4f} RMSLE")
Explanation: Do you think that target encoding was worthwhile in this case? Depending on which feature or features you chose, you may have ended up with a score significantly worse than the baseline. In that case, it's likely the extra information gained by the encoding couldn't make up for the loss of data used for the encoding.
In this question, you'll explore the problem of overfitting with target encodings. This will illustrate the importance of fitting target encoders on data held out from the training set.
So let's see what happens when we fit the encoder and the model on the same dataset. To emphasize how dramatic the overfitting can be, we'll mean-encode a feature that should have no relationship with SalePrice, a count: 0, 1, 2, 3, 4, 5, ....
End of explanation
plt.figure(dpi=90)
ax = sns.distplot(y, kde=True, hist=False)
ax = sns.distplot(X["Count"], color='r', ax=ax, hist=True, kde=False, norm_hist=True)
ax.set_xlabel("SalePrice");
Explanation: Almost a perfect score!
End of explanation
# View the solution (Run this cell to receive credit!)
q_3.check()
# Uncomment this if you'd like a hint before seeing the answer
#_COMMENT_IF(PROD)_
q_3.hint()
Explanation: And the distributions are almost exactly the same, too.
3) Overfitting with Target Encoders
Based on your understanding of how mean-encoding works, can you explain how XGBoost was able to get an almost perfect fit after mean-encoding the count feature?
End of explanation |
11,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab Session
Step1: 1. Introduction
In this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.
1.1. Load Image
Several images are provided with this notebook
Step2: 2. Thresholding
Select an intensity threshold by manual inspection of the image histogram
Step3: Plot the binary image after thresholding.
Step4: 3. Dataset generation
Extract pixel coordinates dataset from image and plot them in a scatter plot.
Step5: 4. k-means clustering algorithm
Use the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.
Step6: 5. Spectral clustering algorithm
5.1. Affinity matrix
Compute and visualize the affinity matrix for the given dataset, using a rbf kernel with $\gamma=5$.
Step7: 5.2. Spectral clustering
Apply the spectral clustering algorithm, and show the clustering results using a scatter plot. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread
Explanation: Lab Session: Clustering algorithms for Image Segmentation
Author: Jesús Cid Sueiro
Jan. 2017
End of explanation
name = "birds.jpg"
name = "Seeds.jpg"
birds = imread("Images/" + name)
birdsG = np.sum(birds, axis=2)
# <SOL>
plt.imshow(birdsG, cmap=plt.get_cmap('gray'))
plt.grid(False)
plt.axis('off')
plt.show()
# </SOL>
Explanation: 1. Introduction
In this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract the binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.
1.1. Load Image
Several images are provided with this notebook:
BinarySeeds.png
birds.jpg
blood_frog_1.jpg
cKyDP.jpg
Matricula.jpg
Matricula2.jpg
Seeds.png
Select and visualize image birds.jpg from file and plot it in grayscale
End of explanation
# <SOL>
plt.hist(birdsG.ravel(), bins=256)
plt.show()
# </SOL>
Explanation: 2. Thresholding
Select an intensity threshold by manual inspection of the image histogram
End of explanation
# <SOL>
if name == "birds.jpg":
th = 256
elif name == "Seeds.jpg":
th = 650
birdsBN = birdsG > th
# If there are more white than black pixels, reverse the image
if np.sum(birdsBN) > float(np.prod(birdsBN.shape)/2):
birdsBN = 1-birdsBN
plt.imshow(birdsBN, cmap=plt.get_cmap('gray'))
plt.grid(False)
plt.axis('off')
plt.show()
# </SOL>
Explanation: Plot the binary image after thresholding.
End of explanation
# <SOL>
(h, w) = birdsBN.shape
bW = birdsBN * range(w)
bH = birdsBN * np.array(range(h))[:,np.newaxis]
pSet = [t for t in zip(bW.ravel(), bH.ravel()) if t!=(0,0)]
X = np.array(pSet)
# </SOL>
print(X)
plt.scatter(X[:, 0], X[:, 1], s=5);
plt.axis('equal')
plt.show()
Explanation: 3. Dataset generation
Extract pixel coordinates dataset from image and plot them in a scatter plot.
End of explanation
from sklearn.cluster import KMeans
# <SOL>
est = KMeans(n_clusters=100)  # 100 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=5, cmap='rainbow',
linewidth=0.0)
plt.axis('equal')
plt.show()
# </SOL>
Explanation: 4. k-means clustering algorithm
Use the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.
End of explanation
from sklearn.metrics.pairwise import rbf_kernel
# <SOL>
gamma = 5
sf = 4
Xsub = X[0::sf]
print(Xsub.shape)
gamma = 0.001
K = rbf_kernel(Xsub, Xsub, gamma=gamma)
# </SOL>
# Visualization
# <SOL>
plt.imshow(K, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
# </SOL>
Explanation: 5. Spectral clustering algorithm
5.1. Affinity matrix
Compute and visualize the affinity matrix for the given dataset, using a rbf kernel with $\gamma=5$.
End of explanation
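For reference, the affinity computed by rbf_kernel is $K(x, x') = \exp(-\gamma \|x - x'\|^2)$, so a larger $\gamma$ makes the affinity between two pixels fall off more quickly with their distance.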
# <SOL>
from sklearn.cluster import SpectralClustering
spc = SpectralClustering(n_clusters=100, gamma=gamma, affinity='rbf')
y_kmeans = spc.fit_predict(Xsub)
# </SOL>
plt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0)
plt.axis('equal')
plt.show()
Explanation: 5.2. Spectral clustering
Apply the spectral clustering algorithm, and show the clustering results using a scatter plot.
End of explanation |
11,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression Analysis
Although logistic regression analysis carries the name "regression", it is actually a kind of classification method.
The logistic regression model assumes that the parameter $\theta$ of a Bernoulli random variable depends on the independent variable $x$.
* In the past, the logistic function was preferred because it was cheap to compute and worked even on low-spec hardware; now that computers are much faster, this is no longer a significant advantage.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
Here the parameter $\theta$ is a real number between 0 and 1, and it is a function that depends on the value of $x$ as follows.
$$
\theta = f(w^Tx)
$$
Sigmoid Functions
Unlike the dependent variable of an ordinary regression analysis, the parameter $\theta$ can only take real values between 0 and 1, so a special class of functions $f$, called sigmoid functions, must be used.
A sigmoid function is a function whose output is bounded within a finite interval $(a,b)$ for every real input value and which has a positive slope; the following functions are commonly used.
Logistic Function
$$ \text{logitstic}(z) = \dfrac{1}{1+\exp{(-z)}} $$
오차 함수 (Error Function)
$$ \text{erf}(z) = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,dt $$
하이퍼볼릭 탄젠트 함수 (Hyperbolic tangent)
$$ \tanh(z) = \frac{\sinh z}{\cosh z} = \frac {e^z - e^{-z}} {e^z + e^{-z}} $$
역 탄젠트 함수 (Arc-tangent)
$$ \arctan(z) = \tan^{-1}(z) $$
Step1: 로지스틱 함수
여러가지 시그모이드 중 로지스틱 함수는 다음과 같은 물리적인 의미를 부여할 수 있기 때문에 많이 사용된다.
우선 Bernoulli 시도에서 1이 나올 확률 $\theta$ 와 0이 나올 확률 $1-\theta$ 의 비(ratio)는 다음과 같은 수식이 되며 odds ratio 라고 한다.
$$ \text{odds ratio} = \dfrac{\theta}{1-\theta} $$
이 odds ratio 를 로그 변환한 것이 로지트 함수(Logit function)이다.
$$ z = \text{logit}(\text{odds ratio}) = \log \left(\dfrac{\theta}{1-\theta}\right) $$
로지스틱 함수(Logistic function) 는 이 로지트 함수의 역함수이다.
$$ \text{logitstic}(z) = \theta(z) = \dfrac{1}{1+\exp{(-z)}} $$
로지스틱 모형의 모수 추정
로지스틱 모형은 일종의 비선형 회귀 모형이지만 다음과 같이 MLE(Maximum Likelihood Estimation) 방법으로 모수 $w$를 추정할 수 있다.
여기에서는 종속 변수 $y$가 베르누이 확률 변수라고 가정한다.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
데이터 표본이 ${ x_i, y_i }$일 경우 Log Likelihood $\text{LL}$ 를 구하면 다음과 같다.
$$
\begin{eqnarray}
\text{LL}
&=& \log \prod_{i=1}^N \theta_i(x_i)^{y_i} (1-\theta_i(x_i))^{1-y_i} \
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \
\end{eqnarray}
$$
$\theta$가 로지스틱 함수 형태로 표현된다면
$$
\log \left(\dfrac{\theta(x)}{1-\theta(x)}\right) = w^T x
$$
$$
\theta(x) = \dfrac{1}{1 + \exp{(-w^Tx)}}
$$
가 되고 이를 Log Likelihood 에 적용하면 다음과 같다.
$$
\begin{eqnarray}
\text{LL}
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \
&=& \sum_{i=1}^N \left( y_i \log\left(\dfrac{1}{1 + \exp{(-w^Tx_i)}}\right) - (1-y_i)\log\left(\dfrac{\exp{(-w^Tx_i)}}{1 + \exp{(-w^Tx_i)}}\right) \right) \
\end{eqnarray}
$$
이 값의 최대화하는 값을 구하기 위해 chain rule를 사용하여 $w$로 미분해야 한다.
우선 $\theta$를 $w$로 미분하면
$$ \dfrac{\partial \theta}{\partial w}
= \dfrac{\partial}{\partial w} \dfrac{1}{1 + \exp{(-w^Tx)}} \
= \dfrac{\exp{(-w^Tx)}}{(1 + \exp{(-w^Tx)})^2} x \
= \theta(1-\theta) x $$
chain rule를 적용하면
$$
\begin{eqnarray}
\dfrac{\partial \text{LL}}{\partial w}
&=& \sum_{i=1}^N \left( y_i \dfrac{1}{\theta_i(x_i;w)} - (1-y_i)\dfrac{1}{1-\theta_i(x_i;w)} \right) \dfrac{\partial \theta}{\partial w} \
&=& \sum_{i=1}^N \big( y_i (1-\theta_i(x_i;w)) - (1-y_i)\theta_i(x_i;w) \big) x_i \
&=& \sum_{i=1}^N \big( y_i - \theta_i(x_i;w) \big) x_i \
\end{eqnarray}
$$
이 값은 $w$에 대한 비선형 함수이므로 선형 모형과 같이 간단하게 그레디언트가 0이 되는 모수 $w$ 값에 대한 수식을 구할 수 없으며 수치적인 최적화 방법(numerical optimization)을 통해 최적 모수 $w$의 값을 구해야 한다.
수치적 최적화
단순한 Steepest Gradient 방법을 사용한다면 최적화 알고리즘은 다음과 같다.
그레디언트 벡터는
$$
g_k = \dfrac{d}{dw}(-LL)
$$
이 방향으로 step size $\eta_k$ 만큼 움직이면 다음과 같이 반복적으로 최적 모수값을 구할 수 있다.
$$
\begin{eqnarray}
w_{k+1}
&=& w_{k} - \eta_k g_k \
&=& w_{k} + \eta_k \sum_{i=1}^N \big( y_i - \theta_i(x_i) \big) x_i\
\end{eqnarray}
$$
공식에서 y랑 x와 세타의 곱이랑 뺀 것은 오차
Scikit-Learn 패키지의 로지스틱 회귀
Scikit-Learn 패키지는 로지스틱 회귀 모형 LogisticRegression 를 제공한다.
Step2: statsmodels 패키지의 로지스틱 회귀
statsmodels 패키지는 로지스틱 회귀 모형 Logit 를 제공한다. 사용방법은 OLS 와 동일하다. Scikit-Learn 패키지와 달리 Logit 클래스는 classification 되기 전의 값을 출력한다
Step3: converged
Step4: 예제 1
Step5: 예제 2
Step6: 예제 3 | Python Code:
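Before the library-based examples below, here is a minimal NumPy sketch of the gradient-ascent update derived above, $w_{k+1} = w_k + \eta_k \sum_i (y_i - \theta_i(x_i)) x_i$. The function name, the step size and the averaging over samples are illustrative assumptions, not part of the original notebook; it could be applied to a design matrix that includes a constant column.
import numpy as np

def fit_logistic_gd(X, y, eta=0.1, n_iter=1000):
    # X: (N, D) design matrix (include a column of ones for the intercept)
    # y: (N,) array of 0/1 labels
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        theta = 1.0 / (1.0 + np.exp(-X @ w))      # theta(x_i) for every sample
        grad = X.T @ (y - theta) / len(y)         # gradient of LL, averaged for a stable step
        w = w + eta * grad                        # steepest-ascent step
    return w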
xx = np.linspace(-10, 10, 1000)
plt.plot(xx, (1/(1+np.exp(-xx)))*2-1, label="logistic (scaled)")
plt.plot(xx, sp.special.erf(0.5*np.sqrt(np.pi)*xx), label="erf (scaled)")
plt.plot(xx, np.tanh(xx), label="tanh")
plt.ylim([-1.1, 1.1])
plt.legend(loc=2)
plt.show()
Explanation: Logistic Regression Analysis
Although logistic regression analysis carries the name "regression", it is in fact a kind of classification method.
The logistic regression model assumes that the parameter $\theta$ of a Bernoulli random variable depends on the independent variable $x$.
* In the past the logistic function was favoured because it was cheap to compute, so it worked even on low-spec machines; with today's hardware that is no longer a significant advantage.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
Here the parameter $\theta$ is a real number between 0 and 1, and it is a function that depends on the value of $x$ as follows.
$$
\theta = f(w^Tx)
$$
Sigmoid Functions
Unlike the dependent variable of an ordinary regression analysis, the parameter $\theta$ can only take real values between 0 and 1, so a special type of function $f$ called a sigmoid function must be used.
A sigmoid function maps every real input to a bounded value inside a finite interval $(a,b)$ and has a positive slope; the following functions are commonly used.
Logistic Function
$$ \text{logistic}(z) = \dfrac{1}{1+\exp{(-z)}} $$
Error Function
$$ \text{erf}(z) = \frac{2}{\sqrt\pi}\int_0^z e^{-t^2}\,dt $$
Hyperbolic Tangent
$$ \tanh(z) = \frac{\sinh z}{\cosh z} = \frac {e^z - e^{-z}} {e^z + e^{-z}} $$
Arc-tangent
$$ \arctan(z) = \tan^{-1}(z) $$
End of explanation
from sklearn.datasets import make_classification
X0, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
X = sm.add_constant(X0)
# n_redundant / n_informative: how many informative features are there? See the make_classification manual for more details.
from sklearn.linear_model import LogisticRegression
model = LogisticRegression().fit(X0, y)  # the constant column must not be included when fitting the model? why is that?
xx = np.linspace(-3, 3, 100)
sigm = 1.0/(1+np.exp(-model.coef_[0][0]*xx-model.intercept_[0]))  # the model coefficients go in here, so why the constant term??
plt.plot(xx, sigm)
plt.scatter(X0, y, marker='o', c=y, s=100)
plt.scatter(X0, model.predict(X0), marker='x', c=y, s=200, lw=2, alpha=0.5, cmap=mpl.cm.jet)
plt.xlim(-3, 3)
plt.show()
Explanation: The Logistic Function
Among the various sigmoid functions, the logistic function is used most often because it can be given the following physical interpretation.
In a Bernoulli trial, the ratio of the probability of getting 1, $\theta$, to the probability of getting 0, $1-\theta$, is given by the following expression and is called the odds ratio.
$$ \text{odds ratio} = \dfrac{\theta}{1-\theta} $$
Taking the logarithm of this odds ratio gives the logit function.
$$ z = \text{logit}(\text{odds ratio}) = \log \left(\dfrac{\theta}{1-\theta}\right) $$
The logistic function is the inverse of this logit function.
$$ \text{logistic}(z) = \theta(z) = \dfrac{1}{1+\exp{(-z)}} $$
Parameter Estimation for the Logistic Model
Although the logistic model is a kind of nonlinear regression model, its parameter $w$ can be estimated with Maximum Likelihood Estimation (MLE) as follows.
Here the dependent variable $y$ is assumed to be a Bernoulli random variable.
$$ p(y \mid x, \theta) = \text{Ber} (y \mid \theta(x) )$$
For a data sample ${ x_i, y_i }$, the log likelihood $\text{LL}$ is
$$
\begin{eqnarray}
\text{LL}
&=& \log \prod_{i=1}^N \theta_i(x_i)^{y_i} (1-\theta_i(x_i))^{1-y_i} \
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \
\end{eqnarray}
$$
If $\theta$ is expressed in logistic-function form,
$$
\log \left(\dfrac{\theta(x)}{1-\theta(x)}\right) = w^T x
$$
$$
\theta(x) = \dfrac{1}{1 + \exp{(-w^Tx)}}
$$
and substituting this into the log likelihood gives
$$
\begin{eqnarray}
\text{LL}
&=& \sum_{i=1}^N \left( y_i \log\theta_i(x_i) + (1-y_i)\log(1-\theta_i(x_i)) \right) \
&=& \sum_{i=1}^N \left( y_i \log\left(\dfrac{1}{1 + \exp{(-w^Tx_i)}}\right) + (1-y_i)\log\left(\dfrac{\exp{(-w^Tx_i)}}{1 + \exp{(-w^Tx_i)}}\right) \right) \
\end{eqnarray}
$$
To maximize this value we differentiate it with respect to $w$ using the chain rule.
First, differentiating $\theta$ with respect to $w$ gives
$$ \dfrac{\partial \theta}{\partial w}
= \dfrac{\partial}{\partial w} \dfrac{1}{1 + \exp{(-w^Tx)}} \
= \dfrac{\exp{(-w^Tx)}}{(1 + \exp{(-w^Tx)})^2} x \
= \theta(1-\theta) x $$
Applying the chain rule,
$$
\begin{eqnarray}
\dfrac{\partial \text{LL}}{\partial w}
&=& \sum_{i=1}^N \left( y_i \dfrac{1}{\theta_i(x_i;w)} - (1-y_i)\dfrac{1}{1-\theta_i(x_i;w)} \right) \dfrac{\partial \theta}{\partial w} \
&=& \sum_{i=1}^N \big( y_i (1-\theta_i(x_i;w)) - (1-y_i)\theta_i(x_i;w) \big) x_i \
&=& \sum_{i=1}^N \big( y_i - \theta_i(x_i;w) \big) x_i \
\end{eqnarray}
$$
Since this is a nonlinear function of $w$, there is no closed-form expression for the parameter $w$ at which the gradient vanishes, as there is for a linear model; the optimal $w$ must instead be found by numerical optimization.
Numerical Optimization
With a simple steepest-gradient method, the optimization algorithm is as follows.
The gradient vector is
$$
g_k = \dfrac{d}{dw}(-LL)
$$
and moving in this direction by a step size $\eta_k$ lets us find the optimal parameter value iteratively:
$$
\begin{eqnarray}
w_{k+1}
&=& w_{k} - \eta_k g_k \
&=& w_{k} + \eta_k \sum_{i=1}^N \big( y_i - \theta_i(x_i) \big) x_i\
\end{eqnarray}
$$
In this formula, the term $y_i - \theta_i(x_i)$ is simply the prediction error.
Logistic Regression in the Scikit-Learn Package
The Scikit-Learn package provides the logistic regression model LogisticRegression.
End of explanation
logit_mod = sm.Logit(y, X)
logit_res = logit_mod.fit(disp=0)  # disp=0 silences the output about the iterative gradient search used to find the solution
print(logit_res.summary())
Explanation: Logistic Regression in the statsmodels Package
The statsmodels package provides the logistic regression model Logit. Its usage is the same as OLS. Unlike the Scikit-Learn package, the Logit class outputs the value before classification is applied.
End of explanation
xx = np.linspace(-3, 3, 100)
sigmoid = logit_res.predict(sm.add_constant(xx))
plt.plot(xx, sigmoid, lw=5, alpha=0.5)
plt.scatter(X0, y, marker='o', c=y, s=100)
plt.scatter(X0, logit_res.predict(X), marker='x', c=y, s=200, lw=2, alpha=0.5, cmap=mpl.cm.jet)
plt.xlim(-3, 3)
plt.show()
Explanation: converged: True
This must be True for the whole result to be meaningful. If it is not True, the fit has to be made to converge.
End of explanation
df = pd.read_table("MichelinFood.txt")
df
df.plot(kind="scatter", x="Food", y="proportion", s=100)
plt.show()
X = sm.add_constant(df.Food)
y = df.proportion
model = sm.Logit(y, X)
result = model.fit()
print(result.summary())
df.plot(kind="scatter", x="Food", y="proportion", s=50, alpha=0.5)
xx = np.linspace(10, 35, 100)
plt.plot(xx, result.predict(sm.add_constant(xx)), "r", lw=4)
plt.xlim(10, 35)
plt.show()
Explanation: Example 1: Comparing the Michelin and Zagat guides
The following data were taken from two guidebooks covering restaurants in New York City.
Food: customer rating score from the Zagat Survey 2006
InMichelin: the number of restaurants with that rating that are listed in the 2006 Michelin Guide New York City
NotInMichelin: the number of restaurants with that rating that are not listed in the 2006 Michelin Guide New York City
mi: the number of restaurants with that rating
proportion: the proportion of restaurants with that rating that are listed in the 2006 Michelin Guide New York City
End of explanation
import sys
print(sys.getdefaultencoding())
print(sys.stdin.encoding)
print(sys.stdout.encoding)
import locale
print(locale.getpreferredencoding())
df = pd.read_csv("MichelinNY.csv", encoding = "ISO-8859-1")
df.tail()
sns.stripplot(x="Food", y="InMichelin", data=df, jitter=True, orient='h', order=[1, 0])
plt.grid(True)
plt.show()
X = sm.add_constant(df.Food)
y = df.InMichelin
model = sm.Logit(y, X)
result = model.fit()
print(result.summary())
xx = np.linspace(10, 35, 100)
pred = result.predict(sm.add_constant(xx))
decision_value = xx[np.argmax(pred > 0.5)]
print(decision_value)
plt.plot(xx, pred, "r", lw=4)
plt.axvline(decision_value)
plt.xlim(10, 35)
plt.show()
Explanation: Example 2: Predicting Michelin guide inclusion
The following data show the customer rating scores of individual restaurants in New York City and whether each one is listed in the Michelin guide.
InMichelin: whether the restaurant is listed in the Michelin guide
Restaurant Name: name of the restaurant
Food: customer rating of the food (1~30)
Decor: customer rating of the interior decor (1~30)
Service: customer rating of the service (1~30)
Price: price of dinner (US$)
End of explanation
print(sm.datasets.fair.SOURCE)
print(sm.datasets.fair.NOTE)
df = sm.datasets.fair.load_pandas().data
df.head()
sns.factorplot(x="affairs", y="children", row="yrs_married", data=df,
orient="h", size=2, aspect=5, kind='box')
plt.show()
df['affair'] = (df['affairs'] > 0).astype(float)
model = sm.formula.logit("affair ~ rate_marriage + religious + yrs_married + age + educ + children", df).fit()
print(model.summary())
Explanation: Example 3: Fair's Affair Dataset
End of explanation |
11,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Bipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.
Bipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.
Let's load a crime data bipartite graph and quickly explore it.
This bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.
Step1: Projections
Bipartite graphs can be projected down to one of the projections. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.
Exercise
Find the bipartite projection function in the NetworkX bipartite module docs, and use it to obtain the unipartite projection of the bipartite graph.
Step2: Exercise
Try visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections.
Step3: Exercise
Use a similar logic to extract crime links.
Step4: Exercise
Can you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections.
Step5: Exercise
NetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis.
Try the following challenges, referring to the API documentation to help you | Python Code:
G = cf.load_crime_network()
G.edges(data=True)[0:5]
G.nodes(data=True)[0:10]
Explanation: Introduction
Bipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.
Bipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.
Let's load a crime data bipartite graph and quickly explore it.
This bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.
End of explanation
person_nodes =
pG =
pG.nodes(data=True)[0:5]
Explanation: Projections
Bipartite graphs can be projected down to one of the projections. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.
Exercise
Find the bipartite projection function in the NetworkX bipartite module docs, and use it to obtain the unipartite projection of the bipartite graph.
End of explanation
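One possible way to fill in the blanks above (a sketch, assuming the person nodes are marked with a 'bipartite' attribute whose value is 'person'; the exact attribute name and value depend on how load_crime_network labels the partitions):
from networkx.algorithms import bipartite

# Collect the nodes belonging to the "person" partition
person_nodes = [n for n, d in G.nodes(data=True) if d.get('bipartite') == 'person']

# Project the bipartite graph onto the person partition
pG = bipartite.projected_graph(G, person_nodes)
pG.nodes(data=True)[0:5]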
nodes = sorted(____, key=lambda x: (____________, ___________))
edges = pG.edges()
edgeprops = dict(alpha=0.1)
node_cmap = {0:'blue', 1:'red'}
nodecolor = [__________________ for n in nodes]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
c.fig.savefig('images/crime-person.png', dpi=300)
Explanation: Exercise
Try visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections.
End of explanation
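A possible way to fill in the sorting and colouring blanks, assuming each person node carries a 'gender' attribute encoded as 0/1 — both the attribute name and its encoding are assumptions here:
# Group nodes by gender, then by degree (number of connections)
nodes = sorted(pG.nodes(), key=lambda x: (pG.node[x]['gender'], pG.degree(x)))
edges = pG.edges()
edgeprops = dict(alpha=0.1)
node_cmap = {0: 'blue', 1: 'red'}
nodecolor = [node_cmap[pG.node[n]['gender']] for n in nodes]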
crime_nodes = _________
cG = _____________ # cG stands for "crime graph"
Explanation: Exercise
Use a similar logic to extract crime links.
End of explanation
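A possible completion, mirroring the person projection above (again assuming a 'bipartite' attribute with value 'crime' marks the crime partition):
crime_nodes = [n for n, d in G.nodes(data=True) if d.get('bipartite') == 'crime']
cG = bipartite.projected_graph(G, crime_nodes)  # cG stands for "crime graph"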
nodes = sorted(___________, key=lambda x: __________)
edges = cG.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/crime-crime.png', dpi=300)
Explanation: Exercise
Can you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections.
End of explanation
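A possible way to fill in the ordering blank above, sorting the crime nodes by their number of connections:
# Order crime nodes by degree so the Circos plot is sorted by connectivity
nodes = sorted(cG.nodes(), key=lambda x: cG.degree(x))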
# Degree Centrality
bpdc = _______________________
sorted(___________, key=lambda x: ___, reverse=True)
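A possible completion of the blanks above using the bipartite centrality API, which takes the original graph plus one node partition (person_nodes here comes from the hedged projection sketch earlier):
from networkx.algorithms import bipartite

# Degree centrality computed directly on the bipartite graph
bpdc = bipartite.degree_centrality(G, person_nodes)
sorted(bpdc.items(), key=lambda x: x[1], reverse=True)[0:10]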
Explanation: Exercise
NetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis.
Try the following challenges, referring to the API documentation to help you:
Which crimes have the most number of people involved?
Which people are involved in the most number of crimes?
End of explanation |
11,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bubble Sort
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
Complexity
Time
Worst-case
Step1: work
Step2: Code2 - ChartTracer
Step3: work | Python Code:
def bubble_sort(unsorted_list):
x = ipytracer.List1DTracer(unsorted_list)
display(x)
length = len(x)-1
for i in range(length):
for j in range(length-i):
if x[j] > x[j+1]:
x[j], x[j+1] = x[j+1], x[j]
return x.data
Explanation: Bubble Sort
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
Complexity
Time
Worst-case: $O(n^2)$
Bast-case: $O(n)$
Average: $O(n^2)$
Reference
Wikipedea
Code1 - List1DTracer
End of explanation
bubble_sort([6,4,7,9,3,5,1,8,2])
Explanation: work
End of explanation
def bubble_sort(unsorted_list):
x = ipytracer.ChartTracer(unsorted_list)
display(x)
length = len(x)-1
for i in range(length):
for j in range(length-i):
if x[j] > x[j+1]:
x[j], x[j+1] = x[j+1], x[j]
return x.data
Explanation: Code2 - ChartTracer
End of explanation
bubble_sort([6,4,7,9,3,5,1,8,2])
Explanation: work
End of explanation |
11,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Algebra in NumPy
Unit 9, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, March 30, 2020
Step1: Working with Matrices in Numpy
We saw earlier in the class how to create numpy matrices. Let's review that and learn about explicit initialization
Explicit Initialization
You can explitily set the values in your matrix by first creating a list and then converting it into a numpy array
Step2: You can use multiple lines in python to specify your list. This can make the formatting cleaner
Step3: Create and Set
You can also create an array and then set the elements.
Step4: Linear Algebra
The linear algebra routines for python are in the numpy.linalg library. See here
Matrix Multiplication
Matrix multiplication is done with the dot method. Let's compare that with *
Step5: So, dot correctly gives us a 2x1 matrix as expected for the two shapes
Using the special @ character
Step6: The element-by-element multiplication, *, doesn't work on different sized arrays.
Step7: Method vs Function
Instead of using dot as a method (it comes after a .), you can use the dot function as well. Let's see an example
Step8: Matrix Rank
The rank of a matrix can be found with singular value decomposition. In numpy, we can do this simply with a call to linalg.matrix_rank
Step9: Matrix Inverse
The inverse of a matrix can be found using the linalg.inverse command. Consider the following system of equations
Step10: Computation cost for inverse
Computing a matrix inverse can be VERY expensive for large matrices. Do not exceed about 500 x 500 matrices
Eigenvectors/Eigenvalues
Before trying to understand what an eigenvector is, let's try to understand their analogue, a stationary point.
A stationary point of a function $f(x)$ is an $x$ such that
Step11: Eigenvectors/Eigenvalues
Matrices are analogues of functions. They take in a vector and return a vector.
$$\mathbf{A}\mathbf{x} = \mathbf{y}$$
Just like stationary points, there is sometimes a special vector which has this property
Step12: So that means $v_1 = [0.7, 0.7]$ and $v_2 = [-0.7, 0.7]$. Let's find out
Step13: Yes, that is the same direction! And notice that it's 4 times as much as the input vector, which is what the eigenvalue is telling us.
A random matrix will almost never be Hermitian, so look out for complex numbers. In engineering, your matrices commonly be Hermitian. | Python Code:
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi, erf
import scipy.stats
import numpy.linalg
Explanation: Linear Algebra in NumPy
Unit 9, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, March 30, 2020
End of explanation
matrix = [ [4,3], [6, 2] ]
print('As Python list:')
print(matrix)
np_matrix = np.array(matrix)
print('The shape of the array:', np.shape(np_matrix))
print('The numpy matrix/array:')
print(np_matrix)
Explanation: Working with Matrices in Numpy
We saw earlier in the class how to create numpy matrices. Let's review that and learn about explicit initialization
Explicit Initialization
You can explicitly set the values in your matrix by first creating a list and then converting it into a numpy array
End of explanation
np_matrix_2 = np.array([
[ 4, 3],
[ 1, 2],
[-1, 4],
[ 4, 2]
])
print(np_matrix_2)
Explanation: You can use multiple lines in python to specify your list. This can make the formatting cleaner
End of explanation
np_matrix_3 = np.zeros( (2, 10) )
print(np_matrix_3)
np_matrix_3[:, 1] = 2
print(np_matrix_3)
np_matrix_3[0, :] = -1
print(np_matrix_3)
np_matrix_3[1, 6] = 43
print(np_matrix_3)
rows, columns = np.shape(np_matrix_3) #get the number of rows and columns
for i in range(columns): #Do a for loop over columns
np_matrix_3[1, i] = i ** 2 #Set the value of the 2nd row, ith column to be i^2
print(np_matrix_3)
Explanation: Create and Set
You can also create an array and then set the elements.
End of explanation
np_matrix_1 = np.random.random( (2, 4) ) #create a random 2 x 4 array
np_matrix_2 = np.random.random( (4, 1) ) #create a random 4 x 1 array
print(np_matrix_1.dot(np_matrix_2))
Explanation: Linear Algebra
The linear algebra routines for python are in the numpy.linalg library. See here
Matrix Multiplication
Matrix multiplication is done with the dot method. Let's compare that with *
End of explanation
print(np_matrix_1 @ np_matrix_2)
Explanation: So, dot correctly gives us a 2x1 matrix as expected for the two shapes
Using the special @ character:
End of explanation
print(np_matrix_1 * np_matrix_2)
Explanation: The element-by-element multiplication, *, doesn't work on different sized arrays.
End of explanation
print(np_matrix_1.dot(np_matrix_2))
print(np.dot(np_matrix_1, np_matrix_2))
Explanation: Method vs Function
Instead of using dot as a method (it comes after a .), you can use the dot function as well. Let's see an example:
End of explanation
import numpy.linalg as linalg
matrix = [ [1, 0], [0, 0] ]
np_matrix = np.array(matrix)
print(linalg.matrix_rank(np_matrix))
Explanation: Matrix Rank
The rank of a matrix can be found with singular value decomposition. In numpy, we can do this simply with a call to linalg.matrix_rank
End of explanation
#Enter the data as lists
a_matrix = [[3, 2, 1],
[2,-1,0],
[1,1,-2]]
b_matrix = [5, 4, 12]
#convert them to numpy arrays/matrices
np_a_matrix = np.array(a_matrix)
np_b_matrix = np.array(b_matrix).transpose()
#Solve the problem
np_a_inv = linalg.inv(np_a_matrix)
np_x_matrix = np_a_inv @ np_b_matrix
#print the solution
print(np_x_matrix)
#check to make sure the answer works
print(np_a_matrix @ np_x_matrix)
Explanation: Matrix Inverse
The inverse of a matrix can be found using the linalg.inv command. Consider the following system of equations:
$$\begin{array}{lr}
3 x + 2 y + z & = 5\
2 x - y & = 4 \
x + y - 2z & = 12 \
\end{array}$$
We can encode it as a matrix equation:
$$\left[\begin{array}{lcr}
3 & 2 & 1\
2 & -1 & 0\
1 & 1 & -2\
\end{array}\right]
\left[\begin{array}{l}
x\
y\
z\
\end{array}\right]
=
\left[\begin{array}{l}
5\
4\
12\
\end{array}\right]$$
$$\mathbf{A}\mathbf{x} = \mathbf{b}$$
$$\mathbf{A}^{-1}\mathbf{b} = \mathbf{x}$$
End of explanation
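In practice the system above can also be solved without forming the inverse explicitly; np.linalg.solve is the usual, cheaper route. A small sketch reusing the matrices already defined above:
# Solve A x = b directly instead of computing the inverse
np_x_direct = np.linalg.solve(np_a_matrix, np_b_matrix)
print(np_x_direct)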
x = 1
for i in range(10):
x = x - (x**2 - 612) / (2 * x)
print(i, x)
Explanation: Computation cost for inverse
Computing a matrix inverse can be VERY expensive for large matrices. Do not exceed about 500 x 500 matrices
Eigenvectors/Eigenvalues
Before trying to understand what an eigenvector is, let's try to understand their analogue, a stationary point.
A stationary point of a function $f(x)$ is an $x$ such that:
$$x = f(x)$$
Consider this function:
$$f(x) = x - \frac{x^2 - 612}{2x}$$
If we found a stationary point, that would be mean that
$$x = x - \frac{x^2 - 612}{2x} $$
or
$$ x^2 = 612 $$
More generally, you can find a square root of $A$ by finding a stationary point to:
$$f(x) = x - \frac{x^2 - A}{2x} $$
In this case, you can find the stationary point by just doing $x_{i+1} = f(x_i)$ until you are stationary
End of explanation
A = np.array([[3,1], [1,3]])
e_values, e_vectors = np.linalg.eig(A)
print(e_vectors)
print(e_values)
Explanation: Eigenvectors/Eigenvalues
Matrices are analogues of functions. They take in a vector and return a vector.
$$\mathbf{A}\mathbf{x} = \mathbf{y}$$
Just like stationary points, there is sometimes a special vector which has this property:
$$\mathbf{A}\mathbf{x} = \mathbf{x}$$
Such a vector is called an eigenvector. It turns out such vectors are rarely always exists. If we instead allow a scalar, we can find a whole bunch like this:
$$\mathbf{A}\mathbf{v} =\lambda\mathbf{v}$$
These are like the stationary points above, except we are getting back our input times a constant. That means it's a particular direction that is unchanged, not the value.
Finding Eigenvectors/Eigenvalues
Eigenvalues/eigenvectors can be found easily as well in python, including for complex numbers and sparse matrices. The command linalg.eigh will return only the real eigenvalues/eigenvectors. That assumes your matrix is Hermitian, meaning it is symmetric (if your matrix is real numbers). Use eig to get general possibly complex eigenvalues Here's an easy example:
Let's consider this matrix:
$$
A = \left[\begin{array}{lr}
3 & 1\
1 & 3\
\end{array}\right]$$
Imagine it as a geometry operator. It takes in a 2D vector and morphs it into another 2D vector.
$$\vec{x} = [1, 0]$$
$$A \,\vec{x}^T = [3, 1]^T$$
Now is there a particular direction where $\mathbf{A}$ cannot affect it?
End of explanation
v1 = e_vectors[:,0]
v2 = e_vectors[:,1]
A @ v1
Explanation: So that means $v_1 = [0.7, 0.7]$ and $v_2 = [-0.7, 0.7]$. Let's find out:
End of explanation
A = np.random.normal(size=(3,3))
e_values, e_vectors = linalg.eig(A)
print(e_values)
print(e_vectors)
Explanation: Yes, that is the same direction! And notice that it's 4 times as much as the input vector, which is what the eigenvalue is telling us.
A random matrix will almost never be Hermitian, so look out for complex numbers. In engineering, your matrices will commonly be Hermitian.
End of explanation |
11,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
New to Plotly?
Plotly's Python library is free and open source! Get started by downloading the client and reading the primer.
<br>You can set up Plotly to work in online or offline mode, or in jupyter notebooks.
<br>We also have a quick-reference cheatsheet (new!) to help you get started!
Version Check
Note
Step1: Basic Histogram
Step2: Normalized Histogram
Step3: Horizontal Histogram
Step4: Overlaid Histogram
Step5: Stacked Histograms ###
Step6: Colored and Styled Histograms | Python Code:
import plotly
plotly.__version__
Explanation: New to Plotly?
Plotly's Python library is free and open source! Get started by downloading the client and reading the primer.
<br>You can set up Plotly to work in online or offline mode, or in jupyter notebooks.
<br>We also have a quick-reference cheatsheet (new!) to help you get started!
Version Check
Note: Histograms are available in version <b>1.9.12+</b><br>
Run pip install plotly --upgrade to update your Plotly version
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [
go.Histogram(
x=x
)
]
py.iplot(data)
Explanation: Basic Histogram
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [
go.Histogram(
x=x,
histnorm='probability'
)
]
py.iplot(data)
Explanation: Normalized Histogram
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
y = np.random.randn(500)
data = [
go.Histogram(
y=y
)
]
py.iplot(data)
Explanation: Horizontal Histogram
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
barmode='overlay'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Explanation: Overlaid Histogram
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0
)
trace2 = go.Histogram(
x=x1
)
data = [trace1, trace2]
layout = go.Layout(
barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Explanation: Stacked Histograms ###
End of explanation
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
histnorm='count',
name='control',
autobinx=False,
xbins=dict(
start=-3.2,
end=2.8,
size=0.2
),
marker=dict(
color='fuchsia',
line=dict(
color='grey',
width=0
)
),
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
name='experimental',
autobinx=False,
xbins=dict(
start=-1.8,
end=4.2,
size=0.2
),
marker=dict(
color='rgb(255, 217, 102)'
),
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
barmode='overlay',
bargap=0.25,
bargroupgap=0.3
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
Explanation: Colored and Styled Histograms
End of explanation |
11,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. This is a Jupyter notebook!
<p>A <em>Jupyter notebook</em> is a document that contains text cells (what you're reading right now) and code cells. What is special with a notebook is that it's <em>interactive</em>
Step1: 2. Put any code in code cells
<p>But a code cell can contain much more than a simple one-liner! This is a notebook running python and you can put <em>any</em> python code in a code cell (but notebooks can run other languages too, like R). Below is a code cell where we define a whole new function (<code>greet</code>). To show the output of <code>greet</code> we run it last in the code cell as the last value is always printed out. </p>
Step2: 3. Jupyter notebooks ♡ data
<p>We've seen that notebooks can display basic objects such as numbers and strings. But notebooks also support the objects used in data science, which makes them great for interactive data analysis!</p>
<p>For example, below we create a <code>pandas</code> DataFrame by reading in a <code>csv</code>-file with the average global temperature for the years 1850 to 2016. If we look at the <code>head</code> of this DataFrame the notebook will render it as a nice-looking table.</p>
Step3: 4. Jupyter notebooks ♡ plots
<p>Tables are nice but — as the saying goes — <em>"a plot can show a thousand data points"</em>. Notebooks handle plots as well, but it requires a bit of magic. Here <em>magic</em> does not refer to any arcane rituals but to so-called "magic commands" that affect how the Jupyter notebook works. Magic commands start with either <code>%</code> or <code>%%</code> and the command we need to nicely display plots inline is <code>%matplotlib inline</code>. With this <em>magic</em> in place, all plots created in code cells will automatically be displayed inline. </p>
<p>Let's take a look at the global temperature for the last 150 years.</p>
Step4: 5. Jupyter notebooks ♡ a lot more
<p>Tables and plots are the most common outputs when doing data analysis, but Jupyter notebooks can render many more types of outputs such as sound, animation, video, etc. Yes, almost anything that can be shown in a modern web browser. This also makes it possible to include <em>interactive widgets</em> directly in the notebook!</p>
<p>For example, this (slightly complicated) code will create an interactive map showing the locations of the three largest smartphone companies in 2016. You can move and zoom the map, and you can click the markers for more info! </p>
Step5: 6. Goodbye for now!
<p>This was just a short introduction to Jupyter notebooks, an open source technology that is increasingly used for data science and analysis. I hope you enjoyed it! | Python Code:
# I'm a code cell, click me, then run me!
256 * 60 * 24 # Children × minutes × hours
Explanation: 1. This is a Jupyter notebook!
<p>A <em>Jupyter notebook</em> is a document that contains text cells (what you're reading right now) and code cells. What is special with a notebook is that it's <em>interactive</em>: You can change or add code cells, and then <em>run</em> a cell by first selecting it and then clicking the <em>run cell</em> button above ( <strong>▶|</strong> Run ) or hitting <code>ctrl + enter</code>. </p>
<p><img src="https://s3.amazonaws.com/assets.datacamp.com/production/project_33/datasets/run_code_cell_image.png" alt=""></p>
<p>The result will be displayed directly in the notebook. You <em>could</em> use a notebook as a simple calculator. For example, it's estimated that on average 256 children were born every minute in 2016. The code cell below calculates how many children were born on average on a day. </p>
End of explanation
def greet(first_name, last_name):
greeting = 'My name is ' + last_name + ', ' + first_name + ' ' + last_name + '!'
return greeting
# Replace with your first and last name.
# That is, unless your name is already James Bond.
greet('Mohan', 'Prasath')
Explanation: 2. Put any code in code cells
<p>But a code cell can contain much more than a simple one-liner! This is a notebook running python and you can put <em>any</em> python code in a code cell (but notebooks can run other languages too, like R). Below is a code cell where we define a whole new function (<code>greet</code>). To show the output of <code>greet</code> we run it last in the code cell as the last value is always printed out. </p>
End of explanation
# Importing the pandas module
import pandas as pd
# Reading in the global temperature data
global_temp = pd.read_csv('datasets/global_temperature.csv')
# Take a look at the first datapoints
global_temp.head()
Explanation: 3. Jupyter notebooks ♡ data
<p>We've seen that notebooks can display basic objects such as numbers and strings. But notebooks also support the objects used in data science, which makes them great for interactive data analysis!</p>
<p>For example, below we create a <code>pandas</code> DataFrame by reading in a <code>csv</code>-file with the average global temperature for the years 1850 to 2016. If we look at the <code>head</code> of this DataFrame the notebook will render it as a nice-looking table.</p>
End of explanation
# Setting up inline plotting using jupyter notebook "magic"
%matplotlib inline
import matplotlib.pyplot as plt
# Plotting global temperature in degrees celsius by year
plt.plot(global_temp['year'], global_temp['degrees_celsius'])
# Adding some nice labels
plt.xlabel('year')
plt.ylabel('degree_celsius')
plt.show()
Explanation: 4. Jupyter notebooks ♡ plots
<p>Tables are nice but — as the saying goes — <em>"a plot can show a thousand data points"</em>. Notebooks handle plots as well, but it requires a bit of magic. Here <em>magic</em> does not refer to any arcane rituals but to so-called "magic commands" that affect how the Jupyter notebook works. Magic commands start with either <code>%</code> or <code>%%</code> and the command we need to nicely display plots inline is <code>%matplotlib inline</code>. With this <em>magic</em> in place, all plots created in code cells will automatically be displayed inline. </p>
<p>Let's take a look at the global temperature for the last 150 years.</p>
End of explanation
# Making a map using the folium module
import folium
phone_map = folium.Map()
# Top three smart phone companies by market share in 2016
companies = [
{'loc': [37.4970, 127.0266], 'label': 'Samsung: ...%'},
{'loc': [37.3318, -122.0311], 'label': 'Apple: ...%'},
{'loc': [22.5431, 114.0579], 'label': 'Huawei: ...%'}]
# Adding markers to the map
for company in companies:
marker = folium.Marker(location=company['loc'], popup=company['label'])
marker.add_to(phone_map)
# The last object in the cell always gets shown in the notebook
phone_map
Explanation: 5. Jupyter notebooks ♡ a lot more
<p>Tables and plots are the most common outputs when doing data analysis, but Jupyter notebooks can render many more types of outputs such as sound, animation, video, etc. Yes, almost anything that can be shown in a modern web browser. This also makes it possible to include <em>interactive widgets</em> directly in the notebook!</p>
<p>For example, this (slightly complicated) code will create an interactive map showing the locations of the three largest smartphone companies in 2016. You can move and zoom the map, and you can click the markers for more info! </p>
End of explanation
# Are you ready to get started with DataCamp projects?
I_am_ready = True
# Ps.
# Feel free to try out any other stuff in this notebook.
# It's all yours!
Explanation: 6. Goodbye for now!
<p>This was just a short introduction to Jupyter notebooks, an open source technology that is increasingly used for data science and analysis. I hope you enjoyed it! :)</p>
End of explanation |
11,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dealing with spectrum data
This tutorial demonstrates how to use Spectrum class to do various arithmetic operations of Spectrum. This demo uses the Jsc calculation as an example, namely
\begin{equation}
J_{sc}=\int \phi(E)QE(E) dE
\end{equation}
where $\phi$ is the illumination spectrum in photon flux, $E$ is the photon energy and $QE$ is the quantum efficiency.
Step1: Quantum efficiency
We first use a function gen_step_qe_array to generate a quantum efficiency spectrum. This spectrum is a step function with a cut-off at the band gap of 1.42 eV.
Step2: qe is a numpy array. The recommeneded way to handle it is converting it to Spectrum class
Step3: Unit conversion
When we want to retrieve the value of qe_sp we have to specicify the unit of the wavelength. For example, say, converting the wavelength to nanometer
Step4: Arithmetic operation
We can do arithmetic operation directly with Spectrum class such as
Step5: Illumination spectrum
pypvcell has a class Illumination that is inherited from Spectrum to handle the illumination. It inherits all the capability of Spectrum but has several methods specifically for sun illumination.
Some default standard spectrum is embedded in the pypvcell
Step6: Show the values of the data
Step7: Calcuate the total intensity in W/m^2
Step8: Unit conversion of illumination spectrum
It requires a bit of attention when converting a spectrum that is in the form of $\phi(E)dE$, i.e., where the value of the integration is a meaningful quantity such as total power. This is also handled by the Illumination class. In the following case, we convert the wavelength to eV. Please note that the units of intensity are also changed to W/m^2-eV.
Step9: Spectrum multiplication
To calculate the overall photocurrent, we have to compute $\phi(E)QE(E) dE$ first. This involves some unit conversion and interpolation between the two spectra. However, this is easily handled by the Spectrum class
Step10: Here's a more delicate point. We should convert the unit to photon flux in order to calculate Jsc.
Step11: Integrate it yields the total photocurrent density in A/m^2
Step12: In fact, pypvcell already provides a function calc_jsc() for calculating Jsc from given spectrum and QE | Python Code:
%matplotlib inline
import numpy as np
import scipy.constants as sc
import matplotlib.pyplot as plt
from pypvcell.spectrum import Spectrum
from pypvcell.illumination import Illumination
from pypvcell.photocurrent import gen_step_qe_array
Explanation: Dealing with spectrum data
This tutorial demonstrates how to use Spectrum class to do various arithmetic operations of Spectrum. This demo uses the Jsc calculation as an example, namely
\begin{equation}
J_{sc}=\int \phi(E)QE(E) dE
\end{equation}
where $\phi$ is the illumination spectrum in photon flux, $E$ is the photon energy and $QE$ is the quantum efficiency.
End of explanation
qe=gen_step_qe_array(1.42,0.9)
plt.plot(qe[:,0],qe[:,1])
plt.xlabel('photon energy (eV)')
plt.ylabel('QE')
Explanation: Quantum efficiency
We first use a function gen_step_qe_array to generate a quantum efficiency spectrum. This spectrum is a step function with a cut-off at the band gap of 1.42 eV.
End of explanation
qe_sp=Spectrum(x_data=qe[:,0],y_data=qe[:,1],x_unit='eV')
Explanation: qe is a numpy array. The recommeneded way to handle it is converting it to Spectrum class:
End of explanation
qe=qe_sp.get_spectrum(to_x_unit='nm')
plt.plot(qe[0,:],qe[1,:])
plt.xlabel('wavelength (nm)')
plt.ylabel('QE')
plt.xlim([300,1100])
Explanation: Unit conversion
When we want to retrieve the value of qe_sp we have to specicify the unit of the wavelength. For example, say, converting the wavelength to nanometer:
End of explanation
# Calulate the portion of "non-absorbed" photons, assuming QE is equivalent to absorptivity
tr_sp=1-qe_sp
tr=tr_sp.get_spectrum(to_x_unit='nm')
plt.plot(tr[0,:],tr[1,:])
plt.xlabel('wavelength (nm)')
plt.ylabel('QE')
plt.xlim([300,1100])
Explanation: Arithmetic operation
We can do arithmetic operation directly with Spectrum class such as
End of explanation
std_ill=Illumination("AM1.5g")
Explanation: Illumination spectrum
pypvcell has a class Illumination that is inherited from Spectrum to handle the illumination. It inherits all the capability of Spectrum but has several methods specifically for sun illumination.
Some default standard spectrum is embedded in the pypvcell:
End of explanation
ill=std_ill.get_spectrum('nm')
plt.plot(*ill)
plt.xlabel("wavelength (nm)")
plt.ylabel("intensity (W/m^2-nm)")
fig, ax1= plt.subplots()
ax1.plot(*ill)
ax2 = ax1.twinx()
ax2.plot(*qe)
ax1.set_xlim([400,1600])
ax2.set_ylabel('QE', color='r')
ax2.tick_params('y', colors='r')
ill[:,-1]
qe[:,-1]
Explanation: Show the values of the data
End of explanation
std_ill.total_power()
Explanation: Calculate the total intensity in W/m^2
End of explanation
ill=std_ill.get_spectrum('eV')
plt.plot(*ill)
plt.xlabel("wavelength (eV)")
plt.ylabel("intensity (W/m^2-eV)")
Explanation: Unit conversion of illumination spectrum
It requires a bit of attention when converting a spectrum that is in the form of $\phi(E)dE$, i.e., where the value of the integration is a meaningful quantity such as total power. This is also handled by the Illumination class. In the following case, we convert the wavelength to eV. Please note that the units of intensity are also changed to W/m^2-eV.
End of explanation
# calculate \phi(E)QE(E) dE.
# Spectrum class automatically convert the units and align the x-data by interpolating std_ill
jsc_e=std_ill*qe_sp
Explanation: Spectrum multiplication
To calculate the overall photocurrent, we have to compute $\phi(E)QE(E) dE$ first. This involves some unit conversion and interpolation between the two spectra. However, this is easily handled by the Spectrum class
End of explanation
jsc_e_a=jsc_e.get_spectrum('nm',to_photon_flux=True)
plt.plot(*jsc_e_a)
plt.xlim([300,1100])
Explanation: Here's a more delicate point. We should convert the unit to photon flux in order to calculate Jsc.
End of explanation
sc.e*np.trapz(y=jsc_e_a[1,:],x=jsc_e_a[0,:])
Explanation: Integrate it yields the total photocurrent density in A/m^2
End of explanation
from pypvcell.photocurrent import calc_jsc
calc_jsc(std_ill,qe_sp)
Explanation: In fact, pypvcell already provides a function calc_jsc() for calculating Jsc from given spectrum and QE:
End of explanation |
11,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19.000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step6: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
Step8: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step9: Problem 3
Another check
Step10: There are only minor gaps, so the classes are well balanced.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step11: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step12: Problem 4
Convince yourself that the data is still good after shuffling!
To be sure that the data are still fine after the merger and the randomization, I will select one item and display the image alongside the label. Note
Step13: Finally, let's save the data for later reuse
Step14: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step15: The display_overlap function above display one of the duplicate, the first element is from the first dataset, and the next ones are from the dataset used for the comparison.
Now that exact duplicates have been found, let's look for near duplicates. How to define near identical images? That's a tricky question. My first thought has been to use the allclose numpy matrix comparison. This is too restrictive, since two images can vary by one pyxel, and still be very similar even if the variation on the pyxel is large. A better solution involves some kind of average.
To keep it simple and still relevant, I will use a Manhattan norm (the sum of absolute values) of the difference matrix. Since the images of the dataset all have the same size, I will not normalize the norm value. Note that this is a pixel-by-pixel comparison, so it will not scale to the whole dataset, but it helps to understand image similarities.
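A minimal sketch of that pixel-by-pixel Manhattan distance between two images (the helper name and the commented threshold are illustrative assumptions, not values from the notebook):
import numpy as np

def manhattan_distance(img_a, img_b):
    # Sum of absolute pixel differences between two 28x28 images
    return np.sum(np.abs(img_a - img_b))

# Example usage on one of the letter datasets loaded above:
# d = manhattan_distance(dataset[0], dataset[1])
# near_duplicate = d < 1.0   # threshold chosen for illustration only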
Step16: The techniques above work well, but the performance is very low and the methods are poorly scalable to the full dataset. Let's try to improve the performance. Let's take some reference times on a small dataset.
Here are some ideas
Step17: It is a faster, and only one duplicate from the second dataset is displayed. This is still not scalable.
Step18: The built-in numpy function provides some improvement either, but this algorithm is still not scalable to the dataset to its full extend.
To make it work at scale, the best option is to use a hash function. To find exact duplicates, the hash functions used for the cryptography will work just fine.
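A sketch of that hashing approach for exact duplicates, using hashlib on the raw image bytes (the function names are illustrative assumptions):
import hashlib
import numpy as np

def image_hashes(dataset):
    # One SHA-1 digest per 28x28 image, computed from its raw bytes
    return np.array([hashlib.sha1(img).hexdigest() for img in dataset])

def count_overlap(dataset_1, dataset_2):
    hashes_1 = image_hashes(dataset_1)
    hashes_2 = set(image_hashes(dataset_2))
    return int(np.sum([h in hashes_2 for h in hashes_1]))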
Step19: More overlapping values could be found, this is due to the hash collisions. Several images can have the same hash but are actually different differents. This is not noticed here, and even if it happens, this is acceptable. All duplicates will be removed for sure.
We can make the processing a bit faster by using the built-in numpy where function.
Step20: From my perspective near duplicates should also be removed in the sanitized datasets. My assumption is that "near" duplicates are very very close (sometimes just there is a one pyxel border of difference), and penalyze the training the same way the true duplicates do.
That's being said, finding near duplicates with a hash function is not obvious. There are techniques for that, like "locally sensitive hashing", "perceptual hashing" or "difference hashing". There even are Python library available. Unfortunatly I did not have time to try them. The sanitized dataset generated below are based on true duplicates found with a cryptography hash function.
For sanitizing the dataset, I change the function above by returning the clean dataset directly.
Step21: The same value is found, so we can now sanetize the test and the train datasets.
Step22: Since I did not have time to generate clean sanitized datasets, I did not use the datasets generated above in the training of the my NN in the next assignments.
Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint
Step23: To train the model on all the data, we have to use another solver. SAG is the faster one. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19.000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
import random
import hashlib
%matplotlib inline
def disp_samples(data_folders, sample_size):
for folder in data_folders:
print(folder)
image_files = os.listdir(folder)
image_sample = random.sample(image_files, sample_size)
for image in image_sample:
image_file = os.path.join(folder, image)
i = Image(filename=image_file)
display(i)
disp_samples(train_folders, 1)
disp_samples(test_folders, 1)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
First of all, let's import some libraries that I will use later on and activate online display of matplotlib outputs:
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
def disp_8_img(imgs, titles):
Display subplot with 8 images or less
for i, img in enumerate(imgs):
plt.subplot(2, 4, i+1)
plt.title(titles[i])
plt.axis('off')
plt.imshow(img)
def disp_sample_pickles(data_folders):
folder = random.sample(data_folders, 1)
pickle_filename = ''.join(folder) + '.pickle'
try:
with open(pickle_filename, 'rb') as f:
dataset = pickle.load(f)
except Exception as e:
print('Unable to read data from', pickle_filename, ':', e)
return
# display
plt.suptitle(''.join(folder)[-1])
for i, img in enumerate(random.sample(list(dataset), 8)):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.imshow(img)
disp_sample_pickles(train_folders)
disp_sample_pickles(test_folders)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
def disp_number_images(data_folders):
for folder in data_folders:
pickle_filename = ''.join(folder) + '.pickle'
try:
with open(pickle_filename, 'rb') as f:
dataset = pickle.load(f)
except Exception as e:
print('Unable to read data from', pickle_filename, ':', e)
return
print('Number of images in ', folder, ' : ', len(dataset))
disp_number_images(train_folders)
disp_number_images(test_folders)
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
Data is balanced across classes if the classes have about the same number of items. Let's check the number of images by class.
End of explanation
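For a quick quantitative check of that balance (rather than only eyeballing the printed counts), a small sketch like the one below can be used; it is only an illustration and assumes the per-class pickle files written by maybe_pickle above.
def class_size_spread(data_folders):
    # Read the number of images stored in each per-class pickle file.
    counts = []
    for folder in data_folders:
        with open(folder + '.pickle', 'rb') as f:
            counts.append(len(pickle.load(f)))
    spread = 100.0 * (max(counts) - min(counts)) / max(counts)
    print('min class size:', min(counts))
    print('max class size:', max(counts))
    print('relative spread: %.1f%%' % spread)
class_size_spread(train_folders)
class_size_spread(test_folders)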
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: There are only minor gaps, so the classes are well balanced.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
pretty_labels = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J'}
def disp_sample_dataset(dataset, labels):
items = random.sample(range(len(labels)), 8)
for i, item in enumerate(items):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.title(pretty_labels[labels[item]])
plt.imshow(dataset[item])
disp_sample_dataset(train_dataset, train_labels)
disp_sample_dataset(valid_dataset, valid_labels)
disp_sample_dataset(test_dataset, test_labels)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
To be sure that the data are still fine after the merge and the randomization, I will select a few items and display the images alongside their labels. Note: 0 = A, 1 = B, 2 = C, 3 = D, 4 = E, 5 = F, 6 = G, 7 = H, 8 = I, 9 = J.
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Finally, let's save the data for later reuse:
End of explanation
def display_overlap(overlap, source_dataset, target_dataset):
    item = random.choice(list(overlap.keys()))
imgs = np.concatenate(([source_dataset[item]], target_dataset[overlap[item][0:7]]))
plt.suptitle(item)
for i, img in enumerate(imgs):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.imshow(img)
def extract_overlap(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
if np.array_equal(img_1, img_2):
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j)
return overlap
%time overlap_test_train = extract_overlap(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
In this part, I will explore the datasets and understand the overlap cases better. There are overlaps, but there are also duplicates within the same dataset! Processing time is also critical. I will first use nested loops and matrix comparison, which is slow, and then use a hash function to accelerate the processing of the whole dataset.
End of explanation
MAX_MANHATTAN_NORM = 10
def extract_overlap_near(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
diff = img_1 - img_2
m_norm = np.sum(np.abs(diff))
if m_norm < MAX_MANHATTAN_NORM:
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j)
return overlap
%time overlap_test_train_near = extract_overlap_near(test_dataset[:200], train_dataset)
print('Number of near overlaps:', len(overlap_test_train_near.keys()))
display_overlap(overlap_test_train_near, test_dataset[:200], train_dataset)
Explanation: The display_overlap function above displays one of the duplicates: the first element is from the first dataset, and the next ones are from the dataset used for the comparison.
Now that exact duplicates have been found, let's look for near duplicates. How do we define near-identical images? That's a tricky question. My first thought was to use numpy's allclose matrix comparison. This is too restrictive, since two images can differ in a single pixel and still be very similar, even if the variation in that pixel is large. A better solution involves some kind of average.
To keep it simple and still relevant, I will use the Manhattan norm (sum of absolute values) of the difference matrix. Since the images in the dataset all have the same size, I will not normalize the norm value. Note that this is a pixel-by-pixel comparison, and therefore it will not scale to the whole dataset, but it will help in understanding image similarities.
End of explanation
def extract_overlap_stop(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
if np.array_equal(img_1, img_2):
overlap[i] = [j]
break
return overlap
%time overlap_test_train = extract_overlap_stop(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
Explanation: The techniques above work well, but the performance is poor and the methods do not scale to the full dataset. Let's try to improve the performance. Let's take some reference times on a small dataset.
Here are some ideas:
+ stop at the first occurrence
+ use numpy's where function on the difference dataset
+ hash comparison
End of explanation
MAX_MANHATTAN_NORM = 10
def extract_overlap_where(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
diff = dataset_2 - img_1
norm = np.sum(np.abs(diff), axis=1)
duplicates = np.where(norm < MAX_MANHATTAN_NORM)
if len(duplicates[0]):
overlap[i] = duplicates[0]
return overlap
test_flat = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
train_flat = train_dataset.reshape(train_dataset.shape[0], 28 * 28)
%time overlap_test_train = extract_overlap_where(test_flat[:200], train_flat)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
Explanation: It is faster, and only one duplicate from the second dataset is displayed. This is still not scalable.
End of explanation
def extract_overlap_hash(dataset_1, dataset_2):
dataset_hash_1 = [hashlib.sha256(img).hexdigest() for img in dataset_1]
dataset_hash_2 = [hashlib.sha256(img).hexdigest() for img in dataset_2]
overlap = {}
for i, hash1 in enumerate(dataset_hash_1):
for j, hash2 in enumerate(dataset_hash_2):
if hash1 == hash2:
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j) ## use np.where
return overlap
%time overlap_test_train = extract_overlap_hash(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
Explanation: The built-in numpy function provides some improvement as well, but this algorithm still does not scale to the full dataset.
To make it work at scale, the best option is to use a hash function. To find exact duplicates, the hash functions used in cryptography will work just fine.
End of explanation
def extract_overlap_hash_where(dataset_1, dataset_2):
dataset_hash_1 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_1])
dataset_hash_2 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_2])
overlap = {}
for i, hash1 in enumerate(dataset_hash_1):
duplicates = np.where(dataset_hash_2 == hash1)
if len(duplicates[0]):
overlap[i] = duplicates[0]
return overlap
%time overlap_test_train = extract_overlap_hash_where(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
Explanation: More overlapping values could be found; this is due to hash collisions. Several images can have the same hash while actually being different. That is not observed here, and even if it happens it is acceptable: all true duplicates will still be removed.
We can make the processing a bit faster by using the built-in numpy where function.
End of explanation
def sanetize(dataset_1, dataset_2, labels_1):
dataset_hash_1 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_1])
dataset_hash_2 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_2])
overlap = [] # list of indexes
for i, hash1 in enumerate(dataset_hash_1):
duplicates = np.where(dataset_hash_2 == hash1)
if len(duplicates[0]):
overlap.append(i)
return np.delete(dataset_1, overlap, 0), np.delete(labels_1, overlap, None)
%time test_dataset_sanit, test_labels_sanit = sanetize(test_dataset[:200], train_dataset, test_labels[:200])
print('Overlapping images removed: ', len(test_dataset[:200]) - len(test_dataset_sanit))
Explanation: From my perspective, near duplicates should also be removed in the sanitized datasets. My assumption is that "near" duplicates are very close (sometimes there is just a one-pixel border of difference) and penalize the training the same way true duplicates do.
That being said, finding near duplicates with a hash function is not obvious. There are techniques for that, like "locality-sensitive hashing", "perceptual hashing" or "difference hashing", and there are even Python libraries available. Unfortunately I did not have time to try them. The sanitized datasets generated below are based on exact duplicates found with a cryptographic hash function.
To sanitize the dataset, I modify the function above to return the clean dataset directly.
End of explanation
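As an illustration of the difference-hashing idea mentioned above (a sketch only, not validated on the full dataset), the snippet below assumes the 28x28 float images produced by load_letter and the ndimage module already used for reading the images:
def dhash(img, hash_size=8):
    # Downsample to (hash_size, hash_size + 1) and keep only the sign of the
    # horizontal gradient; small pixel-level differences mostly disappear.
    zoom = (float(hash_size) / img.shape[0], float(hash_size + 1) / img.shape[1])
    small = ndimage.zoom(img, zoom)
    bits = small[:, 1:] > small[:, :-1]
    return bits.tobytes()
def extract_overlap_dhash(dataset_1, dataset_2):
    dataset_hash_2 = np.array([dhash(img) for img in dataset_2])
    overlap = {}
    for i, img in enumerate(dataset_1):
        duplicates = np.where(dataset_hash_2 == dhash(img))
        if len(duplicates[0]):
            overlap[i] = duplicates[0]
    return overlap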
%time test_dataset_sanit, test_labels_sanit = sanetize(test_dataset, train_dataset, test_labels)
print('Overlapping images removed: ', len(test_dataset) - len(test_dataset_sanit))
%time valid_dataset_sanit, valid_labels_sanit = sanetize(valid_dataset, train_dataset, valid_labels)
print('Overlapping images removed: ', len(valid_dataset) - len(valid_dataset_sanit))
pickle_file_sanit = 'notMNIST_sanit.pickle'
try:
f = open(pickle_file_sanit, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset_sanit,
'valid_labels': valid_labels_sanit,
'test_dataset': test_dataset_sanit,
'test_labels': test_labels_sanit,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file_sanit)
print('Compressed pickle size:', statinfo.st_size)
Explanation: The same value is found, so we can now sanitize the test and train datasets.
End of explanation
regr = LogisticRegression()
X_test = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
y_test = test_labels
sample_size = 50
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
pred_labels = regr.predict(X_test)
disp_sample_dataset(test_dataset, pred_labels)
sample_size = 100
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
sample_size = 1000
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
X_valid = valid_dataset[:sample_size].reshape(sample_size, 784)
y_valid = valid_labels[:sample_size]
regr.score(X_valid, y_valid)
pred_labels = regr.predict(X_valid)
disp_sample_dataset(valid_dataset, pred_labels)
sample_size = 5000
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
Explanation: Since I did not have time to generate clean sanitized datasets, I did not use the datasets generated above in the training of my NN in the next assignments.
Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
I have already used scikit-learn in a previous MOOC. It is a great tool, very easy to use!
End of explanation
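As an optional refinement (not part of the original cells), the repeated training cells above could be collapsed into a single loop over the sample sizes; this sketch reuses the arrays defined earlier:
for size in (50, 100, 1000, 5000):
    clf = LogisticRegression()
    clf.fit(train_dataset[:size].reshape(size, 784), train_labels[:size])
    print('%5d samples -> test accuracy %.3f' % (size, clf.score(X_test, y_test)))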
regr2 = LogisticRegression(solver='sag')
sample_size = len(train_dataset)
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr2.fit(X_train, y_train)
regr2.score(X_test, y_test)
pred_labels = regr2.predict(X_test)
disp_sample_dataset(test_dataset, pred_labels)
Explanation: To train the model on all the data, we have to use another solver; SAG is the fastest one.
End of explanation |
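A natural follow-up, sketched here rather than run in the original notebook, is to score the same model on the sanitized test set built earlier (assuming test_dataset_sanit and test_labels_sanit are still in memory):
X_test_sanit = test_dataset_sanit.reshape(test_dataset_sanit.shape[0], 28 * 28)
print('Accuracy on original test set: ', regr2.score(X_test, y_test))
print('Accuracy on sanitized test set:', regr2.score(X_test_sanit, test_labels_sanit))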
11,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing the Keras Sequential API
Learning Objectives
1. Build a DNN model using the Keras Sequential API
1. Learn how to use feature columns in a Keras model
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with the Keras model
Introduction
The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction.
Start by importing the necessary libraries for this lab.
Step1: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
Step2: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
Step3: Build a simple keras DNN model
We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want to a sneak peak browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
Exercise. Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the element of the INPUT_COLS list, while the values should be numeric feature columns.
Step4: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Exercise. Create a deep neural network using Keras's Sequential API. In the cell below, use the tf.keras.layers library to create all the layers for your deep neural network.
Step5: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments
Step6: Train the model
To train your model, Keras provides two functions that can be used
Step7: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
Exercise. In the cell below, you will train your model. First, define the steps_per_epoch then train your model using .fit(), saving the model training output to a variable called history.
Step8: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
Step9: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
Step10: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
Step11: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Exercise. Use tf.saved_model.save to export the trained model to a Tensorflow SavedModel format. Reference the documentation for tf.saved_model.save as you fill in the code for the cell below.
Next, print the signature of your saved model using the SavedModel Command Line Interface command saved_model_cli. You can read more about the command line interface and the show and run commands it supports in the documentation here.
Step12: Deploy our model to Vertex AI
Finally, we will deploy our trained model to Vertex AI and see how we can make online predicitons.
Step13: Exercise. Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the Model.upload method. Have a look at the documentation.
Step14: Exercise. Complete the code in the cell below to call prediction on your deployed model for the example you just created in the instance variable above.
Step15: Cleanup
When deploying a model to an endpoint for online prediction, the minimum min-replica-count is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
Step16: ...then delete the endpoint. | Python Code:
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
Explanation: Introducing the Keras Sequential API
Learning Objectives
1. Build a DNN model using the Keras Sequential API
1. Learn how to use feature columns in a Keras model
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with the Keras model
Introduction
The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction.
Start by importing the necessary libraries for this lab.
End of explanation
!ls -l ../data/*.csv
!head ../data/taxi*.csv
Explanation: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
End of explanation
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
Explanation: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
End of explanation
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
Explanation: Build a simple keras DNN model
We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
Exercise. Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the element of the INPUT_COLS list, while the values should be numeric feature columns.
End of explanation
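One possible way to fill in the TODO above, following the dictionary-comprehension hint in the exercise (not necessarily the only valid solution):
feature_columns = {
    colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS
}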
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
Explanation: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Exercise. Create a deep neural network using Keras's Sequential API. In the cell below, use the tf.keras.layers library to create all the layers for your deep neural network.
End of explanation
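A possible solution sketch for the model exercise; the hidden-layer sizes chosen here (32 and 8 units) are an illustrative choice rather than a prescribed one:
model = Sequential([
    DenseFeatures(feature_columns=feature_columns.values()),
    Dense(units=32, activation="relu", name="h1"),
    Dense(units=8, activation="relu", name="h2"),
    Dense(units=1, activation="linear", name="output"),
])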
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
Explanation: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:
An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class.
A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function.
A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.
We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error.
Exercise. Compile the model you created above. Create a custom metric function called rmse which computes the root mean squared error between y_true and y_pred. Pass this function to the model as an evaluation metric.
End of explanation
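One way the metric and the compile call could be completed (the optimizer and loss choices here are illustrative):
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])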
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
Explanation: Train the model
To train your model, Keras provides two functions that can be used:
1. .fit() for training a model for a fixed number of epochs (iterations on a dataset).
2. .train_on_batch() runs a single gradient update on a single batch of data.
The .fit() function works for various formats of data such as NumPy arrays, lists of Tensors, tf.data datasets and Python generators. The .train_on_batch() method is for more fine-grained control over training and accepts only a single batch of data.
Our create_dataset function above generates batches of training examples, so we can use .fit.
We start by setting up some parameters for our training job and create the data generators for the training and validation data.
We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
End of explanation
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
Explanation: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
Exercise. In the cell below, you will train your model. First, define the steps_per_epoch then train your model using .fit(), saving the model training output to a variable called history.
End of explanation
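A sketch of how the training call could be filled in, following the virtual-epochs scheme described above:
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(
    x=trainds,
    steps_per_epoch=steps_per_epoch,
    epochs=NUM_EVALS,
    validation_data=evalds,
    callbacks=[TensorBoard(LOGDIR)],
)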
model.summary()
Explanation: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
End of explanation
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
Explanation: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
End of explanation
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
Explanation: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
End of explanation
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(
# TODO: Your code here
)
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
Explanation: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Exercise. Use tf.saved_model.save to export the trained model to a Tensorflow SavedModel format. Reference the documentation for tf.saved_model.save as you fill in the code for the cell below.
Next, print the signature of your saved model using the SavedModel Command Line Interface command saved_model_cli. You can read more about the command line interface and the show and run commands it supports in the documentation here.
End of explanation
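The export and inspection steps could be completed along the following lines; serving_default is TensorFlow's default signature name for Keras SavedModels:
tf.saved_model.save(model, EXPORT_PATH)
!saved_model_cli show \
    --tag_set serve \
    --signature_def serving_default \
    --dir {EXPORT_PATH}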
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
Explanation: Deploy our model to Vertex AI
Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
End of explanation
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
Explanation: Exercise. Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the Model.upload method. Have a look at the documentation.
End of explanation
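For the upload exercise, the two missing arguments can be taken from variables already defined above; the artifact URI mirrors the gsutil cp destination:
uploaded_model = aiplatform.Model.upload(
    display_name=MODEL_DISPLAYNAME,
    artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
    serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)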
endpoint.predict(
# TODO: Your code here
)
Explanation: Exercise. Complete the code in the cell below to call prediction on your deployed model for the example you just created in the instance variable above.
End of explanation
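One way to call the deployed endpoint for the single example above is to pass it as a list of instances:
endpoint.predict(instances=[instance])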
endpoint.undeploy_all()
Explanation: Cleanup
When deploying a model to an endpoint for online prediction, the minimum min-replica-count is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
End of explanation
endpoint.delete()
Explanation: ...then delete the endpoint.
End of explanation |
11,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with relational data using Pandas
Testing the waters with sample relational data
Based on well defined theory and availability of highly mature, scalable and accessible relational database systems like Postgres, MariaDB and other commercial alternatives, relational data is pervasive in modern software development. Though, off late, the dominance of SQL systems is being challenged by flexibility of some No-SQL datastores, relational data and datastore continue to be an important source of raw datasets for many data analysis projects.
In this part we start-off with a simple relational dataset which will be augmented with more complexity as we proceed through the section. This dataset is then analysed using Pandas - a very nifty python package for working with various kinds of data, especially, tabular and relational data.
Why not use one of the many popular datasets
Being able to mentally replicate and cross check the result of an algorithm is pretty important in gaining confidence in data analysis. This is not always possible with, say, the Petals dataset or Reuters dataset for that matter. We therefore construct a small dataset of a nature which could very easily be found in many modern codebases and each time we arrive at a result, we can manually and independently compute the result and compare with that of our approach in code.
Step1: Basic Aggregation Operations
Step2: QUICK NOTE ABOUT sort_values
By default sort_values allocates new memory each time it is called. While working with larger production data we can be limited by the available memory on our machines vis-a-vis the dataset size (and really we do not wish to hit the SWAP partition even on SSDs). In such a situation, we can set the keyword argument inplace=True, this will modify the current DataFrame it self instead of allocating new memory.
Beware though mutation, while memory efficient, can be a risky affair leading to complex code paths and hard to reason about code.
Step3: Working with visualisations and charts
We use the python package - matplotlib - to generate visualisations and charts for our analysis. The command %matplotlib inline is a handy option which embeds the charts directly into our ipython/jupyter notebook.
While we can directly configure and call matplotlib functions to generate charts. Pandas, via the DataFrame object, exposes some very convenient methods to quickly generate plots.
Step4: matplotlib does provide many more options to generate complex and we will explore more of them as we proceed.
We assemble data and massage it with the sole purpose seeking insights, getting our questions answered - exactly where pandas shines.
Asking questions of our data
Pandas supports boolean indexing using the square bracket notation - []. Boolean indexing enables us to pass a predicate which can be used for among other things for filtering. Pandas also provides negation operator ~ to filter based on opposite of our predicate.
Step5: Since pandas DataFrame is a column based abstraction (as against row) we need to reset_index after an aggregation operation in order retrieve flat DataFrame which is convenient to query.
Step6: In the above we used sort_index instead of sort_values because the groupby operation creates a MultiIndex
on columns user_id, name and age and since age is a part of an index sort_values cannot operate on it.
The head(n) function on a DataFrame returns first n records from the frame and the equivalent function tail(n) returns last n records from the frame.
Step7: Eating your own dog food
The above data has been copy-pasted and hand edited. A problem with this approach is the possibility of data containing more than one like for the same product by the same user. While we can manually check the data the approach will be tedious and untractable as the size of the data increases. Instead we employ pandas itself to indentify duplicate likes by the same person and fix the data accordingly.
Step8: Lets figure out where are the duplicates
Step9: So there are in all 6 duplicate records. User#2 and Pepsi is recorded twice so that is 1 extra, 2 extra for User#3 and Cola and 1 extra for rest of the three pairs, which equals, 1 + 2 + 1 + 1 + 1 = 6.
Step10: We replay our previous aggregation to verify no more duplicates indeed exist.
Step11: Lets continue with asking more questions of our data and gloss over some more convenience methods exposed by Pandas for aggregation.
Step12: In the above code snippet we created a computed column percent in the likes_count DataFrame. Column operations in pandas are vectorized and execute significantly faster than row operations; always a good idea to express computations as column operations as against row operations.
Step13: In our fictitious database, Cola and Pepsi seem to be popular among the users who like Coconut.
Step14: Most of our audience seem to fall in the 25 - 40 years age group. But this visualisation has one flaw - if records are stacked on top of each other, only one of them will be visible. Lets try an alternative plot.
Step15: Anything surprising? Coconut - gray color - was not represented in the histogram. But from this visualisation, we can notice that coconut is popular among the 25 - 35 years age group only.
On the other hand, if we want to plot a specific "likable" object, we can simply filter our dataframe before groupby operation. | Python Code:
import pandas as pd
# Some basic data
users = [
{ 'name': 'John', 'age': 29, 'id': 1 },
{ 'name': 'Doe', 'age': 19, 'id': 2 },
{ 'name': 'Alex', 'age': 32, 'id': 3 },
{ 'name': 'Rahul', 'age': 27, 'id': 4 },
{ 'name': 'Ellen', 'age': 23, 'id': 5},
{ 'name': 'Shristy', 'age': 30, 'id': 6}
]
users
# Using the above data as Foreign Key (FK)
likes = [
{ 'user_id': 1, 'likes': 'Mango' },
{ 'user_id': 1, 'likes': 'Pepsi' },
{ 'user_id': 2, 'likes': 'Burger' },
{ 'user_id': 2, 'likes': 'Mango' },
{ 'user_id': 3, 'likes': 'Cola' },
{ 'user_id': 4, 'likes': 'Orange' },
{ 'user_id': 3, 'likes': 'Cola' },
{ 'user_id': 2, 'likes': 'Pepsi' },
{ 'user_id': 3, 'likes': 'Carrot' },
{ 'user_id': 4, 'likes': 'Mango' },
{ 'user_id': 6, 'likes': 'Pepsi' },
]
likes
# Create Pandas DataFrame object and set
# appropriate index
df_users = pd.DataFrame(users)
df_users.set_index('id')
df_users
df_likes = pd.DataFrame(likes)
df_likes.set_index('user_id')
df_likes
# Using the FK relation to create a join
users_likes_join = df_users.merge(df_likes, left_on='id', right_on='user_id')
users_likes_join.set_index('user_id')
users_likes_join
# Changing left and right hand side of the relationship
likes_users_join = df_likes.merge(df_users, left_on='user_id', right_on='id')
likes_users_join.set_index('user_id')
likes_users_join
Explanation: Working with relational data using Pandas
Testing the waters with sample relational data
Based on well defined theory and availability of highly mature, scalable and accessible relational database systems like Postgres, MariaDB and other commercial alternatives, relational data is pervasive in modern software development. Though, off late, the dominance of SQL systems is being challenged by flexibility of some No-SQL datastores, relational data and datastore continue to be an important source of raw datasets for many data analysis projects.
In this part we start-off with a simple relational dataset which will be augmented with more complexity as we proceed through the section. This dataset is then analysed using Pandas - a very nifty python package for working with various kinds of data, especially, tabular and relational data.
Why not use one of the many popular datasets
Being able to mentally replicate and cross check the result of an algorithm is pretty important in gaining confidence in data analysis. This is not always possible with, say, the Petals dataset or Reuters dataset for that matter. We therefore construct a small dataset of a nature which could very easily be found in many modern codebases and each time we arrive at a result, we can manually and independently compute the result and compare with that of our approach in code.
End of explanation
# Food wise count of likes
food_wise = users_likes_join.groupby('likes')['likes'].count()
food_wise
# Lets sort our data. Default order is ascending
asc_sort = food_wise.sort_values()
asc_sort
# An example for descending
dsc_sort = food_wise.sort_values(ascending=False)
dsc_sort
Explanation: Basic Aggregation Operations
End of explanation
# Using in_place sort for memory efficiency
# Notice there is no left hand side value
food_wise.sort_values(ascending=False, inplace=True)
# food_wise itself has changed
food_wise
Explanation: QUICK NOTE ABOUT sort_values
By default sort_values allocates new memory each time it is called. While working with larger production data we can be limited by the available memory on our machines vis-a-vis the dataset size (and really we do not wish to hit the SWAP partition even on SSDs). In such a situation, we can set the keyword argument inplace=True, this will modify the current DataFrame it self instead of allocating new memory.
Beware though mutation, while memory efficient, can be a risky affair leading to complex code paths and hard to reason about code.
End of explanation
%matplotlib inline
import matplotlib
# ggplot is theme of matplotlib which adds
# some visual asthetics to our charts. It is
# inspired from the eponymous charting package
# of the R programming language
matplotlib.style.use('ggplot')
# Every DataFrame object exposes a plot object
# which can be used to generate different plots
# A pie chart, figsize allows us to define size of the
# plot as a tuple of (width, height) in inches
food_wise.plot.pie(figsize=(7, 7))
# A bar chart
food_wise.plot.bar(figsize=(7, 7))
# Horizontal bar chart
food_wise.plot.barh(figsize=(7, 7))
# Lets plot the most active users - those who hit like
# very often using the above techniques
# Get the users by number of likes they have
user_agg = users_likes_join.groupby('name')['likes'].count()
# Here we go: Our most active users in a different color
user_agg.plot.barh(figsize=(6, 6), color='#10d3f6')
Explanation: Working with visualisations and charts
We use the python package - matplotlib - to generate visualisations and charts for our analysis. The command %matplotlib inline is a handy option which embeds the charts directly into our ipython/jupyter notebook.
While we can directly configure and call matplotlib functions to generate charts. Pandas, via the DataFrame object, exposes some very convenient methods to quickly generate plots.
End of explanation
# Users who never interact with our data
df_users[~df_users.id.isin(df_likes['user_id'])]
Explanation: matplotlib does provide many more options to generate complex and we will explore more of them as we proceed.
We assemble data and massage it with the sole purpose seeking insights, getting our questions answered - exactly where pandas shines.
Asking questions of our data
Pandas supports boolean indexing using the square bracket notation - []. Boolean indexing enables us to pass a predicate which can be used for among other things for filtering. Pandas also provides negation operator ~ to filter based on opposite of our predicate.
End of explanation
# Oldest user who has exactly 2 likes
agg_values = (
users_likes_join
.groupby(['user_id', 'name', 'age'])
.agg({ 'likes': 'count' })
.sort_index(level=['age'], sort_remaining=False, ascending=False)
)
agg_values[agg_values['likes'] == 2].head(1)
Explanation: Since pandas DataFrame is a column based abstraction (as against row) we need to reset_index after an aggregation operation in order retrieve flat DataFrame which is convenient to query.
End of explanation
# Oldest user who has at least 2 likes
agg_values[agg_values['likes'] >= 2].head(1)
# Lets augment our data a little more
users = users + [
{ 'id': 7, 'name': 'Yeti', 'age': 40 },
{ 'id': 8, 'name': 'Commander', 'age': 31 },
{ 'id': 9, 'name': 'Jonnah', 'age': 26 },
{ 'id': 10, 'name': 'Hex', 'age': 28 },
{ 'id': 11, 'name': 'Sam', 'age': 33 },
{ 'id': 12, 'name': 'Madan', 'age': 53 },
{ 'id': 13, 'name': 'Harry', 'age': 38 },
{ 'id': 14, 'name': 'Tom', 'age': 29 },
{ 'id': 15, 'name': 'Daniel', 'age': 23 },
{ 'id': 16, 'name': 'Virat', 'age': 24 },
{ 'id': 17, 'name': 'Nathan', 'age': 16 },
{ 'id': 18, 'name': 'Stepheny', 'age': 26 },
{ 'id': 19, 'name': 'Lola', 'age': 31 },
{ 'id': 20, 'name': 'Amy', 'age': 25 },
]
users, len(users)
likes = likes + [
{ 'user_id': 17, 'likes': 'Mango' },
{ 'user_id': 14, 'likes': 'Orange'},
{ 'user_id': 18, 'likes': 'Burger'},
{ 'user_id': 19, 'likes': 'Blueberry'},
{ 'user_id': 7, 'likes': 'Cola'},
{ 'user_id': 11, 'likes': 'Burger'},
{ 'user_id': 13, 'likes': 'Mango'},
{ 'user_id': 1, 'likes': 'Coconut'},
{ 'user_id': 6, 'likes': 'Pepsi'},
{ 'user_id': 8, 'likes': 'Cola'},
{ 'user_id': 17, 'likes': 'Mango'},
{ 'user_id': 19, 'likes': 'Coconut'},
{ 'user_id': 15, 'likes': 'Blueberry'},
{ 'user_id': 20, 'likes': 'Soda'},
{ 'user_id': 3, 'likes': 'Cola'},
{ 'user_id': 4, 'likes': 'Pepsi'},
{ 'user_id': 14, 'likes': 'Coconut'},
{ 'user_id': 11, 'likes': 'Mango'},
{ 'user_id': 12, 'likes': 'Soda'},
{ 'user_id': 16, 'likes': 'Orange'},
{ 'user_id': 2, 'likes': 'Pepsi'},
{ 'user_id': 19, 'likes': 'Cola'},
{ 'user_id': 15, 'likes': 'Carrot'},
{ 'user_id': 18, 'likes': 'Carrot'},
{ 'user_id': 14, 'likes': 'Soda'},
{ 'user_id': 13, 'likes': 'Cola'},
{ 'user_id': 9, 'likes': 'Pepsi'},
{ 'user_id': 10, 'likes': 'Blueberry'},
{ 'user_id': 7, 'likes': 'Soda'},
{ 'user_id': 12, 'likes': 'Burger'},
{ 'user_id': 6, 'likes': 'Cola'},
{ 'user_id': 4, 'likes': 'Burger'},
{ 'user_id': 14, 'likes': 'Orange'},
{ 'user_id': 18, 'likes': 'Blueberry'},
{ 'user_id': 20, 'likes': 'Cola'},
{ 'user_id': 9, 'likes': 'Soda'},
{ 'user_id': 14, 'likes': 'Pepsi'},
{ 'user_id': 6, 'likes': 'Mango'},
{ 'user_id': 3, 'likes': 'Coconut'},
]
likes, len(likes)
Explanation: In the above we used sort_index instead of sort_values because the groupby operation creates a MultiIndex
on columns user_id, name and age and since age is a part of an index sort_values cannot operate on it.
The head(n) function on a DataFrame returns first n records from the frame and the equivalent function tail(n) returns last n records from the frame.
End of explanation
# DataFrames from native python dictionaries
df_users = pd.DataFrame(users)
df_likes = pd.DataFrame(likes)
Explanation: Eating your own dog food
The above data has been copy-pasted and hand edited. A problem with this approach is the possibility of data containing more than one like for the same product by the same user. While we can manually check the data the approach will be tedious and untractable as the size of the data increases. Instead we employ pandas itself to indentify duplicate likes by the same person and fix the data accordingly.
End of explanation
_duplicate_likes = (
df_likes
.groupby(['user_id', 'likes'])
.agg({ 'likes': 'count' })
)
duplicate_likes = _duplicate_likes[_duplicate_likes['likes'] > 1]
duplicate_likes
Explanation: Lets figure out where are the duplicates
End of explanation
# Now remove the duplicates
df_unq_likes = df_likes.drop_duplicates()
# The difference should be 6 since 6 records should be eliminated
len(df_unq_likes), len(df_likes)
Explanation: So there are in all 6 duplicate records. User#2 and Pepsi is recorded twice so that is 1 extra, 2 extra for User#3 and Cola and 1 extra for rest of the three pairs, which equals, 1 + 2 + 1 + 1 + 1 = 6.
End of explanation
# Join the datasets
users_likes_join = df_users.merge(df_unq_likes, left_on='id', right_on='user_id')
users_likes_join.set_index('id')
# We aggregate the likes column and rename it to `Records`
unq_user_likes_group = (
users_likes_join
.groupby(['id', 'name', 'likes'])
.agg({'likes': 'count'})
.rename(columns={ 'likes': 'num_likes' })
)
# Should return empty if duplicates are removed
unq_user_likes_group[unq_user_likes_group['num_likes'] > 1]
Explanation: We replay our previous aggregation to verify no more duplicates indeed exist.
End of explanation
# What percent of audience likes each fruit?
likes_count = (
users_likes_join
.groupby('likes')
.agg({ 'user_id': 'count' })
)
likes_count['percent'] = likes_count['user_id'] * 100 / len(df_users)
likes_count.sort_values('percent', ascending=False)
Explanation: Lets continue with asking more questions of our data and gloss over some more convenience methods exposed by Pandas for aggregation.
End of explanation
# What do people who like Coconut also like?
coconut_likers = users_likes_join[users_likes_join['likes'] == 'Coconut'].user_id
likes_among_coconut_likers = users_likes_join[(users_likes_join['user_id'].isin(coconut_likers)) & (users_likes_join['likes'] != 'Coconut')]
likes_among_coconut_likers.groupby('likes').agg({ 'user_id': pd.Series.nunique }).sort_values('user_id', ascending=False)
Explanation: In the above code snippet we created a computed column percent in the likes_count DataFrame. Column operations in pandas are vectorized and execute significantly faster than row operations; always a good idea to express computations as column operations as against row operations.
End of explanation
# What is the age group distribution of likes?
users_likes_join.groupby('likes').age.plot(kind='hist', legend=True, figsize=(10, 6))
Explanation: In our fictitious database, Cola and Pepsi seem to be popular among the users who like Coconut.
End of explanation
users_likes_join.groupby('likes').age.plot(kind='kde', legend=True, figsize=(10, 6))
Explanation: Most of our audience seem to fall in the 25 - 40 years age group. But this visualisation has one flaw - if records are stacked on top of each other, only one of them will be visible. Lets try an alternative plot.
End of explanation
# Age distribution only of people who like Soda
users_likes_join[users_likes_join['likes'] == 'Soda'].groupby('likes').age.plot(kind='hist', legend=True, figsize=(10, 6))
Explanation: Anything surprising? Coconut - gray color - was not represented in the histogram. But from this visualisation, we can notice that coconut is popular among the 25 - 35 years age group only.
On the other hand, if we want to plot a specific "likable" object, we can simply filter our dataframe before groupby operation.
End of explanation |
11,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> 2. Creating a sampled dataset </h1>
In this notebook, you will implement
Step1: <h2> Create ML dataset by sampling using BigQuery </h2>
<p>
Sample the BigQuery table publicdata.samples.natality to create a smaller dataset of approximately 10,000 training and 3,000 evaluation records. Restrict your samples to data after the year 2000.
</p>
Step2: Preprocess data using Pandas
Carry out the following preprocessing operations
Step3: <h2> Write out </h2>
<p>
In the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files.
Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
Modify this code appropriately (i.e. change the name of the Pandas dataframe to reflect your variable names) | Python Code:
# TODO: change these to reflect your environment
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: <h1> 2. Creating a sampled dataset </h1>
In this notebook, you will implement:
<ol>
<li> Sampling a BigQuery dataset to create datasets for ML
<li> Preprocessing with Pandas
</ol>
End of explanation
# TODO
Explanation: <h2> Create ML dataset by sampling using BigQuery </h2>
<p>
Sample the BigQuery table publicdata.samples.natality to create a smaller dataset of approximately 10,000 training and 3,000 evaluation records. Restrict your samples to data after the year 2000.
</p>
End of explanation
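The TODO cell above is meant to be completed by you; purely as a hedged sketch (the hashing scheme, the kept columns, and the sampling fractions below are my assumptions, not part of the lab instructions), one way to produce traindf and evaldf of roughly the requested sizes is:
from google.cloud import bigquery

bq = bigquery.Client(project=PROJECT)

# Hash year/month so that the train/eval split is repeatable across runs.
base_query = """
SELECT
  weight_pounds, is_male, mother_age, plurality, gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE year > 2000
"""

# Roughly 3/4 of hash buckets for training, 1/4 for evaluation; RAND() thins the result
# down to approximately 10,000 / 3,000 rows (tune the fractions for your needs).
train_query = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3 AND RAND() < 0.0005'.format(base_query)
eval_query = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3 AND RAND() < 0.0005'.format(base_query)

traindf = bq.query(train_query).to_dataframe()
evaldf = bq.query(eval_query).to_dataframe()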
## TODO
Explanation: Preprocess data using Pandas
Carry out the following preprocessing operations:
Add extra rows to simulate the lack of ultrasound.
Change the plurality column to be one of the following strings:
<pre>
['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']
</pre>
Remove rows where any of the important numeric fields are missing.
End of explanation
traindf.to_csv('train.csv', index=False, header=False)
evaldf.to_csv('eval.csv', index=False, header=False)
%%bash
wc -l *.csv
head *.csv
tail *.csv
Explanation: <h2> Write out </h2>
<p>
In the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files.
Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
Modify this code appropriately (i.e. change the name of the Pandas dataframe to reflect your variable names)
End of explanation |
11,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Empirically observed SaaS churn
A subscription-as-a-service company has a typical customer churn pattern. During periods of no billing, the churn is relatively low compared to periods of billing (typically every 30 or 365 days). This results in a distinct survival function for customers. See below
Step1: To borrow a term from finance, we clearly have different regimes that a customer goes through
Step2: Above we fit the regression model. We supplied a list of breakpoints that we inferred from the survival function and from our domain knowledge.
Let's first look at the average hazard in each interval, over time. We should see that during periods of high customer churn, we also have a high hazard. We should also see that the hazard is constant in each interval.
Step3: It's obvious that the highest average churn is in the first few days, and then high again in the latter billing periods.
So far, we have only been looking at the aggregated population - that is, we haven't looked at what variables are associated with churning. Let's first start with investigating what is causing (or associated with) the drop at the second billing event (~day 30).
Step4: From this forest plot, we can see that the var1 has a protective effect, that is, customers with a high var1 are much less likely to churn in the second billing periods. var2 has little effect, but possibly negative. From a business point of view, maximizing var1 for customers would be a good move (assuming it's a causal relationship).
We can look at all the coefficients in one large forest plot, see below. We see a distinct alternating pattern in the _intercepts variable. This makes sense, as our hazard rate shifts between high and low churn regimes. The influence of var1 seems to spike in the 3rd regime (lambda_2_), and then decays back to zero. The influence of var2 looks like it starts to become more negative over time, that is, is associated with more churn over time.
Step5: If we suspect there is some parameter sharing between intervals, or we want to regularize (and hence share information) between intervals, we can include a penalizer which penalizes the variance of the estimates per covariate.
Note
Step6: As we suspected, a very high penalizer will constrain the same parameter between intervals to be equal (and hence 0 variance). This is the same as the model
Step7: We can see that | Python Code:
kmf = KaplanMeierFitter().fit(df['T'], df['E'])
kmf.plot(figsize=(11,6));
Explanation: Empirically observed SaaS churn
A subscription-as-a-service company has a typical customer churn pattern. During periods of no billing, the churn is relatively low compared to periods of billing (typically every 30 or 365 days). This results in a distinct survival function for customers. See below:
End of explanation
pew = PiecewiseExponentialRegressionFitter(
breakpoints=breakpoints)\
.fit(df, "T", "E")
Explanation: To borrow a term from finance, we clearly have different regimes that a customer goes through: periods of low churn and periods of high churn, both of which are predictable. This predictability and "sharp" changes in hazards suggests that a piecewise hazard model may work well: hazard is constant during intervals, but varies over different intervals.
Furthermore, we can imagine that individual customer variables influence their likelihood to churn as well. Since we have baseline information, we can fit a regression model. For simplicity, let's assume that a customer's hazard is constant in each period, but varies across customers (heterogeneity in customers).
Our hazard model looks like¹:
$$
h(t\;|\;x) = \begin{cases}
\lambda_0(x)^{-1}, & t \le \tau_0 \\
\lambda_1(x)^{-1} & \tau_0 < t \le \tau_1 \\
\lambda_2(x)^{-1} & \tau_1 < t \le \tau_2 \\
...
\end{cases}
$$
and $\lambda_i(x) = \exp(\mathbf{\beta}_i x^T), \;\; \mathbf{\beta}_i = (\beta_{i,1}, \beta_{i,2}, ...)$. That is, each period has a hazard rate, $\lambda_i$, that is the exponential of a linear model. The parameters of each linear model are unique to that period - different periods have different parameters (later we will generalize this).
Why do I want a model like this? Well, it offers lots of flexibility (at the cost of efficiency though), but importantly I can see:
Influence of variables over time.
Looking at important variables at specific "drops" (or regime changes). For example, what variables cause the large drop at the start? What variables prevent death at the second billing?
Predictive power: since we model the hazard more accurately (we hope) than a simpler parametric form, we have better estimates of a subject's survival curve.
¹ I specify the reciprocal because that follows lifelines convention for exponential and Weibull hazards. In practice, it means the interpretation of the sign is possibly different.
End of explanation
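To make the piecewise-constant form above concrete, here is a minimal numpy sketch (the breakpoints and per-interval values below are made up, and for simplicity it works directly with hazard values rather than the reciprocal convention mentioned in the footnote):
import numpy as np

taus = np.array([5., 30., 60.])                # breakpoints tau_0, tau_1, tau_2
hazards = np.array([0.10, 0.01, 0.08, 0.02])   # one hazard value per interval (one more than breakpoints)

def piecewise_hazard(t):
    # searchsorted returns the index of the interval containing each t
    return hazards[np.searchsorted(taus, t)]

print(piecewise_hazard(np.array([1., 10., 45., 100.])))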
fig, ax = plt.subplots(1,1)
kmf.plot(figsize=(11,6), ax=ax);
ax.legend(loc="upper left")
ax.set_ylabel("Survival")
ax2 = ax.twinx()
pew.predict_cumulative_hazard(
pew._central_values,
times=np.arange(0, 110),
).diff().plot(ax=ax2, c='k', alpha=0.80)
ax2.legend(loc="upper right")
ax2.set_ylabel("Hazard")
Explanation: Above we fit the regression model. We supplied a list of breakpoints that we inferred from the survival function and from our domain knowledge.
Let's first look at the average hazard in each interval, over time. We should see that during periods of high customer churn, we also have a high hazard. We should also see that the hazard is constant in each interval.
End of explanation
fig, ax = plt.subplots(figsize=(10, 4))
pew.plot(parameter=['lambda_2_'], ax=ax);
Explanation: It's obvious that the highest average churn is in the first few days, and then high again in the latter billing periods.
So far, we have only been looking at the aggregated population - that is, we haven't looked at what variables are associated with churning. Let's first start with investigating what is causing (or associated with) the drop at the second billing event (~day 30).
End of explanation
fig, ax = plt.subplots(figsize=(10, 4))
pew.plot(columns=['intercept'], ax=ax);
fig, ax = plt.subplots(figsize=(10, 5))
pew.plot(columns=['var2'], ax=ax);
fig, ax = plt.subplots(figsize=(10, 5))
pew.plot(columns=['var1'], ax=ax);
Explanation: From this forest plot, we can see that var1 has a protective effect, that is, customers with a high var1 are much less likely to churn in the second billing period. var2 has little effect, but possibly negative. From a business point of view, maximizing var1 for customers would be a good move (assuming it's a causal relationship).
We can look at all the coefficients in one large forest plot, see below. We see a distinct alternating pattern in the _intercepts variable. This makes sense, as our hazard rate shifts between high and low churn regimes. The influence of var1 seems to spike in the 3rd regime (lambda_2_), and then decays back to zero. The influence of var2 looks like it starts to become more negative over time, that is, is associated with more churn over time.
End of explanation
# Extreme case, note that all the covariates' parameters are almost identical.
pew = PiecewiseExponentialRegressionFitter(
breakpoints=breakpoints,
penalizer=20.0)\
.fit(df, "T", "E")
fig, ax = plt.subplots(figsize=(10, 5))
pew.plot(columns=['var1'], ax=ax);
fig, ax = plt.subplots(figsize=(10, 5))
pew.plot(columns=['var2'], ax=ax);
Explanation: If we suspect there is some parameter sharing between intervals, or we want to regularize (and hence share information) between intervals, we can include a penalizer which penalizes the variance of the estimates per covariate.
Note: we do not penalize the intercept, currently. This is a modeller's decision, but I think it's better not to.
Specifically, our penalized log-likelihood, $PLL$, looks like:
$$
PLL = LL - \alpha \sum_j \hat{\sigma}_j^2
$$
where $\hat{\sigma}_j$ is the standard deviation of $\beta_{i, j}$ over all periods $i$. This acts as a regularizer and much like a multilevel component in Bayesian statistics. In the above inference, we implicitly set $\alpha$ equal to 0. Below we examine some more cases of varying $\alpha$. First we set $\alpha$ to an extremely large value, which should push the variances of the estimates to zero.
End of explanation
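As a rough numeric illustration of that penalty term (a standalone numpy sketch with made-up coefficients, not lifelines internals):
import numpy as np

# Rows = intervals (lambda_0_, lambda_1_, ...), columns = covariates (var1, var2).
betas = np.array([[ 0.8, -0.1],
                  [ 0.5, -0.2],
                  [ 0.1, -0.4]])

alpha = 0.25
variance_penalty = alpha * np.var(betas, axis=0).sum()   # alpha * sum_j sigma_j^2
print(variance_penalty)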
# less extreme case
pew = PiecewiseExponentialRegressionFitter(
breakpoints=breakpoints,
penalizer=.25)\
.fit(df, "T", "E")
fig, ax = plt.subplots(figsize=(10, 5))
pew.plot(columns=['var1'], ax=ax, fmt="s", label="small penalty on variance")
# compare this to the no penalizer case
pew_no_penalty = PiecewiseExponentialRegressionFitter(
breakpoints=breakpoints,
penalizer=0)\
.fit(df, "T", "E")
pew_no_penalty.plot(columns=['var1'], ax=ax, c="r", fmt="o", label="no penalty on variance")
plt.legend();
Explanation: As we suspected, a very high penalizer will constrain the same parameter between intervals to be equal (and hence 0 variance). This is the same as the model:
$$
h(t\;|\;x) = \begin{cases}
\lambda_0(x)^{-1}, & t \le \tau_0 \\
\lambda_1(x)^{-1} & \tau_0 < t \le \tau_1 \\
\lambda_2(x)^{-1} & \tau_1 < t \le \tau_2 \\
...
\end{cases}
$$
and $\lambda_i(x) = \exp(\mathbf{\beta} x^T), \;\; \mathbf{\beta} = (\beta_{1}, \beta_{2}, ...)$. Note the reuse of the $\beta$s between intervals.
This model is the same model proposed in "Piecewise Exponential Models for Survival Data with Covariates".
One nice property of this model is that because of the extreme information sharing between intervals, we have maximum information for inferences, and hence small standard errors per parameter. However, if the parameter's effect is truly time-varying (and not constant), then the standard error will be inflated and a less constrained model is better.
Below we examine an intermediate penalty, and compare it to the zero penalty.
End of explanation
# Some prediction methods
pew.predict_survival_function(df.loc[0:3]).plot(figsize=(10, 5));
pew.predict_cumulative_hazard(df.loc[0:3]).plot(figsize=(10, 5));
pew.predict_median(df.loc[0:5])
Explanation: We can see that:
1) on average, the standard errors are smaller in the penalty case
2) parameters are pushed closer together (they will converge to their average if we keep increasing the penalty)
3) the intercepts are barely affected.
I think, in practice, adding a small penalty is the right thing to do. It's extremely unlikely that intervals are independent, and extremely unlikely that parameters are constant over intervals.
Like all lifelines models, we have prediction methods too. This is where we can see customer heterogeneity vividly.
End of explanation |
11,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Test Cases
Case 1.1 - Biomedical Device for Parkinson's Disease Progression Monitoring
The dataset used in this test case is the Oxford Parkinson's Disease Telemonitoring Dataset.
Reference
Step1: Basic Statistics results suggest
Step2: Dotplots with grouping by Subject, Age and Sex | Python Code:
import numpy as np
import pandas as pd
import os
from pandas import DataFrame
from pandas import read_csv
from numpy import mean
from numpy import std
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
import seaborn as sns
results = read_csv('parkinsons_updrs.csv')
results.head()
data = [results['motor_UPDRS'].describe(),results['total_UPDRS'].describe()]
df = pd.DataFrame(data)
df.round(2)
Explanation: Machine Learning Test Cases
Case 1.1 - Biomedical Device for Parkinson's Disease Progression Monitoring
The dataset used in this test case is the Oxford Parkinson's Disease Telemonitoring Dataset.
Reference: A Tsanas, MA Little, PE McSharry, LO Ramig (2009);
'Accurate telemonitoring of Parkinson’s disease progression by non-invasive speech tests',
IEEE Transactions on Biomedical Engineering (to appear).
Import Data and Visualize Table Structure
The dataset comprises 16 biomedical voice measurements taken from 42 patients; approximately 200 measurements were taken per patient.
Vocal impairment after the onset of the disease is prevalent in 70 - 90% of patients, based on some studies.
The aim is to predict the clinician's Parkinson's disease symptom score on the UPDRS (Unified Parkinson's Disease Rating Scale).
End of explanation
other_Stats = {'Median': [results['motor_UPDRS'].median(), results['total_UPDRS'].median()],
               'Skew': [results['motor_UPDRS'].skew(), results['total_UPDRS'].skew()],
               'Kurtosis': [results['motor_UPDRS'].kurt(), results['total_UPDRS'].kurt()]}
df1 = pd.DataFrame(other_Stats, index=['motor_UPDRS', 'total_UPDRS'])
df1.round(2)
plt.subplot(1, 2, 1)
plt.hist(results["motor_UPDRS"],color = "skyblue")
plt.xlabel('Motor_UPDRS Index')
plt.ylabel('Frequency')
plt.subplot(1, 2, 2)
plt.hist(results["total_UPDRS"],color = "green")
plt.xlabel('Total_UPDRS Index')
plt.show()
data1 = [results['motor_UPDRS'],results['total_UPDRS']]
fig, ax = plt.subplots(figsize=(5, 5))
plt.boxplot(data1)
ax.set_xlabel('motor_UPDRS, total_UPDRS')
ax.set_ylabel('Response')
plt.show()
ax=sns.factorplot(x="age", y="motor_UPDRS", col="sex", data = results, kind="box", size=3, aspect=2)
ax=sns.factorplot(x="age", y="total_UPDRS", col="sex", data = results, kind="box", size=3, aspect=2)
Explanation: The basic statistics suggest larger variability in the total_UPDRS index.
The objective of the Python code below is simply to gather additional statistical parameters in a single table.
End of explanation
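One hedged way to quantify the "larger variability" observation above is a coefficient of variation (standard deviation divided by the mean) for each score:
cv = results[['motor_UPDRS', 'total_UPDRS']].std() / results[['motor_UPDRS', 'total_UPDRS']].mean()
print(cv.round(3))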
#groupby_subject= results.groupby('subject#')
sns.factorplot(x= 'subject#', y= 'motor_UPDRS', hue='age', col='sex', data=results, kind="swarm", size=3, aspect=3);
sns.factorplot(x= 'subject#', y= 'total_UPDRS', hue='age', col='sex', data=results, kind="swarm", size=3, aspect=3);
tab_1 = pd.crosstab(index=results["subject#"], columns="count")
print(tab_1)
tab_2 = pd.crosstab(index=results["age"], columns="count")
plt.hist(results['age'], color="violet")
plt.ylabel('Qty of observations');
plt.xlabel('Age')
plt.show()
print(tab_2)
Explanation: Dotplots with grouping by Subject, Age and Sex
End of explanation |
11,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The csv module can be used to work with data exported from spreadsheets and databases into text files formatted with fields and records, commonly referred to as comma-separated value (CSV) format because commas are often used to separate the fields in a record.
writing
Step1: reading
Step2: Dialect
Step3: Creating Dialect | Python Code:
import csv
import sys
unicode_chars = 'å∫ç'
with open('data.csv', 'wt') as f:
writer = csv.writer(f)
writer.writerow(('Title 1', 'Title 2', 'Title 3', 'Title 4'))
for i in range(3):
row = (
i + 1,
chr(ord('a') + i),
'08/{:02d}/07'.format(i + 1),
unicode_chars[i],
)
writer.writerow(row)
print(open("data.csv", 'rt').read())
Explanation: The csv module can be used to work with data exported from spreadsheets and databases into text files formatted with fields and records, commonly referred to as comma-separated value (CSV) format because commas are often used to separate the fields in a record.
writing
End of explanation
import csv
import sys
with open('data.csv', 'rt') as f:
reader = csv.reader(f)
for row in reader:
print(row)
Explanation: reading
End of explanation
import csv
print(csv.list_dialects())
Explanation: Dialect
End of explanation
import csv
csv.register_dialect('pipes', delimiter='|')
with open('testdata.pipes', 'r') as f:
reader = csv.reader(f, dialect='pipes')
for row in reader:
print(row)
Explanation: Creating Dialect
End of explanation |
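Note that the reading snippet above assumes a testdata.pipes file already exists. A small companion snippet (my addition, not part of the original example) that writes one with the same registered dialect:
import csv

csv.register_dialect('pipes', delimiter='|')

with open('testdata.pipes', 'w') as f:
    writer = csv.writer(f, dialect='pipes')
    writer.writerow(('Title 1', 'Title 2', 'Title 3'))
    writer.writerow((1, 'a', '08/01/07'))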
11,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.18 Mapping Names to Sequence Elements
Problem
You have code that accesses list or tuple elements by position, but this sometimes makes the code hard to read, and you would like to access the elements by name instead.
Solution
The collections.namedtuple() function solves this problem while still using an ordinary tuple object. It is actually a factory method that returns a subclass of the standard Python tuple type. You pass it a type name and the fields you need, and it returns a class that you can instantiate, supplying values for the fields you defined. For example:
Step1: Although an instance of a namedtuple looks like a normal class instance, it is interchangeable with a tuple and supports all of the usual tuple operations, such as indexing and unpacking. For example:
Step2: A major use of named tuples is freeing your code from positional indexing. So, if you get back a large list of tuples from a database call and manipulate the elements by index, your code may break when a new column is added to the table. Not so if you use named tuples.
To illustrate, here is some code that uses ordinary tuples:
Step3: Indexing like this often makes the code less readable and highly dependent on the record structure. Here is a version that uses a named tuple:
Step4: Discussion
Another use of named tuples is as a replacement for dictionaries, which require more memory to store. If you are building a very large data structure involving dictionaries, using named tuples is more efficient. Be aware, however, that unlike a dictionary, a named tuple is immutable. For example:
Step5: If you really need to change an attribute, you can use the _replace() method of a namedtuple instance, which creates an entirely new named tuple with the specified fields replaced by new values. For example:
Step6: A useful feature of the _replace() method is that it is a convenient way to populate named tuples that have optional or missing fields. You first create a prototype tuple containing the default values, and then use _replace() to create new instances with the values updated. For example:
Step7: Here is how it would be used: | Python Code:
from collections import namedtuple
Subscriber = namedtuple("Subscriber", ["addr", "joined"])
sub = Subscriber("[email protected]", "2012-10-19")
sub
sub.addr
sub.joined
Explanation: 1.18 Mapping Names to Sequence Elements
Problem
You have code that accesses list or tuple elements by position, but this sometimes makes the code hard to read, and you would like to access the elements by name instead.
Solution
The collections.namedtuple() function solves this problem while still using an ordinary tuple object. It is actually a factory method that returns a subclass of the standard Python tuple type. You pass it a type name and the fields you need, and it returns a class that you can instantiate, supplying values for the fields you defined. For example:
End of explanation
len(sub)
addr, joined = sub
addr
joined
Explanation: Although an instance of a namedtuple looks like a normal class instance, it is interchangeable with a tuple and supports all of the usual tuple operations, such as indexing and unpacking. For example:
End of explanation
def compute_cost(records):
total = 0.0
for rec in records:
total += rec[1] * rec[2]
return total
Explanation: A major use of named tuples is freeing your code from positional indexing. So, if you get back a large list of tuples from a database call and manipulate the elements by index, your code may break when a new column is added to the table. Not so if you use named tuples.
To illustrate, here is some code that uses ordinary tuples:
End of explanation
from collections import namedtuple
Stock = namedtuple("Stock", ["name", "shares", "price"])
def compute_cost(records):
total = 0.0
for rec in records:
s = Stock(*rec)
total += s.shares * s.price
return total
Explanation: Indexing like this often makes the code less readable and highly dependent on the record structure. Here is a version that uses a named tuple:
End of explanation
s = Stock("ACME", 100, 123.45)
s
s.shares
s.shares = 75
Explanation: Discussion
Another use of named tuples is as a replacement for dictionaries, which require more memory to store. If you are building a very large data structure involving dictionaries, using named tuples is more efficient. Be aware, however, that unlike a dictionary, a named tuple is immutable. For example:
End of explanation
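A very rough way to see the memory point above (sys.getsizeof is a shallow measure, so treat this only as an indicative sketch):
import sys
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p_tuple = Point(1, 2)
p_dict = {'x': 1, 'y': 2}

print(sys.getsizeof(p_tuple), sys.getsizeof(p_dict))   # the namedtuple instance is noticeably smaller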
s = s._replace(shares = 75)
s
Explanation: If you really need to change an attribute, you can use the _replace() method of a namedtuple instance, which creates an entirely new named tuple with the specified fields replaced by new values. For example:
End of explanation
from collections import namedtuple
Stock = namedtuple("Stock", ["name", "shares", "price", "date", "time"])
# Create a prototype instance
stock_prototype = Stock("", 0, 0.0, None, None)
# Function to convert a dictionary to a Stock
def dict_to_stock(s):
return stock_prototype._replace(**s)
Explanation: A useful feature of the _replace() method is that it is a convenient way to populate named tuples that have optional or missing fields. You first create a prototype tuple containing the default values, and then use _replace() to create new instances with the values updated. For example:
End of explanation
a = {"name": "ACME", "shares": 100, "price": 123.45}
dict_to_stock(a)
b = {'name': 'ACME', 'shares': 100, 'price': 123.45, 'date': '12/17/2012'}
dict_to_stock(b)
Explanation: Here is how it would be used:
End of explanation |
11,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compile and deploy the TFX pipeline to Kubeflow Pipelines
This notebook is the second of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to compile the TFX pipeline to a Kubeflow Pipelines (KFP) package. This process creates an Argo YAML file in a .tar.gz package, and is accomplished through the following steps
Step1: Set environment variables
Update the following variables to reflect the values for your GCP environment
Step2: Run the Pipeline locally by using the Beam runner
Step3: Build the container image
The pipeline uses a custom container image, which is a derivative of the tensorflow/tfx
Step4: Compile the TFX pipeline using the TFX CLI
Use the TFX CLI to compile the TFX pipeline to the KFP format, which allows the pipeline to be deployed and executed on AI Platform Pipelines. The output is a .tar.gz package containing an Argo definition of your pipeline.
Step5: Deploy the compiled pipeline to KFP
Use the KFP CLI to deploy the pipeline to a hosted instance of KFP on AI Platform Pipelines | Python Code:
%load_ext autoreload
%autoreload 2
!pip install -q -U kfp
Explanation: Compile and deploy the TFX pipeline to Kubeflow Pipelines
This notebook is the second of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to compile the TFX pipeline to a Kubeflow Pipelines (KFP) package. This process creates an Argo YAML file in a .tar.gz package, and is accomplished through the following steps:
Build a custom container image that includes the solution modules.
Compile the TFX Pipeline using the TFX command-line interface (CLI).
Deploy the compiled pipeline to KFP.
The pipeline workflow is implemented in the pipeline.py module. The runner.py module reads the configuration settings from the config.py module, defines the runtime parameters of the pipeline, and creates a KFP format that is executable on AI Platform pipelines.
Before starting this notebook, you must run the tfx01_interactive notebook to create the TFX pipeline.
Install required libraries
End of explanation
import os
os.environ['PROJECT_ID'] = 'yourProject' # Set your project.
os.environ['BUCKET'] = 'yourBucket' # Set your bucket.
os.environ['GKE_CLUSTER_NAME'] = 'yourCluster' # Set your GKE cluster name.
os.environ['GKE_CLUSTER_ZONE'] = 'yourClusterZone' # Set your GKE cluster zone.
os.environ['IMAGE_NAME'] = 'tfx-ml'
os.environ['TAG'] = 'tfx0.25.0'
os.environ['ML_IMAGE_URI']=f'gcr.io/{os.environ.get("PROJECT_ID")}/{os.environ.get("IMAGE_NAME")}:{os.environ.get("TAG")}'
os.environ['NAMESPACE'] = 'kubeflow-pipelines'
os.environ['ARTIFACT_STORE_URI'] = f'gs://{os.environ.get("BUCKET")}/tfx_artifact_store'
os.environ['GCS_STAGING_PATH'] = f'{os.environ.get("ARTIFACT_STORE_URI")}/staging'
os.environ['RUNTIME_VERSION'] = '2.2'
os.environ['PYTHON_VERSION'] = '3.7'
os.environ['BEAM_RUNNER'] = 'DirectRunner'
os.environ['MODEL_REGISTRY_URI'] = f'{os.environ.get("ARTIFACT_STORE_URI")}/model_registry'
os.environ['PIPELINE_NAME'] = 'tfx_bqml_scann'
from tfx_pipeline import config
for key, value in config.__dict__.items():
if key.isupper(): print(f'{key}: {value}')
Explanation: Set environment variables
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
GKE_CLUSTER_NAME: The name of the Kubernetes Engine cluster used by the AI Platform pipeline. You can find this by looking at the Cluster column of the kubeflow-pipelines pipeline instance on the AI Platform Pipelines page.
GKE_CLUSTER_ZONE: The zone of the Kubernetes Engine cluster used by the AI Platform pipeline. You can find this by looking at the Zone column of the kubeflow-pipelines pipeline instance on the AI Platform Pipelines page.
End of explanation
import kfp
import tfx
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx_pipeline import pipeline as pipeline_module
import tensorflow as tf
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
import logging
logging.getLogger().setLevel(logging.INFO)
print("TFX Version:", tfx.__version__)
pipeline_root = f'{config.ARTIFACT_STORE_URI}/{config.PIPELINE_NAME}_beamrunner'
model_regisrty_uri = f'{config.MODEL_REGISTRY_URI}_beamrunner'
local_mlmd_sqllite = 'mlmd/mlmd.sqllite'
print(f'Pipeline artifacts root: {pipeline_root}')
print(f'Model registry location: {model_regisrty_uri}')
if tf.io.gfile.exists(pipeline_root):
print("Removing previous artifacts...")
tf.io.gfile.rmtree(pipeline_root)
if tf.io.gfile.exists('mlmd'):
print("Removing local mlmd SQLite...")
tf.io.gfile.rmtree('mlmd')
print("Creating mlmd directory...")
tf.io.gfile.mkdir('mlmd')
metadata_connection_config = metadata_store_pb2.ConnectionConfig()
metadata_connection_config.sqlite.filename_uri = local_mlmd_sqllite
metadata_connection_config.sqlite.connection_mode = 3
print("ML metadata store is ready.")
beam_pipeline_args = [
f'--runner=DirectRunner',
f'--project={config.PROJECT_ID}',
f'--temp_location={config.ARTIFACT_STORE_URI}/beam/tmp'
]
pipeline_module.SCHEMA_DIR = 'tfx_pipeline/schema'
pipeline_module.LOOKUP_CREATOR_MODULE = 'tfx_pipeline/lookup_creator.py'
pipeline_module.SCANN_INDEXER_MODULE = 'tfx_pipeline/scann_indexer.py'
runner = BeamDagRunner()
pipeline = pipeline_module.create_pipeline(
pipeline_name=config.PIPELINE_NAME,
pipeline_root=pipeline_root,
project_id=config.PROJECT_ID,
bq_dataset_name=config.BQ_DATASET_NAME,
min_item_frequency=15,
max_group_size=10,
dimensions=50,
num_leaves=500,
eval_min_recall=0.8,
eval_max_latency=0.001,
ai_platform_training_args=None,
beam_pipeline_args=beam_pipeline_args,
model_regisrty_uri=model_regisrty_uri,
metadata_connection_config=metadata_connection_config,
enable_cache=True
)
runner.run(pipeline)
Explanation: Run the Pipeline locally by using the Beam runner
End of explanation
!gcloud builds submit --tag $ML_IMAGE_URI tfx_pipeline
Explanation: Build the container image
The pipeline uses a custom container image, which is a derivative of the tensorflow/tfx:0.25.0 image, as a runtime execution environment for the pipeline's components. The container image is defined in a Dockerfile.
The container image installs the required libraries and copies over the modules from the solution's tfx_pipeline directory, where the custom components are implemented. The container image is also used by AI Platform Training for executing the training jobs.
Build the container image using Cloud Build and then store it in Cloud Container Registry:
End of explanation
!rm ${PIPELINE_NAME}.tar.gz
!tfx pipeline compile \
--engine=kubeflow \
--pipeline_path=tfx_pipeline/runner.py
Explanation: Compile the TFX pipeline using the TFX CLI
Use the TFX CLI to compile the TFX pipeline to the KFP format, which allows the pipeline to be deployed and executed on AI Platform Pipelines. The output is a .tar.gz package containing an Argo definition of your pipeline.
End of explanation
%%bash
gcloud container clusters get-credentials ${GKE_CLUSTER_NAME} --zone ${GKE_CLUSTER_ZONE}
export KFP_ENDPOINT=$(kubectl describe configmap inverse-proxy-config -n ${NAMESPACE} | grep "googleusercontent.com")
kfp --namespace=${NAMESPACE} --endpoint=${KFP_ENDPOINT} \
pipeline upload \
--pipeline-name=${PIPELINE_NAME} \
${PIPELINE_NAME}.tar.gz
Explanation: Deploy the compiled pipeline to KFP
Use the KFP CLI to deploy the pipeline to a hosted instance of KFP on AI Platform Pipelines:
End of explanation |
11,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This quickstart guide explains how to join two tables A and B using edit distance measure. First, you need to import the required packages as follows (if you have installed py_stringsimjoin it will automatically install the dependencies py_stringmatching and pandas)
Step1: Joining two tables using edit distance measure typically consists of three steps
Step2: 2. Profiling the tables
Before performing the join, we may want to profile the tables to
know about the characteristics of the attributes. This can help identify
Step3: If the input tables do not contain any key attribute, then you need
to create a key attribute. In the current example, both the input tables
A and B have key attributes, and hence you can proceed to the next step.
In case the table does not have a key attribute, you can
add a key attribute using the following command
Step4: For the purpose of this guide, we will now join tables A and B on
'name' attribute using edit distance measure. Next, we need to decide on what
threshold to use for the join. For this guide, we will use a threshold of 5.
Specifically, the join will now find tuple pairs from A and B such that
the edit distance over the 'name' attributes is at most 5.
3. Performing the join
The next step is to perform the edit distance join using the following command
Step5: Handling missing values
By default, pairs with missing values are not included
in the output. This is because a string with a missing value
can potentially match with all strings in the other table and
hence the number of output pairs can become huge. If you want
to include pairs with missing value in the output, you need to
set the allow_missing flag to True, as shown below
Step6: Enabling parallel processing
If you have multiple cores which you want to exploit for performing the
join, you need to use the n_jobs option. If n_jobs is -1, all CPUs
are used. If 1 is given, no parallel computing code is used at all,
which is useful for debugging and is the default option. For n_jobs below
-1, (n_cpus + 1 + n_jobs) are used (where n_cpus is the total number of
CPUs in the machine). Thus for n_jobs = -2, all CPUs but one are used. If
(n_cpus + 1 + n_jobs) becomes less than 1, then no parallel computing code
will be used (i.e., equivalent to the default).
The following command exploits all the cores available to perform the join
Step7: You need to set n_jobs to 1 when you are debugging or you do not want
to use any parallel computing code. If you want to execute the join as
fast as possible, you need to set n_jobs to -1 which will exploit all
the CPUs in your machine. In case there are other concurrent processes
running in your machine and you do not want to halt them, then you may
need to set n_jobs to a value below -1.
Performing join on numeric attributes
The join method expects the join attributes to be of string type.
If you need to perform the join over numeric attributes, then you need
to first convert the attributes to string type and then perform the join.
For example, if you need to join 'A.zipcode' in table A with 'B.zipcode' in
table B, you need to first convert the attributes to string type using
the following command
Step8: Note that the above command preserves the NaN values while converting the numeric column to string type. Next, you can perform the join as shown below
Step9: Additional options
You can find all the options available for the edit distance
join function using the help command as shown below | Python Code:
# Import libraries
import py_stringsimjoin as ssj
import py_stringmatching as sm
import pandas as pd
import os, sys
print('python version: ' + sys.version)
print('py_stringsimjoin version: ' + ssj.__version__)
print('py_stringmatching version: ' + sm.__version__)
print('pandas version: ' + pd.__version__)
Explanation: This quickstart guide explains how to join two tables A and B using edit distance measure. First, you need to import the required packages as follows (if you have installed py_stringsimjoin it will automatically install the dependencies py_stringmatching and pandas):
End of explanation
# construct the path of the tables to be loaded. Since we are loading a
# dataset from the package, we need to access the data from the path
# where the package is installed. If you need to load your own data, you can directly
# provide your table path to the read_csv command.
table_A_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'person_table_A.csv'])
table_B_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'person_table_B.csv'])
# Load csv files as dataframes.
A = pd.read_csv(table_A_path)
B = pd.read_csv(table_B_path)
print('Number of records in A: ' + str(len(A)))
print('Number of records in B: ' + str(len(B)))
A
B
Explanation: Joining two tables using edit distance measure typically consists of three steps:
1. Loading the input tables
2. Profiling the tables
3. Performing the join
1. Loading the input tables
We begin by loading the two tables. For the purpose of this guide,
we use the sample dataset that comes with the package.
End of explanation
# profile attributes in table A
ssj.profile_table_for_join(A)
# profile attributes in table B
ssj.profile_table_for_join(B)
Explanation: 2. Profiling the tables
Before performing the join, we may want to profile the tables to
know about the characteristics of the attributes. This can help identify:
a) unique attributes in the table which can be used as key attribute when performing
the join. A key attribute is needed to uniquely identify a tuple.
b) the number of missing values present in each attribute. This can
help you in deciding the attribute on which to perform the join.
For example, an attribute with a lot of missing values may not be a good
join attribute. Further, based on the missing value information you
need to decide on how to handle missing values when performing the join
(See the section below on 'Handling missing values' to know more about
the options available for handling missing values when performing the join).
You can profile the attributes in a table using the following command:
End of explanation
B['new_key_attr'] = range(0, len(B))
B
Explanation: If the input tables do not contain any key attribute, then you need
to create a key attribute. In the current example, both the input tables
A and B have key attributes, and hence you can proceed to the next step.
In case the table does not have a key attribute, you can
add a key attribute using the following command:
End of explanation
# find all pairs from A and B such that the edit distance
# on 'name' is at most 5.
# l_out_attrs and r_out_attrs denote the attributes from the
# left table (A) and right table (B) that need to be included in the output.
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5,
l_out_attrs=['A.name'], r_out_attrs=['B.name'])
len(output_pairs)
# examine the output pairs
output_pairs
Explanation: For the purpose of this guide, we will now join tables A and B on
'name' attribute using edit distance measure. Next, we need to decide on what
threshold to use for the join. For this guide, we will use a threshold of 5.
Specifically, the join will now find tuple pairs from A and B such that
the edit distance over the 'name' attributes is at most 5.
3. Performing the join
The next step is to perform the edit distance join using the following command:
End of explanation
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5, allow_missing=True,
l_out_attrs=['A.name'], r_out_attrs=['B.name'])
output_pairs
Explanation: Handling missing values
By default, pairs with missing values are not included
in the output. This is because a string with a missing value
can potentially match with all strings in the other table and
hence the number of output pairs can become huge. If you want
to include pairs with missing value in the output, you need to
set the allow_missing flag to True, as shown below:
End of explanation
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.name', 'B.name', 5,
l_out_attrs=['A.name'], r_out_attrs=['B.name'], n_jobs=-1)
len(output_pairs)
Explanation: Enabling parallel processing
If you have multiple cores which you want to exploit for performing the
join, you need to use the n_jobs option. If n_jobs is -1, all CPUs
are used. If 1 is given, no parallel computing code is used at all,
which is useful for debugging and is the default option. For n_jobs below
-1, (n_cpus + 1 + n_jobs) are used (where n_cpus is the total number of
CPUs in the machine). Thus for n_jobs = -2, all CPUs but one are used. If
(n_cpus + 1 + n_jobs) becomes less than 1, then no parallel computing code
will be used (i.e., equivalent to the default).
The following command exploits all the cores available to perform the join:
End of explanation
ssj.dataframe_column_to_str(A, 'A.zipcode', inplace=True)
ssj.dataframe_column_to_str(B, 'B.zipcode', inplace=True)
Explanation: You need to set n_jobs to 1 when you are debugging or you do not want
to use any parallel computing code. If you want to execute the join as
fast as possible, you need to set n_jobs to -1 which will exploit all
the CPUs in your machine. In case there are other concurrent processes
running in your machine and you do not want to halt them, then you may
need to set n_jobs to a value below -1.
Performing join on numeric attributes
The join method expects the join attributes to be of string type.
If you need to perform the join over numeric attributes, then you need
to first convert the attributes to string type and then perform the join.
For example, if you need to join 'A.zipcode' in table A with 'B.zipcode' in
table B, you need to first convert the attributes to string type using
the following command:
End of explanation
output_pairs = ssj.edit_distance_join(A, B, 'A.id', 'B.id', 'A.zipcode', 'B.zipcode', 1,
l_out_attrs=['A.zipcode'], r_out_attrs=['B.zipcode'])
output_pairs
Explanation: Note that the above command preserves the NaN values while converting the numeric column to string type. Next, you can perform the join as shown below:
End of explanation
help(ssj.edit_distance_join)
Explanation: Additional options
You can find all the options available for the edit distance
join function using the help command as shown below:
End of explanation |
11,684 | Given the following text description, write Python code to implement the functionality described.
Description:
Count of numbers whose difference with Fibonacci count upto them is atleast K
Python 3 program to find the count of numbers whose difference with Fibonacci count upto them is atleast K ; fibUpto [ i ] denotes the count of fibonacci numbers upto i ; Function to compute all the Fibonacci numbers and update fibUpto array ; Store the first two Fibonacci numbers ; Compute the Fibonacci numbers and store them in isFib array ; Compute fibUpto array ; Function to return the count of valid numbers ; Compute fibUpto array ; Binary search to find the minimum number that follows the condition ; Check if the number is valid , try to reduce it ; Ans is the minimum valid number ; Driver Code
| Python Code:
MAX = 1000005
fibUpto =[0 ] *(MAX + 1 )
def compute(sz ) :
isFib =[False ] *(sz + 1 )
prev = 0
curr = 1
isFib[prev ] = True
isFib[curr ] = True
while(curr <= sz ) :
temp = curr + prev
if(temp <= sz ) :
isFib[temp ] = True
prev = curr
curr = temp
fibUpto[0 ] = 1
for i in range(1 , sz + 1 ) :
fibUpto[i ] = fibUpto[i - 1 ]
if(isFib[i ] ) :
fibUpto[i ] += 1
def countOfNumbers(N , K ) :
compute(N )
low , high , ans = 1 , N , 0
while(low <= high ) :
mid =(low + high ) >> 1
if(mid - fibUpto[mid ] >= K ) :
ans = mid
high = mid - 1
else :
low = mid + 1
if(ans ) :
return(N - ans + 1 )
return 0
if __name__ == "__main__":
N = 10
K = 3
print(countOfNumbers(N , K ) )
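A brute-force cross-check (an add-on sketch, not part of the original solution) that counts the qualifying numbers directly, using the same convention that 0 is counted as a Fibonacci number:
def count_of_numbers_bruteforce(N, K):
    fibs = set()
    a, b = 0, 1
    while a <= N:
        fibs.add(a)
        a, b = b, a + b
    fib_upto = 1            # accounts for the Fibonacci number 0, matching fibUpto[0] = 1 above
    count = 0
    for x in range(1, N + 1):
        if x in fibs:
            fib_upto += 1
        if x - fib_upto >= K:
            count += 1
    return count

print(count_of_numbers_bruteforce(10, 3))   # should agree with countOfNumbers(10, 3)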
|
11,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Differential Equations
An ordinary differential equation or ODE is a mathematical equation containing a function or functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation or PDE which involves functions and their partial derivatives with respect to more than one independent variable.
A Mathematical Model is a description of a system using mathematical concepts and language. ODEs and the PDEs are excellent tools for this purpose. Indeed, in real world applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including Dynamical Systems, Engineering, Physics, Biology…
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas. However, some properties of the solutions of a given differential equation may be determined without finding their exact form.
If an analytical solution is not available, the solution may be numerically approximated. The theory of Dynamical Systems puts emphasis on qualitative analysis of Systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
In general, partial differential equations are much more difficult to solve analytically than are ordinary differential equations. However, sometimes, separation of variables allows us to transform a PDE into a system of ODEs.
In this work only ODEs were taken into account.
Ordinary Differential Equations (ODEs)
Let F be a given function of x, y, and derivatives of y. Then an equation of the form
Step1: Second Order Linear System with 1 degree of freedom
In order to study the potential of the numerical integration of an ODE, the second order linear Dynamical System with different responses to different inputs discussed during DSC lectures was used. This Dynamical System can be mathematically modeled by the following ODE
Step2: Now let's consider the following initial conditions
Step3: Analytic solution
The characteristic equation for the ODE we are studying can be given by
Step4: Numerical solution
By calling the Runge Kutta function an approximation of the ODE can be calculated
Step5: We will plot the difference between the analytic solution and the numeric solution
Step6: We can see that the errors in the numerical approximation are accumulating over time.
Notes
The natural response of the undamped System is a simple harmonic motion with an amplitude of $\sqrt{y(0)^2 + \frac{y'(0)}{w_n}^2}$ . For the numerical example we used $A=\sqrt{2.05}$;
As stated before, the analytic method may not be possible to use when there is no closed-form solution; thus the approximation using the numeric method is generally used, as it works for any ODE, but it accumulates errors, as shown before;
In this case the transfer function has two complex conjugate poles located on the $Im$ axis of the complex plane (no real part), so $y_h(t)=L^{-1} (G(s))$ is a linear combination of two complex exponential functions having only pure imaginary symmetric numbers and, applying Euler's formula, the expected solution was a harmonic response. The analytical solution and the numerical integration confirmed this prediction.
Underdamped System
A System is underdamped when $0 < \xi < 1$. Let's set $\xi = 0.05$
In this case the roots of the characteristic equation are 2 complex conjugate numbers, that is to say, in the Laplace transform method, the transfer function has two complex conjugate poles in the 3rd and 4th quadrants of the complex plane. Then $y_h (t)=L^{-1} (G(s))$ is a linear combination of two complex exponential functions having conjugated exponents. The common real exponential part multiplies an expression with the same form as the previous case. So the expected solution was a harmonic response but with a decreasing amplitude. The analytical solution and the numerical integration will confirm this prediction.
The analytical solution with the constants calculated from the initial conditions is given by
Step7: The System is bounded between $[-env(t), env(t)]$ and so $y(t) \to 0$ when $t \to \infty$.
Critically damped System
A System is called critically damped when $\xi=1$
The characteristic equation has 2 real identical roots. The transfer function in the Laplace method has 1 real pole with multiplicity 2. So $G(s)$ is a sum of two partial fractions, one having in the denominator $s-s_{1,2}$ and the other $(s-s_{1,2})^2$. The first one will give rise to a real exponential function and the other to the same real exponential function multiplied by $t$. So, the natural System response will not be harmonic but $y_h(t) \to 0$ as $t \to \infty$.
The analytical solution is
Step8: The System rapidly converges to 0.
Overdamped System
A System is called overdamped when $\xi > 1$
In this case the transfer function, $G(s)$, has two distinct real poles with multiplicity 1. Again, the System natural response will not be harmonic, since the solution of the differential equation will be a linear combination of two real exponential functions. Observing the form of the poles $s_1$ and $s_2$, both are the sum of two real numbers, one of which is common to both.
The analytical solution will be of the form
Step9: This System converges to 0 but it takes longer to do so.
System response to a permanent harmonic input
We will now focus on the behaviour of the System in the presence of a disturbance. This can be done analytically by studying
Step10: This force function is added to the second member of the last equation, in the first order System of ODEs defined in (2).
We can now study the System response to different values of $w$.
First, we considered $ w = \frac{5\pi}{6} < w_n = 2 \pi$.
During the lectures on Dynamic Systems, the influence of $w$ was studied using bode plots.
Step11: We can see that the disturbance introduced in the System does not change the bounds of the natural function.
Now let's study the interference caused by the function $ w = w_n$
Step12: In this case the period of the natural function and the disturbance functions match ($w = w_n$), and thus the waves are infinitely amplified.
Finally lets consider the case of $ w = w_n - \epsilon = \frac{9}{10}w_n$ | Python Code:
def rungekutta(fn, y0, ti=0, tf=10, h=0.01):
h = np.float(h)
x = np.arange(ti, tf, h)
Y = np.zeros((len(x), len(y0)))
Y[0] = y0
for i in range(0, len(x)-1):
yi = Y[i]
xi = x[i]
k1 = h * fn(xi, yi)
k2 = h * fn(xi + 0.5 * h, yi + 0.5 * k1)
k3 = h * fn(xi + 0.5 * h, yi + 0.5 * k2)
k4 = h * fn(xi + 1.0 * h, yi + 1.0 * k3)
yk = yi + (1./6.) * (k1 + 2*k2 + 2*k3 + k4)
Y[i+1] = yk
return x, Y
Explanation: Differential Equations
An ordinary differential equation or ODE is a mathematical equation containing a function or functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation or PDE which involves functions and their partial derivatives with respect to more than one independent variable.
A Mathematical Model is a description of a system using mathematical concepts and language. ODEs and the PDEs are excellent tools for this purpose. Indeed, in real world applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including Dynamical Systems, Engineering, Physics, Biology…
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas. However, some properties of the solutions of a given differential equation may be determined without finding their exact form.
If an analytical solution is not available, the solution may be numerically approximated. The theory of Dynamical Systems puts emphasis on qualitative analysis of Systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
In general, partial differential equations are much more difficult to solve analytically than are ordinary differential equations. However, sometimes, separation of variables allows us to transform a PDE into a system of ODEs.
In this work only ODEs were taken into account.
Ordinary Differential Equations (ODEs)
Let F be a given function of x, y, and derivatives of y. Then an equation of the form:
$$F\left (x,y,y',\cdots y^{(n-1)} \right )=y^{(n)}$$
where $y$ is a function of $x$, $y'= \frac{dy}{dx}$ is the first derivative with respect to $x$, and $y^{(n)}=\frac{d^{n} y}{dx^{n}}$ is the nth derivative with respect to $x$, is called an explicit ordinary differential equation of order $n$.
More generally, an implicit ordinary differential equation of order $n$ takes the form:
$$F\left(x, y, y', y'',\ \cdots,\ y^{(n)}\right) = 0$$
An ODE of order $n$ is said to be linear if it is of the form:
$$y^{(n)}(x)+a_{n-1}y^{(n-1)}(x)+\cdots+a_2y''(x)+a_1y'(x)+a_0y(x)=Q(x)$$
$$(1)$$
where $a_0$, $a_1$, $...$, $a_{n-1}$ are constants and $Q(x)$ is a function of the independent variable $x$. If $Q(x)=0$, the linear ODE is said to be homogeneous.
In general, an nth-order linear ODE has $n$ linearly independent solutions. Furthermore, any linear combination of linearly independent functions solutions is also a solution, the general solution. The general solution of a non-homogeneous differential equation is obtained by adding the general solution of the associated homogeneous equation with a particular solution of the given equation. The coefficients of the linear combination of the solutions are obtained from the given initial conditions of the problem: $y(0)$, $y’(0)$,$\cdots$, $y^(n-1)(0)$.
Simple theories exist for first-order and second-order ordinary differential equations, and arbitrary ODEs with linear constant coefficients can be solved when they are of certain factorable forms. Methods such as:
Method of undetermined coefficients
Integrating factor
Method of variation of parameters
Separable differential equation
are usually used. Integral transforms such as the Laplace transform can also be used to solve classes of linear ODEs. This last method was widely discussed during our Dynamical Systems and Control lectures in order to study first order and second order systems responses to some specific inputs, such as step or sinusoidal inputs.
By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more difficult, as one can rarely represent them by functions in closed form. Instead, exact and analytic solutions of ODEs are in series or integral form that can be solved using methods such as:
Successive Approximations
Multiple scale Analysis
Power series solution of differential equations
Generalized Fourier series
While there are many general techniques for analytically solving classes of ODEs, the only practical technique for approximating solutions of complicated equations is to use numerical methods. Graphical and numerical methods may approximate solutions of ODEs and yield information that often suffices in the absence of exact analytic solutions. Such methods include:
Euler method — the most basic method for solving an ODE
Explicit and implicit methods — implicit methods need to solve an equation at every step
Backward Euler method — implicit variant of the Euler method
Trapezoidal rule — second-order implicit method
Runge–Kutta methods — one of the two main classes of methods for initial-value problems
The most popular of these are the Runge-Kutta methods. A 4th order method (5th order truncation method) was implemented in this work.
In order to use numerical methods to solve a nth-order differential equation, the first step is to transform the differential equation into a system of $n$ differential equations of first order:
Let be $Z$ a vector having as components the function $y$ and its first $n-1$ derivatives with respect to $x$:
$$ Z=\begin{bmatrix}y \\ y' \\ y'' \\ \cdots \\ y^{(n-1)}\end{bmatrix} = \begin{bmatrix}z_1 \\ z_2 \\ z_3 \\ \cdots \\ z_n\end{bmatrix}$$
The differential equation (1) is, then transformed into:
$$\begin{cases}z_1'=z_2 \\ z_2'=z_3 \\ \vdots \\ z_{n-1}'=z_n \\ z_n'= Q(x)-a_{n-1}z_n- \cdots - a_1z_2 - a_0z_1 \end{cases}$$
In Dynamical Systems, the independent variable is always the time so, from this point on, we are going to change the notation $x$ to $t$.
Runge-Kutta method
The Runge–Kutta methods are a family of implicit and explicit iterative methods, which are used in temporal discretization for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M. W. Kutta.
$$y_{n+1}=y_{n}+h\sum_{i=1}^{s}b_{i}k_{i}$$
$$k_{1}=f(t_{n},y_{n})$$
$$k_{2}=f(t_{n}+c_{2}h,y_{n}+h(a_{21}k_{1}))$$
$$k_{3}=f(t_{n}+c_{3}h,y_{n}+h(a_{31}k_{1}+a_{32}k_{2}))$$
$$\vdots$$
$$k_{s}=f(t_{n}+c_{s}h,y_{n}+h(a_{s1}k_{1}+a_{s2}k_{2}+\cdots +a_{s,s-1}k_{s-1}))$$
To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients $a_{ij}$, $b_i$, and $c_i$. The matrix $a_{ij}$ is called the Runge–Kutta matrix, while the $b_i$ and $c_i$ are known as the weights and the nodes.
From the family of Runge-Kutta methods, the most commonly used are the order 4 and 5 methods, which can be implemented as follows, by inlining the values for $a_{ij}$, $b_i$, and $c_i$ directly in the code:
End of explanation
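For reference, the constants inlined in the rungekutta function above correspond to the classical 4th-order Butcher tableau; written out explicitly (shown only for illustration, these arrays are not used by the code):
import numpy as np

# Classical RK4 Butcher tableau: Runge-Kutta matrix a, weights b, nodes c.
a = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0/6.0, 1.0/3.0, 1.0/3.0, 1.0/6.0])
c = np.array([0.0, 0.5, 0.5, 1.0])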
def natural(wn, qsi):
return lambda x,y: np.array([
y[1],
-2 * wn * qsi * y[1] - np.power(wn, 2) * y[0],
])
Explanation: Second Order Linear System with 1 degree of freedom
In order to study the potential of the numerical integration of an ODE, the second order linear Dynamical System with different responses to different inputs discussed during DSC lectures was used. This Dynamical System can be mathematically modeled by the following ODE:
$$y''(t)+2 \xi w_n y'(t)+w_n^2 y(t)=F(t)$$
where $w_n$ represents the undamped natural frequency, $\xi$ represents the damping ratio and $F(t)$ a forced exterior action.
The solution $y(t)$ of this kind of differential equations is obtained as a sum of the general solution of the homogeneous differential equation, $y_h(t)$, with a particular solution of the complete differential equation, $y_p(t)$:
$$y(t)=y_h(t)+y_p(t)$$
This is a problem of initial conditions, $y(0)$ and $y'(0)$, which allow the determination of the integration constants.
Natural response of the system
The natural response of the system is obtained in the absence of forced exterior actions, in other words, it is the solution of the homogeneous differential equation:
$$y''(t)+2 \xi w_n y'(t)+w_n^2 y(t)=0$$
However, in order to get a response of the Dynamical System, it is necessary to change the initial conditions to the introduction of an initial perturbation to the System which can be modeled by a Dirac impulse. This is a convenient form to apply Laplace transform method to solve analytically the homogenous differential equation:
$$y''(t)+2 \xi w_n y'(t)+w_n^2 y(t)=\delta(t)$$
In these conditions the transfer function is:
$$G(s)=\frac{1}{s^2+2 \xi w_n s+w_n^2}$$
As referred before, the numerical integration of ODEs requires that each equation of degree $n$ is transformed into a system of ODEs of degree 1. This ODE is of degree 2. Thus, we will transform it into a system of 2 ODEs of degree 1.
This ODE is of degree 2. Thus, we will transform it into a system of 2 ODEs of degree 1, as follows:
$$ z = \begin{bmatrix}z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} y(t) \\ y'(t) \end{bmatrix} $$
$$ z' = \begin{bmatrix}z_1' \\ z_2' \end{bmatrix} = \begin{bmatrix} z_2 \\ -2\xi w_n z_2 - w_n^{2} z_1 \end{bmatrix}$$
$$(2)$$
The function natural is a builder function that takes as arguments $w_n$ and $\xi$ and returns a lambda function representing the system $z'$:
End of explanation
y0 = np.array([np.sqrt(2), np.sqrt(2)])
wn = 2 * np.pi
Explanation: Now let's consider the following initial conditions:
$$y_0 = \begin{bmatrix} y(0) \\ y'(0) \end{bmatrix} = \begin{bmatrix} \sqrt{2} \\ \sqrt{2} \end{bmatrix}$$
We will also consider $w_n=2\pi$.
We will focus on the solution $y(t)$, the first component $z_1$ of the system, and plot only it.
End of explanation
def undampedAnalyticSolution(x, Y, wn):
return Y[0] * np.cos(wn * x) + (Y[1] / wn) * np.sin(wn * x)
x = np.arange(0,5,0.01)
plt.plot(x, undampedAnalyticSolution(x, y0, wn))
plt.title('Undamped System - Analytic solution')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
Explanation: Analytic solution
The characteristic equation for the ODE we are studying can be given by:
$$ s^2 + 2\xi w_n s + w_n^2 = 0$$
Applying the quadratic formula, the roots of this equation are given by:
$$ s_{1,2} = -\xi w_n \pm w_n \sqrt{\xi^2 - 1}$$
So, the general solution of this homogeneous equation is:
$$ y(t) = C_1 e^{s_1 t} + C_2 e^{s_2 t} $$
Where $C_1$ and $C_2$ are constants to be determined by the initial conditions.
Because this System has different behaviours depending on the values of $\xi$, we will study the System response with regard to different values of $\xi$, namely for:
$\xi=0$, an undamped System
$\xi \in{]0,1[} $, an underdamped System
$\xi=1$, a critically damped System
$\xi>1$, an overdamped System
Note that the characteristic equation equals to the denominator of the transfer function $G(s)$ and thus, the poles of the transfer function are the roots of the characteristic equation.
Undamped system
A System is called undamped when $\xi=0$.
We will study the behaviour of $z$ when $\xi=0$ analytically and numerically.
Analytical solution
Since $\xi = 0$, the roots are purely imaginary, $s_{1,2} = \pm j w_n$, and the general solution can be written in the form:
$$ y(t) = C_1 e^{j w_n t} + C_2 e^{-j w_n t} $$
which, using Euler's formula, can be rewritten as
$$ y(t) = A_1 \cos{w_n t} + A_2 \sin{w_n t} $$
And its first derivative:
$$ y'(t) = -w_n A_1 \sin{w_n t} + w_n A_2 \cos{w_n t} $$
Now, imposing the initial conditions $y(0)$ and $y'(0)$:
$$ y(t=0) = y_0 = A_1$$
$$ y'(t=0) = y_0' = w_n A_2 $$
$$ A_1 = y_0 $$
$$ A_2 = \frac{y_0'}{w_n} $$
Substituting $A_1$ and $A_2$, the solution becomes:
$$ y(t) = y_0 \cos{w_n t} + \frac{y_0'}{w_n} \sin{w_n t} $$
This is implemented as follows:
End of explanation
qsi=0
x, Y = rungekutta(natural(wn, qsi), y0, tf=5)
plt.plot(x, Y[:,0])
plt.title('Undamped System - Numerical solution')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
Explanation: Numerical solution
By calling the Runge–Kutta function, a numerical approximation of the solution can be computed:
End of explanation
xa, Ya = rungekutta(natural(wn, qsi), y0, tf=5)
xb = np.arange(0,5,0.01)
Yb = undampedAnalyticSolution(xb, y0, wn)
plt.plot(xb, Ya[:,0]-Yb)
plt.title('Undamped System (analytic - numerical solutions)')
plt.xlabel('t')
plt.ylabel('residual')
plt.grid()
Explanation: We will plot the difference between the analytic solution and the numeric solution:
End of explanation
qsi=0.05
A = np.sqrt(np.power((y0[1]+qsi*wn*y0[0]) / (wn * np.sqrt(1-np.power(qsi,2))), 2) + np.power(y0[0],2))
envelope = lambda x : A * np.exp(-qsi*wn*x)
x, Y = rungekutta(natural(wn, qsi), y0, tf=5)
plt.plot(x, Y[:,0])
plt.plot(x, [[-envelope(t), envelope(t)] for t in x], color="gray", alpha=0.5)  # lower and upper envelopes
plt.title('Underdamped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
Explanation: We can see that the errors in the numerical approximation accumulate over time.
Notes
The natural response of the undamped System is a simple harmonic motion with amplitude $\sqrt{y(0)^2 + \left(\frac{y'(0)}{w_n}\right)^2}$. For the numerical example this gives $A=\sqrt{2.05}$;
As stated before, the analytic method cannot be used when there is no closed-form solution; the numerical approximation is then generally used, since it works for any ODE, but it accumulates errors, as shown above;
The transfer function has two complex conjugate poles located on the imaginary axis of the complex plane (no real part). Hence $y_h(t)=L^{-1} (G(s))$ is a linear combination of two complex exponential functions with purely imaginary, symmetric exponents and, applying Euler's formula, the expected solution is a harmonic response. The analytical solution and the numerical integration confirm this prediction.
Underdamped System
A System is underdamped when $0 < \xi < 1$. Let's set $\xi = 0.05$.
In this case the roots of the characteristic equation are 2 complex conjugate numbers; that is to say, in the Laplace transform method, the transfer function has two complex conjugate poles in the 3rd and 4th quadrants of the complex plane. Then $y_h (t)=L^{-1} (G(s))$ is a linear combination of two complex exponential functions with conjugate exponents. The common real exponential factor multiplies an expression of the same form as in the previous case, so the expected solution is a harmonic response with a decreasing amplitude. The analytical solution and the numerical integration will confirm this prediction.
The analytical solution with the constants calculated from the initial conditions is given by:
$$y(t)=Ae^{-ξw_n t} cos(w_d t-ϕ)$$
where:
$w_d=w_n \sqrt{1-ξ^2}$ is the damped natural frequency
$A=\sqrt{\left(\frac{y'(0)+ξw_n y(0)}{w_d}\right)^2+y(0)^2}$ is a constant calculated from the initial conditions.
$ϕ=\tan^{-1}\frac{y'(0)+ξw_n y(0)}{w_d y(0)}$ is the phase angle
As this analytical solution shows, the underdamped System has the interesting property of being enveloped by $env(t)=Ae^{-ξw_n t}$, the upper envelope, and $-env(t)$, the lower envelope.
To compare the analytical solution with the numerical one, in the next plot the numerical solution is in blue and the envelope of the analytical one is in gray, which is implemented as follows:
End of explanation
x, Y = rungekutta(natural(wn, qsi=1), y0, tf=5, h=.01)
plt.plot(x, Y[:,0])
plt.title('Critically damped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
Explanation: The System is bounded between $[-env(t), env(t)]$ and so $y(t) \to 0$ when $t \to \infty$.
Critically damped System
A System is called critically damped when $\xi=1$
The characteristic equation has 2 real identical roots. The transfer function in the Laplace method has 1 real pole with multiplicity 2. So $G(s)$ is a sum of two partial fractions, one having in the denominator $s-s_{1,2}$ and the other $(s-s_{1,2})^2$. The first one will give rise to a real exponential function and the other to the same real exponential function multiplied by $t$. So, the natural System response will not be harmonic but $y_h(t) \to 0$ as $t \to \infty$.
The analytical solution is:
$$y(t)=(y(0)+(y'(0)+w_n y(0))t)e^{-w_n t}$$
which confirms what was said in the previous paragraph.
The numerical integration was implemented as:
End of explanation
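# Editor's sketch (not in the original notebook): the critically damped analytic solution
# quoted above, y(t) = (y(0) + (y'(0) + wn*y(0)) t) * exp(-wn*t), for comparison with the
# numerical result computed below.
def criticallyDampedAnalyticSolution(t, Y, wn):
    return (Y[0] + (Y[1] + wn * Y[0]) * t) * np.exp(-wn * t)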
x, Y = rungekutta(natural(wn, qsi=2), y0, tf=5, h=.01)
plt.plot(x, Y[:,0])
plt.title('Overdamped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
Explanation: The System rapidly converges to 0.
Overdamped System
A System is called overdamped when $\xi > 1$
In this case the transfer function, $G(s)$, has two distinct real poles with multiplicity 1. Again, the System natural response will not be harmonic, since the solution of the differential equation is a linear combination of two real exponential functions. Observing the form of the poles $s_1$ and $s_2$, both are the sum of two real numbers, one of which is common to both.
The analytical solution will be of the form:
$$y(t)=e^{-ξw_n t}(C_1 e^{w_n \sqrt{ξ^2-1} t}+C_2 e^{-w_n \sqrt{ξ^2-1} t} )$$
and the constants are evaluated from the initial conditions:
$$C_1=\frac{y(0) w_n (ξ+\sqrt{ξ^2-1})+y'(0)}{2w_n \sqrt{ξ^2-1}}$$
$$C_2=\frac{-y(0) w_n (ξ-\sqrt{ξ^2-1})-y'(0)}{2w_n \sqrt{ξ^2-1}}$$
For $\xi=2$ the numerical solution was:
End of explanation
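# Editor's sketch (not in the original notebook): the overdamped analytic solution for
# xi > 1, using C1 as derived above and C2 = y(0) - C1, which follows directly from the
# initial conditions.
def overdampedAnalyticSolution(t, Y, wn, qsi):
    wr = wn * np.sqrt(qsi**2 - 1)
    C1 = (Y[0] * wn * (qsi + np.sqrt(qsi**2 - 1)) + Y[1]) / (2 * wr)
    C2 = Y[0] - C1
    return np.exp(-qsi * wn * t) * (C1 * np.exp(wr * t) + C2 * np.exp(-wr * t))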
def forced(wn, qsi, f=30, force=lambda x: 1):
n = natural(wn, qsi)
return lambda x,y: np.array([
n(x,y)[0],
n(x,y)[1] + f * force(x),
])
Explanation: This System converges to 0 but it takes longer to do so.
System response to a permanent harmonic input
We will now focus on the behaviour of the System in the presence of a disturbance. This can be done analytically by studying:
$$y''(t)+2 \xi w_n y'(t)+w_n^2 y(t)=F(t)$$
The solution is obtained by adding to the general solution of the homogeneous equation a particular solution of the complete equation:
$$y(t) = y_h(t) + y_p(t)$$
The type of disturbance applied can be divided into the following:
Harmonic - The applied disturbance is given by a sinusoidal function
Periodic - The applied disturbance is given by a Fourier series
Transient - Any other type of disturbance
While harmonic and periodic inputs can be handled analytically or approximated using series, transient inputs may only be solved using the convolution integral, Laplace transforms or numerical integration.
The sinusoidal input we are going to consider is of the form $F(t)=f cos(wt)$, where $f$ and $w$ are, respectively, the amplitude and the frequency of the permanent exterior harmonic excitation.
We will consider the undamped System ($\xi=0$) and $f=30$, in order to study the System response to different values of the harmonic excitation frequency, $w$.
For numerical integration purposes, we define a forced builder function with parameters $w_n$, $\xi$ and f as a multiplier for the force argument which takes a lambda function as follows:
End of explanation
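# Editor's note (not in the original notebook): for the undamped case with F(t) = f*cos(w*t)
# and w != wn, a particular solution is y_p(t) = f/(wn**2 - w**2) * cos(w*t), so the
# steady-state amplitude grows as w approaches wn (resonance).
def forcedParticularAmplitude(f, wn, w):
    return f / (wn**2 - w**2)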
x, Y = (rungekutta(forced(wn, qsi=0, force=lambda x : np.cos(x*5*np.pi/6.)), y0))
plt.plot(x, Y[:,0], label='forced response')
x, Y = (rungekutta(natural(wn, qsi=0), y0))
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(5pi/6 t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
Explanation: This forcing term is added to the right-hand side of the last equation of the first-order System of ODEs defined in (2).
We can now study the System response to different values of $w$.
First, we consider $ w = \frac{5\pi}{6} < w_n = 2 \pi$.
During the lectures on Dynamic Systems, the influence of $w$ was studied using Bode plots.
End of explanation
x, Y = (rungekutta(forced(wn, qsi=0, force=lambda x :np.cos(x*wn)), y0))
plt.plot(x, Y[:,0], label='forced response')
x, Y = (rungekutta(natural(wn, qsi=0), y0))
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(wn t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
Explanation: We can see that the disturbance introduced in the System does not change the bounds of the natural function.
Now let's study the response when the excitation frequency matches the natural frequency, $w = w_n$:
End of explanation
x, Y = rungekutta(forced(wn, qsi=0, force=lambda x :np.cos(x*0.90*wn)), y0, tf=20)
plt.plot(x, Y[:,0], label='forced response')
x, Y = rungekutta(natural(wn, qsi=0), y0, tf=20)
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(0.9wn t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
Explanation: In this case the frequency of the disturbance matches the natural frequency ($w = w_n$), and thus the oscillation amplitude grows without bound (resonance).
Finally, let's consider the case $ w = w_n - \epsilon = \frac{9}{10}w_n$, an excitation frequency slightly below resonance, where a beating pattern is expected:
End of explanation |
11,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: 3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: r = 0.926345
Step5: Yes, there does seem to be a correlation worthy of investigation.
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step6: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step7: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import statsmodels.formula.api as smf
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv('hanford.csv')
Explanation: 2. Read in the hanford.csv file
End of explanation
df.describe()
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, slope = lm.params
df.plot(kind="scatter",x="Exposure",y="Mortality")
Explanation: r = 0.926345
End of explanation
exposure = int(input('What is the exposure level? '))
mortality = slope * exposure + intercept
print('If the exposure is ' + str(exposure) + ' the mortality rate is probably around ' + str(round(mortality, 2)))
Explanation: Yes, there does seem to be a correlation worthy of investigation.
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="darkgrey")
plt.title('Correlation between exposure and mortality rate')
plt.xlabel('Exposure')
plt.ylabel('Mortality Rate')
0.926345 ** 2
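# Editor's note: the fitted statsmodels results object reports this value directly as
# lm.rsquared, which avoids hard-coding the correlation coefficient.
lm.rsquared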
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
exposure = 10
mortality = slope * exposure + intercept
print('If the exposure is ' + str(exposure) + ' the mortality rate is probably around ' + str(round(mortality, 2)))
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
11,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models
Learning Objectives
Step1: Introduction
In Data Science it is common to start with data and develop a model of that data. Such models can help to explain the data and make predictions about future observations. In fields like Physics, these models are often given in the form of differential equations, whose solutions explain and predict the data. In most other fields, such differential equations are not known. Often, models have to include sources of uncertainty and randomness. Given a set of data, fitting a model to the data is the process of tuning the parameters of the model to best explain the data.
When a model has a linear dependence on its parameters, such as $a x^2 + b x + c$, this process is known as linear regression. When a model has a non-linear dependence on its parameters, such as $ a e^{bx} $, this process in known as non-linear regression. Thus, fitting data to a straight line model of $m x + b $ is linear regression, because of its linear dependence on $m$ and $b$ (rather than $x$).
Fitting a straight line
A classical example of fitting a model is finding the slope and intercept of a straight line that goes through a set of data points ${x_i,y_i}$. For a straight line the model is
Step2: Fitting by hand
It is useful to see visually how changing the model parameters changes the value of $\chi^2$. By using IPython's interact function, we can create a user interface that allows us to pick a slope and intercept interactively and see the resulting line and $\chi^2$ value.
Here is the function we want to minimize. Note how we have combined the two parameters into a single parameters vector $\theta = [m, b]$, which is the first argument of the function
Step3: Go ahead and play with the sliders and try to
Step4: Here are the values of $b$ and $m$ that minimize $\chi^2$
Step5: These values are close to the true values of $b=-1$ and $m=2$. The reason our values are different is that our data set has a limited number of points. In general, we expect that as the number of points in our data set increases, the model parameters will converge to the true values. But having a limited number of data points is not a problem - it is a reality of most data collection processes.
We can plot the raw data and the best fit line
Step6: Minimize $\chi^2$ using scipy.optimize.leastsq
Performing regression by minimizing $\chi^2$ is known as least squares regression, because we are minimizing the sum of squares of the deviations. The linear version of this is known as linear least squares. For this case, SciPy provides a purpose built function, scipy.optimize.leastsq. Instead of taking the $\chi^2$ function to minimize, leastsq takes a function that computes the deviations
Step7: Here we have passed the full_output=True option. When this is passed the covariance matrix $\Sigma_{ij}$ of the model parameters is also returned. The uncertainties (as standard deviations) in the parameters are the square roots of the diagonal elements of the covariance matrix
Step8: We can again plot the raw data and best fit line
Step9: Fitting using scipy.optimize.curve_fit
SciPy also provides a general curve fitting function, curve_fit, that can handle both linear and non-linear models. This function
Step10: Then call curve_fit passing the model function and the raw data. The uncertainties of each data point are provided with the sigma keyword argument. If there are no uncertainties, this can be omitted. By default the uncertainties are treated as relative. To treat them as absolute, pass the absolute_sigma=True argument.
Step11: Again, display the optimal values of $b$ and $m$ along with their uncertainties
Step12: We can again plot the raw data and best fit line
Step13: Non-linear models
So far we have been using a linear model $y_{model}(x) = m x +b$. Remember this model was linear, not because of its dependence on $x$, but on $b$ and $m$. A non-linear model will have a non-linear dependece on the model parameters. Examples are $A e^{B x}$, $A \cos{B x}$, etc. In this section we will generate data for the following non-linear model
Step14: Plot the raw data
Step15: Let's see if we can use non-linear regression to recover the true values of our model parameters. First define the model
Step16: Then use curve_fit to fit the model
Step17: Our optimized parameters are close to the true values of $A=10$ and $B=-0.2$
Step18: Plot the raw data and fitted model | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import optimize as opt
from IPython.html.widgets import interact  # deprecated location; in current Jupyter: from ipywidgets import interact
Explanation: Fitting Models
Learning Objectives: learn to fit models to data using linear and non-linear regression.
This material is licensed under the MIT license and was developed by Brian Granger. It was adapted from material from Jake VanderPlas and Jennifer Klay.
End of explanation
N = 50
m_true = 2
b_true = -1
dy = 2.0 # uncertainty of each point
np.random.seed(0)
xdata = 10 * np.random.random(N) # don't use regularly spaced data
ydata = b_true + m_true * xdata + np.random.normal(0.0, dy, size=N) # our errors are additive
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
Explanation: Introduction
In Data Science it is common to start with data and develop a model of that data. Such models can help to explain the data and make predictions about future observations. In fields like Physics, these models are often given in the form of differential equations, whose solutions explain and predict the data. In most other fields, such differential equations are not known. Often, models have to include sources of uncertainty and randomness. Given a set of data, fitting a model to the data is the process of tuning the parameters of the model to best explain the data.
When a model has a linear dependence on its parameters, such as $a x^2 + b x + c$, this process is known as linear regression. When a model has a non-linear dependence on its parameters, such as $ a e^{bx} $, this process in known as non-linear regression. Thus, fitting data to a straight line model of $m x + b $ is linear regression, because of its linear dependence on $m$ and $b$ (rather than $x$).
Fitting a straight line
A classical example of fitting a model is finding the slope and intercept of a straight line that goes through a set of data points ${x_i,y_i}$. For a straight line the model is:
$$
y_{model}(x) = mx + b
$$
Given this model, we can define a metric, or cost function, that quantifies the error the model makes. One commonly used metric is $\chi^2$, which depends on the deviation of the model from each data point ($y_i - y_{model}(x_i)$) and the measured uncertainty of each data point $ \sigma_i$:
$$
\chi^2 = \sum_{i=1}^N \left(\frac{y_i - y_{model}(x_i)}{\sigma_i}\right)^2
$$
When $\chi^2$ is small, the model's predictions will be close to the data points. Likewise, when $\chi^2$ is large, the model's predictions will be far from the data points. Given this, our task is to minimize $\chi^2$ with respect to the model parameters $\theta = [m, b]$ in order to find the best fit.
To illustrate linear regression, let's create a synthetic data set with a known slope and intercept, but random noise that is additive and normally distributed.
End of explanation
def chi2(theta, x, y, dy):
# theta = [b, m]
return np.sum(((y - theta[0] - theta[1] * x) / dy) ** 2)
def manual_fit(b, m):
modely = m*xdata + b
plt.plot(xdata, modely)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y')
plt.text(1, 15, 'b={0:.2f}'.format(b))
plt.text(1, 12.5, 'm={0:.2f}'.format(m))
plt.text(1, 10.0, '$\chi^2$={0:.2f}'.format(chi2([b,m],xdata,ydata, dy)))
interact(manual_fit, b=(-3.0,3.0,0.01), m=(0.0,4.0,0.01));
Explanation: Fitting by hand
It is useful to see visually how changing the model parameters changes the value of $\chi^2$. By using IPython's interact function, we can create a user interface that allows us to pick a slope and intercept interactively and see the resulting line and $\chi^2$ value.
Here is the function we want to minimize. Note how we have combined the two parameters into a single parameters vector $\theta = [m, b]$, which is the first argument of the function:
End of explanation
theta_guess = [0.0,1.0]
result = opt.minimize(chi2, theta_guess, args=(xdata,ydata,dy))
Explanation: Go ahead and play with the sliders and try to:
Find the lowest value of $\chi^2$
Find the "best" line through the data points.
You should see that these two conditions coincide.
Minimize $\chi^2$ using scipy.optimize.minimize
Now that we have seen how minimizing $\chi^2$ gives the best parameters in a model, let's perform this minimization numerically using scipy.optimize.minimize. We have already defined the function we want to minimize, chi2, so we only have to pass it to minimize along with an initial guess and the additional arguments (the raw data):
End of explanation
theta_best = result.x
print(theta_best)
Explanation: Here are the values of $b$ and $m$ that minimize $\chi^2$:
End of explanation
xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
Explanation: These values are close to the true values of $b=-1$ and $m=2$. The reason our values are different is that our data set has a limited number of points. In general, we expect that as the number of points in our data set increases, the model parameters will converge to the true values. But having a limited number of data points is not a problem - it is a reality of most data collection processes.
We can plot the raw data and the best fit line:
End of explanation
def deviations(theta, x, y, dy):
return (y - theta[0] - theta[1] * x) / dy
result = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True)
Explanation: Minimize $\chi^2$ using scipy.optimize.leastsq
Performing regression by minimizing $\chi^2$ is known as least squares regression, because we are minimizing the sum of squares of the deviations. The linear version of this is known as linear least squares. For this case, SciPy provides a purpose built function, scipy.optimize.leastsq. Instead of taking the $\chi^2$ function to minimize, leastsq takes a function that computes the deviations:
End of explanation
theta_best = result[0]
theta_cov = result[1]
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
Explanation: Here we have passed the full_output=True option. When this is passed the covariance matrix $\Sigma_{ij}$ of the model parameters is also returned. The uncertainties (as standard deviations) in the parameters are the square roots of the diagonal elements of the covariance matrix:
$$ \sigma_i = \sqrt{\Sigma_{ii}} $$
A proof of this is beyond the scope of the current notebook.
End of explanation
yfit = theta_best[0] + theta_best[1] * xfit
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray');
plt.plot(xfit, yfit, '-b');
Explanation: We can again plot the raw data and best fit line:
End of explanation
def model(x, b, m):
return m*x+b
Explanation: Fitting using scipy.optimize.curve_fit
SciPy also provides a general curve fitting function, curve_fit, that can handle both linear and non-linear models. This function:
Allows you to directly specify the model as a function, rather than the cost function (it assumes $\chi^2$).
Returns the covariance matrix for the parameters that provides estimates of the errors in each of the parameters.
Let's apply curve_fit to the above data. First we define a model function. The first argument should be the independent variable of the model.
End of explanation
theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy)
Explanation: Then call curve_fit passing the model function and the raw data. The uncertainties of each data point are provided with the sigma keyword argument. If there are no uncertainties, this can be omitted. By default the uncertainties are treated as relative. To treat them as absolute, pass the absolute_sigma=True argument.
End of explanation
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
Explanation: Again, display the optimal values of $b$ and $m$ along with their uncertainties:
End of explanation
xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
Explanation: We can again plot the raw data and best fit line:
End of explanation
npoints = 20
Atrue = 10.0
Btrue = -0.2
xdata = np.linspace(0.0, 20.0, npoints)
dy = np.random.normal(0.0, 0.1, size=npoints)
ydata = Atrue*np.exp(Btrue*xdata) + dy
Explanation: Non-linear models
So far we have been using a linear model $y_{model}(x) = m x +b$. Remember this model was linear, not because of its dependence on $x$, but on $b$ and $m$. A non-linear model will have a non-linear dependece on the model parameters. Examples are $A e^{B x}$, $A \cos{B x}$, etc. In this section we will generate data for the following non-linear model:
$$y_{model}(x) = Ae^{Bx}$$
and fit that data using curve_fit. Let's start out by using this model to generate a data set to use for our fitting:
End of explanation
plt.plot(xdata, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y');
Explanation: Plot the raw data:
End of explanation
def exp_model(x, A, B):
return A*np.exp(x*B)
Explanation: Let's see if we can use non-linear regression to recover the true values of our model parameters. First define the model:
End of explanation
theta_best, theta_cov = opt.curve_fit(exp_model, xdata, ydata)
Explanation: Then use curve_fit to fit the model:
End of explanation
print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('B = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
Explanation: Our optimized parameters are close to the true values of $A=10$ and $B=-0.2$:
End of explanation
xfit = np.linspace(0,20)
yfit = exp_model(xfit, theta_best[0], theta_best[1])
plt.plot(xfit, yfit)
plt.plot(xdata, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y');
Explanation: Plot the raw data and fitted model:
End of explanation |
11,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives
Step1: Let us make sure that the artifact store exists
Step2: Creating the KFP CLI builder for Vertex AI
Review the Dockerfile describing the KFP CLI builder
Step3: Build the image and push it to your project's Container Registry.
Step4: Understanding the Cloud Build workflow.
Review the cloudbuild_vertex.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex | Python Code:
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = "us-central1"
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
Explanation: CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives:
1. Learn how to create a custom Cloud Build builder to pilote Vertex AI Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger that launches a new run of the Kubeflow Pipeline
In this lab you will walk through authoring of a Cloud Build CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Configuring environment settings
End of explanation
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
Explanation: Let us make sure that the artifact store exists:
End of explanation
!cat kfp-cli_vertex/Dockerfile
Explanation: Creating the KFP CLI builder for Vertex AI
Review the Dockerfile describing the KFP CLI builder
End of explanation
KFP_CLI_IMAGE_NAME = "kfp-cli-vertex"
KFP_CLI_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest"
KFP_CLI_IMAGE_URI
!gcloud builds submit --timeout 15m --tag {KFP_CLI_IMAGE_URI} kfp-cli_vertex
Explanation: Build the image and push it to your project's Container Registry.
End of explanation
SUBSTITUTIONS = f"_REGION={REGION},_PIPELINE_FOLDER=./"
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
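# Editor's sketch (not part of the original lab): one possible way to create the GitHub
# tag trigger referred to in the learning objectives. Repository name, owner and tag
# pattern are placeholders, and the exact flags may vary with your gcloud version.
# !gcloud beta builds triggers create github \
#     --repo-name=<your-repo> --repo-owner=<your-github-user> \
#     --tag-pattern="v.*" --build-config=cloudbuild_vertex.yaml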
Explanation: Understanding the Cloud Build workflow.
Review the cloudbuild_vertex.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex:
1. Builds the trainer image
1. Compiles the pipeline
1. Uploads and run the pipeline to the Vertex AI Pipeline environment
1. Pushes the trainer to your project's Container Registry
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates KFP CLI.
Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command.
End of explanation |
11,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting the H.E.S.S. Crab spectrum with iminuit and emcee
As an example of a chi^2 fit, we use the flux points from the Crab nebula
as measured by H.E.S.S. in 2006
Step1: The data
We start by loading the flux points from a text file.
It is of course possible to load this data using just Python
or Numpy, but we'll use the pandas.read_table function
because it's very flexible, i.e. by setting a few arguments you'll
be able to load most ascii tables.
Step2: The model
In the paper they fit a power-law with an exponential cutoff and find the following parameters (see row "all" in table 6)
Step3: Plot data and model
Let's plot the data and model and compare to Figure 18b
from the paper ...
Step4: The likelihood
In this case we'll use a chi^2 likelihood function to
fit the model to the data.
Note that the likelihood function combines the data and model, and just depends on the model parameters that
shall be estimated (whereas the model function flux_ecpl has an extra parameter energy).
Also note that we're accessing data and model flux_ecpl from the global scope instead of passing them in explicitly as parameters. Modeling and fitting frameworks like e.g. Sherpa have more elaborate ways to combine data and models and likelihood functions, but for simple, small code examples like we do here, using the global scope to tie things together works just fine.
Step5: ML fit with Minuit
Let's use Minuit to do a maximum likelihood (ML) analysis.
Note that this is not what they did in the paper (TODO
Step6: Analysis with emcee
Should we only do Bayesian analysis? (posterior sampling)
Or should we start with a ML analysis (likelihood sampling and compare with iminuit) | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Fitting the H.E.S.S. Crab spectrum with iminuit and emcee
As an example of a chi^2 fit, we use the flux points from the Crab nebula
as measured by H.E.S.S. in 2006: http://adsabs.harvard.edu/abs/2006A%26A...457..899A
End of explanation
data = pd.read_table('spectrum_crab_hess_2006.txt',
comment='#', sep='\s*', engine='python')
data
Explanation: The data
We start by loading the flux points from a text file.
It is of course possible to load this data using just Python
or Numpy, but we'll use the pandas.read_table function
because it's very flexible, i.e. by setting a few arguments you'll
be able to load most ascii tables.
End of explanation
def flux_ecpl(energy, flux1, gamma, energy_cut):
return flux1 * energy ** (-gamma) * np.exp(-energy / energy_cut)
Explanation: The model
In the paper they fit a power-law with an exponential cutoff and find the following parameters (see row "all" in table 6):
* gamma = 2.39 +- 0.03
* energy_cut = 14.3 +- 2.1 TeV
* flux1 = (3.76 +- 0.07) x 1e-11 cm^-2 s^-1 TeV^-1
The flux1 is the differential flux at 1 TeV.
Let's code up that model ...
TODO: extend this tutorial to also consider a power-law model and compare the two models via chi2 / ndf.
End of explanation
energy = np.logspace(-0.5, 1.6, 100)
flux = flux_ecpl(energy, flux1=3.76e-11, gamma=2.39, energy_cut=14.3)
plt.plot(energy, flux)
plt.errorbar(x=data['energy'],
y=data['flux'],
yerr=data['flux_err'],
fmt='.'
)
plt.loglog();
Explanation: Plot data and model
Let's plot the data and model and compare to Figure 18b
from the paper ...
End of explanation
def chi2(flux1, gamma, energy_cut):
energy = data['energy']
flux_model = flux_ecpl(energy, flux1, gamma, energy_cut)
chi = (data['flux'] - flux_model) / data['flux_err']
return np.sum(chi ** 2)
# TODO: visualise the likelihood as a 1D profile or
# 2D contour to check that the implementation is OK
# before fitting. E.g. reproduce Fig 19 from the paper?
# Maybe talk about how chi2 differences relate to
# confidence levels here?
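# Editor's sketch of one way to address the TODO above (not from the original notebook):
# a 1D chi2 profile in gamma, with the other parameters fixed at the published values.
gammas = np.linspace(2.2, 2.6, 50)
chi2_profile = [chi2(3.76e-11, g, 14.3) for g in gammas]
plt.plot(gammas, chi2_profile)
plt.xlabel('gamma')
plt.ylabel('chi^2')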
Explanation: The likelihood
In this case we'll use a chi^2 likelihood function to
fit the model to the data.
Note that the likelihood function combines the data and model, and just depends on the model parameters that
shall be estimated (whereas the model function flux_ecpl has an extra parameter energy).
Also note that we're accessing data and model flux_ecpl from the global scope instead of passing them in explicitly as parameters. Modeling and fitting frameworks like e.g. Sherpa have more elaborate ways to combine data and models and likelihood functions, but for simple, small code examples like we do here, using the global scope to tie things together works just fine.
End of explanation
from iminuit import Minuit
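# Editor's sketch of one possible way to fill in the TODO below (not from the original
# notebook); assumes the iminuit >= 2.0 API.
m = Minuit(chi2, flux1=4e-11, gamma=2.5, energy_cut=10)
m.errordef = Minuit.LEAST_SQUARES   # chi2-style cost function
m.migrad()                          # run the minimiser
m.hesse()                           # estimate parameter uncertainties
print(m.values, m.errors)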
# TODO
Explanation: ML fit with Minuit
Let's use Minuit to do a maximum likelihood (ML) analysis.
Note that this is not what they did in the paper (TODO: check), so it's not surprising if best-fit results
are a little different.
End of explanation
import emcee
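# Editor's sketch of one possible way to fill in the TODO below (not from the original
# notebook); assumes the emcee >= 3 API, flat priors, and reuses the chi2 defined above.
def log_prob(theta):
    flux1, gamma, energy_cut = theta
    if flux1 <= 0 or energy_cut <= 0:
        return -np.inf              # crude positivity bounds as a flat prior
    return -0.5 * chi2(flux1, gamma, energy_cut)
ndim, nwalkers = 3, 32
p0 = np.array([3.76e-11, 2.39, 14.3]) * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)
samples = sampler.get_chain(discard=200, flat=True)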
# TODO
Explanation: Analysis with emcee
Should we only do Bayesian analysis? (posterior sampling)
Or should we start with a ML analysis (likelihood sampling and compare with iminuit)
End of explanation |
11,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Tabular-Output" data-toc-modified-id="Tabular-Output-1"><span class="toc-item-num">1 </span>Tabular Output</a></span></li></ul></div>
Tabular Output
In addition to waveforms, myhdlpeek also lets you display the captured traces as tables.
To demonstrate, I'll use our old friend the multiplexer
Step2: Once the simulation has run, I can display the results using a table instead of waveforms
Step3: I can use the same options for tabular output that are available for showing waveforms
Step4: There's even a version for use in console mode (outside of the Jupyter environment) | Python Code:
from myhdl import *
from myhdlpeek import Peeker # Import the myhdlpeeker module.
def mux(z, a, b, sel):
A simple multiplexer.
@always_comb
def mux_logic():
if sel == 1:
z.next = a # Signal a sent to mux output when sel is high.
else:
z.next = b # Signal b sent to mux output when sel is low.
return mux_logic
# Create some signals to attach to the multiplexer.
a, b, z = [Signal(0) for _ in range(3)] # Integer signals for the inputs & output.
sel = Signal(intbv(0)[1:]) # Binary signal for the selector.
# Create some Peekers to monitor the multiplexer I/Os.
Peeker.clear() # Clear any existing Peekers. (Start with a clean slate.)
Peeker(a, 'a') # Add a Peeker to the a input.
Peeker(b, 'b') # Add a Peeker to the b input.
Peeker(z, 'z') # Add a peeker to the z output.
Peeker(sel, 'select') # Add a Peeker to the select input. The Peeker label doesn't have to match the signal name.
# Instantiate mux.
mux_1 = mux(z, a, b, sel)
# Apply random patterns to the multiplexer.
from random import randrange
def test():
for _ in range(8):
a.next, b.next, sel.next = randrange(8), randrange(8), randrange(2)
yield delay(1)
# Simulate the multiplexer, testbed and the peekers.
sim = Simulation(mux_1, test(), *Peeker.instances()).run()
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Tabular-Output" data-toc-modified-id="Tabular-Output-1"><span class="toc-item-num">1 </span>Tabular Output</a></span></li></ul></div>
Tabular Output
In addition to waveforms, myhdlpeek also lets you display the captured traces as tables.
To demonstrate, I'll use our old friend the multiplexer:
End of explanation
Peeker.to_html_table()
Explanation: Once the simulation has run, I can display the results using a table instead of waveforms:
End of explanation
Peeker.to_html_table('select a b z', start_time=3) # Select and change order of signals, and set start time.
Explanation: I can use the same options for tabular output that are available for showing waveforms:
End of explanation
Peeker.to_text_table('select a b z')
Explanation: There's even a version for use in console mode (outside of the Jupyter environment):
End of explanation |
11,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LSTM text generation from Nietzsche's writings
The original script is here. It has the following message regarding speed
Step1: Get the data
Step2: Build the neural network
Step3: Train the network and output some text at each step | Python Code:
# Imports
from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.layers import LSTM
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
Explanation: LSTM text generation from Nietzsche's writings
The original script is here. It has the following message regarding speed:
At least 20 epochs are required before the generated text
starts sounding coherent.
It is recommended to run this script on GPU, as recurrent
networks are quite computationally intensive.
If you try this script on new data, make sure your corpus
has at least ~100k characters. ~1M is better.
End of explanation
# Get the data
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read().lower()
print('corpus length:', len(text))
chars = set(text)
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
Explanation: Get the data
End of explanation
# build the model: 2 stacked LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(LSTM(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
Explanation: Build the neural network
End of explanation
def sample(a, temperature=1.0):
# helper function to sample an index from a probability array
a = np.log(a) / temperature
a = np.exp(a) / np.sum(np.exp(a))
return np.argmax(np.random.multinomial(1, a, 1))
# train the model, output generated text after each iteration
for iteration in range(1, 60):
print()
print('-' * 50)
print('Iteration', iteration)
model.fit(X, y, batch_size=128, nb_epoch=1)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
Explanation: Train the network and output some text at each step
End of explanation |
11,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Formulas
Step1: Import convention
You can import explicitly from statsmodels.formula.api
Step2: Alternatively, you can just use the formula namespace of the main statsmodels.api.
Step3: Or you can use the following convention
Step4: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
Step5: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature
Step6: Fit the model
Step7: Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator
Step8: Patsy's mode advanced features for categorical variables are discussed in
Step9: Multiplicative interactions
"
Step10: Many other things are possible with operators. Please consult the patsy docs to learn more.
Functions
You can apply vectorized functions to the variables in your model
Step11: Define a custom function
Step12: Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays
Step13: To generate pandas data frames | Python Code:
import numpy as np # noqa:F401 needed in namespace for patsy
import statsmodels.api as sm
Explanation: Formulas: Fitting models using R-style formulas
Since version 0.5.0, statsmodels allows users to fit statistical models using R-style formulas. Internally, statsmodels uses the patsy package to convert formulas and data to the matrices that are used in model fitting. The formula framework is quite powerful; this tutorial only scratches the surface. A full description of the formula language can be found in the patsy docs:
Patsy formula language description
Loading modules and functions
End of explanation
from statsmodels.formula.api import ols
Explanation: Import convention
You can import explicitly from statsmodels.formula.api
End of explanation
sm.formula.ols
Explanation: Alternatively, you can just use the formula namespace of the main statsmodels.api.
End of explanation
import statsmodels.formula.api as smf
Explanation: Or you can use the following convention
End of explanation
sm.OLS.from_formula
Explanation: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
End of explanation
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[["Lottery", "Literacy", "Wealth", "Region"]].dropna()
df.head()
Explanation: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature: (formula, data, subset=None, *args, **kwargs)
OLS regression using formulas
To begin, we fit the linear model described on the Getting Started page. Download the data, subset columns, and list-wise delete to remove missing observations:
End of explanation
mod = ols(formula="Lottery ~ Literacy + Wealth + Region", data=df)
res = mod.fit()
print(res.summary())
Explanation: Fit the model:
End of explanation
res = ols(formula="Lottery ~ Literacy + Wealth + C(Region)", data=df).fit()
print(res.params)
Explanation: Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator:
End of explanation
res = ols(formula="Lottery ~ Literacy + Wealth + C(Region) -1 ", data=df).fit()
print(res.params)
Explanation: Patsy's mode advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables
Operators
We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix.
Removing variables
The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by:
End of explanation
res1 = ols(formula="Lottery ~ Literacy : Wealth - 1", data=df).fit()
res2 = ols(formula="Lottery ~ Literacy * Wealth - 1", data=df).fit()
print(res1.params, "\n")
print(res2.params)
Explanation: Multiplicative interactions
":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
End of explanation
res = smf.ols(formula="Lottery ~ np.log(Literacy)", data=df).fit()
print(res.params)
Explanation: Many other things are possible with operators. Please consult the patsy docs to learn more.
Functions
You can apply vectorized functions to the variables in your model:
End of explanation
def log_plus_1(x):
return np.log(x) + 1.0
res = smf.ols(formula="Lottery ~ log_plus_1(Literacy)", data=df).fit()
print(res.params)
Explanation: Define a custom function:
End of explanation
import patsy
f = "Lottery ~ Literacy * Wealth"
y, X = patsy.dmatrices(f, df, return_type="matrix")
print(y[:5])
print(X[:5])
Explanation: Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays:
End of explanation
f = "Lottery ~ Literacy * Wealth"
y, X = patsy.dmatrices(f, df, return_type="dataframe")
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary())
Explanation: To generate pandas data frames:
End of explanation |
11,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read Loans csv and Create test/train csv files
Step1: Read and process train and test dataframes
Step2: Model Tuning with skopt
Step3: GBM best results - sorted
Step4: XGB best results - sorted
Step5: Tune XGBoost Model manually with CV
Step6: BELOW HERE IS MISC STUFF
Step7: consolidation checking
Step8: Create Two Predictions files from test.csv | Python Code:
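# Editor's note: the import cell is not included in this excerpt; the names below are
# assumed from how they are used in the cells that follow.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier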
%%time
print('Reading: loan_stat542.csv into loans dataframe...')
loans = pd.read_csv('loan_stat542.csv')
print('Loans dataframe:', loans.shape)
test_ids = pd.read_csv('Project3_test_id.csv', dtype={'test1':int,'test2':int, 'test3':int,})
print('ids dataframe:', test_ids.shape)
trains = []
tests = []
labels = []
for i, col in enumerate(test_ids.columns):
trains.append(loans.loc[~loans.id.isin(test_ids[col]),:])
tests.append( loans.loc[ loans.id.isin(test_ids[col]), loans.columns!='loan_status'])
labels.append(loans.loc[ loans.id.isin(test_ids[col]), ['id','loan_status']])
labels[i]["y"] = (labels[i].loan_status != 'Fully Paid').astype(int)
labels[i].drop('loan_status', axis=1, inplace=True)
print('Fold', i, trains[i].shape, tests[i].shape, labels[i].shape)
print('Writing train, test, labels csv files...')
# fold=0
# _ = trains[fold].to_csv('train.csv', index=False)
# _ = tests [fold].to_csv('test.csv', index=False)
# _ = labels[fold].to_csv('label.csv', index=False)
print('Done!')
Explanation: Read Loans csv and Create test/train csv files
End of explanation
def process(data):
data['emp_length'] = data.emp_length.fillna('Unknown').str.replace('<','LT')
data['dti'] = data.dti.fillna(0)
data['revol_util'] = data.revol_util.fillna(0)
data['mort_acc'] = data.mort_acc.fillna(0)
data['pub_rec_bankruptcies'] = data.pub_rec_bankruptcies.fillna(0)
temp = pd.to_datetime(data.earliest_cr_line)
data['earliest_cr_line'] = temp.dt.year*12 - 1950*12 + temp.dt.month
data.drop(['emp_title','title','zip_code','grade','fico_range_high'], axis=1, inplace=True)
return data
def logloss(y, p):
loglosses = np.where(y==1, -np.log(p+1e-15), -np.log(1-p+1e-15))
return np.mean(loglosses)
def prep_train_test(train, test):
train = process(train)
X_train = train.drop(['loan_status'], axis=1)
X_train = pd.get_dummies(X_train) # create dataframe with dummy variables replacing categoricals
X_train = X_train.reindex(sorted(X_train.columns), axis=1) # sort columns to be in same sequence as test
y_train = (train.loan_status!='Fully Paid').astype(int)
test = process(test)
X_test = pd.get_dummies(test) # create dataframe with dummy variables replacing categoricals
all_columns = X_train.columns.union(X_test.columns) # add columns to test that are in train but not test
X_test = X_test.reindex(columns=all_columns).fillna(0)
X_test = X_test.reindex(sorted(X_train.columns), axis=1) # sort columns to be in same sequence at train
return X_train, y_train, X_test
%%time
import time
seed=42
models = [
LogisticRegression(penalty='l1',C=1, random_state=seed),
# GradientBoostingClassifier(max_features='sqrt', learning_rate=0.055, n_estimators=780, max_depth=7,
# min_samples_leaf=2, subsample=0.9, min_samples_split=4,
# min_weight_fraction_leaf=0, random_state=seed),
xgb.XGBClassifier(learning_rate=0.037, n_estimators=860, min_child_weight=8, max_depth=7, gamma=0.3,
subsample=0.52, colsample_bytree=0.92, reg_lambda=0.67, reg_alpha=0.03,
objective= 'binary:logistic', n_jobs=-1, random_state=seed, eval_metric='logloss'),
]
num_models, num_folds = len(models), len(test_ids.columns)
errors = np.zeros([num_models, num_folds])
for fold in range(num_folds):
np.random.seed(seed=seed)
train = trains[fold].copy()
test = tests [fold].copy()
label = labels[fold].copy()
fraction = 1
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction*4, random_state=seed)
print()
X_train, y_train, X_test = prep_train_test(train, test)
# print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
for i, model in enumerate(models):
start_time = time.time()
_ = model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:,1]
y_test = pd.merge(tests[fold][['id']], label, how='left', on='id')
errors[i, fold] = logloss(y_test.y, probs)
print('{:26.26} Fold={}, Runime={:8.2f} seconds, logloss={:8.5f}'.format(
type(model).__name__, fold, round(time.time()-start_time,2), errors[i,fold]))
# df = pd.DataFrame({'id': test.id, 'prob': probs.round(5)})
# df.to_csv('mysubmission'+str(i+1)+'.txt', index=False)
# print('Created mysubmission'+str(i+1)+'.txt, rows=', df.shape[0])
print("\nSUMMARY:")
for i, m in enumerate(models):
print('{:26.26} mean logloss={:8.5f}'.format(type(m).__name__, errors[i,:].mean()))
Explanation: Read and process train and test dataframes
End of explanation
from skopt import gp_minimize, gbrt_minimize
from skopt.plots import plot_convergence
import datetime, warnings
def objective(values):
index = str(values)
if index in cache:
print('GET FROM CACHE:', index, round(cache[index],4))
return cache[index]
if model_type == 'LogisticRegression':
params = {'penalty': values[0], 'C': values[1],}
model = LogisticRegression(**params, random_state=seed, n_jobs=-1)
# if model_type == 'RandomForestClassifier':
# params = {'n_estimators': values[0], 'max_features': values[1], 'max_depth': values[2],}
# model = RandomForestClassifier(**params, n_jobs=-1)
if model_type == 'GradientBoostingClassifier':
params = {'learning_rate': values[0], 'n_estimators': values[1], 'max_depth': values[2],
'min_samples_split': values[3], 'min_samples_leaf': values[4],
'min_weight_fraction_leaf' : values[5], 'subsample': values[6], 'max_features': values[7] }
model = GradientBoostingClassifier(**params, random_state=seed)
if model_type == 'XGBClassifier':
params = {'learning_rate': values[0], 'n_estimators': int(values[1]), 'min_child_weight': int(values[2]),
'max_depth': int(values[3]), 'gamma': values[4], 'subsample': values[5],
'colsample_bytree': values[6], 'lambda': values[7], 'alpha': values[8], 'eval_metric':'logloss'}
model = xgb.XGBClassifier(**params, random_state=seed, nthread=-1, n_jobs=-1,silent=1)
print(datetime.datetime.now().time().replace(microsecond=0), ', Params',params)
# scores = -cross_val_score(model, X_train, y_train, scoring="neg_log_loss", cv=5, n_jobs=-1)
_ = model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:,1]
y_test = pd.merge(test[['id']], label, how='left', on='id')
cache[index] = np.mean( logloss(y_test.y, probs) )
return cache[index]
# %%time
import warnings
np.random.seed(seed)
warnings.filterwarnings("ignore", category=UserWarning) # turn off already evaluated errors
params={'LogisticRegression': [
['l1',],
(1e-1,1e+1,'uniform'),
],
'GradientBoostingClassifier': [
(0.04, 0.10, 'uniform'), # learning rate
(500, 900), # n_estimators
(3, 7), # max_depth
(2, 5), # min_samples_split
(2, 5), # min_samples_leaf
(0, 0.3), # min_weight_fraction_leaf
(0.8, 1.0,'uniform'), # subsample
('sqrt',), # max_features
],
'XGBClassifier': [
(0.01, 0.05, 'uniform'), # learning_rate 0.05, 0.3,
(300, 700), # n_estimators
(5, 9), # min_child_weight
(4, 8), # max_depth 3-10
(0, 0.5, 'uniform'), # gamma 0-0.4
(0.4, 1.0, 'uniform'), # subsample 0.5 - 0.99
(0.8, 1.0, 'uniform'), # colsample_bytree 0.5 - 0.99
(0.8, 1.0, 'uniform'), # reg_lambda
(0.0, 0.5, 'uniform'), # reg_alpha
],}
train = trains[0].copy()
test = tests[0].copy()
label = labels[0].copy()
fraction = 1
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction, random_state=seed)
print(train.shape)
X_train, y_train, X_test = prep_train_test(train, test)
print(X_train.shape, y_train.shape, X_test.shape)
model_types = params.keys()
model_types = ['GradientBoostingClassifier']
for model_type in model_types:
cache = {}
space = params[model_type]
result = gbrt_minimize(objective,space,n_random_starts=15, n_calls=300, random_state=seed,verbose=True,n_jobs=-1)
print('\n', model_type, ', Best Params=', result.x, ' Best Score=', round(result.fun,6),'\n')
_ = plt.figure(figsize=(15,8))
_ = plot_convergence(result, yscale='log')
warnings.filterwarnings("default", category=UserWarning) # turn on already evaluated errors
Explanation: Model Tuning with skopt
End of explanation
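The optimiser only returns the best parameter values in result.x, in the same order as the search space, so a final model still has to be rebuilt by hand. A hypothetical follow-up sketch (variable names gbm_names, best_params and best_model are mine, not the notebook's) for the GradientBoostingClassifier space, the only entry left in model_types above:
gbm_names = ['learning_rate', 'n_estimators', 'max_depth', 'min_samples_split',
             'min_samples_leaf', 'min_weight_fraction_leaf', 'subsample', 'max_features']
best_params = dict(zip(gbm_names, result.x))   # order must match the `space` list that was tuned
best_model = GradientBoostingClassifier(**best_params, random_state=seed)
_ = best_model.fit(X_train, y_train)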
sorted_d = sorted(cache.items(), key=lambda x: x[1])
temp = []
for i in range(len(sorted_d)):
temp.append((sorted_d[i][0], round(sorted_d[i][1],5)))
print('{} {}'.format(round(sorted_d[i][1],5), sorted_d[i][0]))
Explanation: GBM best results - sorted
End of explanation
sorted_d = sorted(cache.items(), key=lambda x: x[1])
temp = []
for i in range(len(sorted_d)):
temp.append((sorted_d[i][0], round(sorted_d[i][1],5)))
print('{} {}'.format(round(sorted_d[i][1],5), sorted_d[i][0]))
Explanation: XGB best results - sorted
End of explanation
%%time
XGB_params = {
'learning_rate': np.linspace(0.05, 1, 1), # 0.03 to 0.2, tuned to 0.05
'min_child_weight': np.linspace(5, 10, 1, dtype=int), # 1 to 6, tuned to 2
'max_depth': np.linspace(4, 8, 1, dtype=int), # 3 to 10, tuned to 3
'gamma': np.linspace(0.2, 0.4, 1), # 0 to 0.4, tuned to 0
'subsample': np.linspace(1, 1, 1), # 0.6 to 1, tuned to 1.0
'colsample_bytree': np.linspace(0.75, 1, 1), # 0.6 to 1, tuned to 0.6
'reg_lambda': np.linspace(0.25, 0.6, 1), # 0 to 1, tuned to 1.0
'reg_alpha': np.linspace(0.15, 0.5, 1), # 0 to 1, tuned to 0.5
'silent': [1,],
}
train = trains[0].copy()
test = tests[0].copy()
fraction = 0.05
if fraction < 1:
train = train.sample(frac=fraction, random_state=seed)
test = test.sample(frac=fraction*4, random_state=seed)
print(train.shape, test.shape)
X_train, y_train, X_test= prep_train_test(train, test)
print(X_train.shape, y_train.shape, X_test.shape)
tune_cv=False # perform CV just to get number of boost rounds/estimators before using GridSearchCV
if tune_cv:
xgtrain = xgb.DMatrix(X_train, label=y_train)
    cv_params = {'objective': 'binary:logistic', 'learning_rate': 0.05, 'min_child_weight': 5,
                 'max_depth': 4, 'gamma': 0.2, 'subsample': 1.0, 'colsample_bytree': 0.75,
                 'reg_lambda': 0.25, 'reg_alpha': 0.15, 'silent': 1}
    cvresult = xgb.cv(cv_params, xgtrain, num_boost_round=1000, early_stopping_rounds=50, nfold=10,
                      metrics='logloss', verbose_eval=10, show_stdv=False)
else:
XGBbase = xgb.XGBClassifier(n_estimators=150, objective= 'binary:logistic', random_state=seed, n_jobs=-1)
XGBmodel = GridSearchCV(XGBbase, cv=10, n_jobs=-1, param_grid=XGB_params, scoring='neg_log_loss' ,verbose=5)
_ = XGBmodel.fit(X_train, y_train)
print(XGBmodel.best_estimator_)
print(XGBmodel.best_params_)
print(-round(XGBmodel.best_score_,6))
# fig, ax = plt.subplots(1,1,figsize=(16,26))
# _ = xgb.plot_importance(model, ax=ax)
Explanation: Tune XGBoost Model manually with CV
End of explanation
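When the tune_cv branch is used, xgb.cv returns one row of evaluation results per boosting round, so with early stopping the number of rows gives a reasonable n_estimators to carry into GridSearchCV. A small sketch of reading that off (assuming cvresult from the cell above):
if tune_cv:
    best_rounds = cvresult.shape[0]   # rows kept = boosting rounds retained after early stopping
    print('Best number of boosting rounds:', best_rounds)
    print(cvresult.tail(1))           # logloss at the final retained round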
# rejected models
# KNeighborsClassifier(500), # tuned on 5% of data
# SVC(gamma='auto', C=1, probability=True, random_state=seed), # quite slow
# DecisionTreeClassifier(max_depth=2, random_state=seed),
# RandomForestClassifier(n_estimators=300, random_state=seed),
# AdaBoostClassifier(random_state=seed),
# GaussianNB(),
# LinearDiscriminantAnalysis(),
temp = X_train.corr()
temp = temp[temp>0.50]
np.fill_diagonal(temp.values, np.nan)  # blank out the self-correlation diagonal
temp = temp.dropna(how='all', axis=0)
temp = temp.dropna(how='all', axis=1)
temp
m1 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission1.txt', index=False)
print('Created mysubmission1.txt, rows=', df.shape[0], ', Model=', type(m1).__name__)
m2 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission2.txt', index=False)
print('Created mysubmission2.txt, rows=', df.shape[0], ', Model=', type(m2).__name__)
m3 = models[-1]
df = pd.DataFrame({'id': test.id, 'prob': probs.round(11)})
df.to_csv('mysubmission3.txt', index=False)
print('Created mysubmission3.txt, rows=', df.shape[0], ', Model=', type(m3).__name__)
Explanation: BELOW HERE IS MISC STUFF
End of explanation
%%time
loans['emp_title'] = loans.emp_title.fillna('_unknown').str.lower()
loans['emp_title1'] = categorise_emp_title(loans.emp_title)
loans['default'] = (loans.loan_status!='Fully Paid').astype(int)
g1 = loans.groupby(['emp_title','emp_title1'])['default'].agg(['sum','count'])
g1.columns = ['sum1','count1']
g1['rate1'] = g1['sum1'] / g1['count1']
g1 = g1.sort_values("rate1", ascending=False)
Explanation: consolidation checking
End of explanation
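A possible follow-up sketch (not in the original notebook) that checks the same default rates at the consolidated emp_title1 level, to see whether the grouping separates high- and low-risk titles:
g2 = loans.groupby('emp_title1')['default'].agg(['mean', 'count'])
g2.columns = ['rate1', 'count1']
g2 = g2.sort_values('rate1', ascending=False)
g2.head(20)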
features = X_train.columns
importances = model.feature_importances_
indices = np.argsort(importances*-1)
num_features = 40
indices = indices[0:num_features]
_ = plt.figure(figsize=(16,7))
_ = plt.title('XGBoost: Most Important Features')
_ = plt.bar(range(len(indices)), importances[indices], color='steelblue')
_ = plt.xticks(range(len(indices)), [features[i] for i in indices], rotation=85, fontsize = 14)
_ = plt.ylabel('Relative Importance')
_ = plt.xlim(-1,num_features-1)
_ = plt.tight_layout()
_ = plt.savefig('features.png',dpi=100)
np.random.seed(seed)
# TRAIN ELASTICNET MODEL AND GENERATE SUBMISSION FILE
model = ElasticNet(alpha=0.001, l1_ratio=0.4, max_iter=5000, tol=0.0001, random_state=seed)
_ = model.fit(X_train, y_train)
enet_preds = model.predict(X_test)
enet_df = pd.DataFrame({'PID': y_test.index, 'Sale_Price': np.expm1(enet_preds).round(1)})
enet_df.to_csv('mysubmission1.txt', index=False)
print('Created mysubmission1.txt, rows=', enet_df.shape[0],
      ', Model=', type(model).__name__, ', RMSElogPrice =', round( rmse(y_test, enet_preds),6 ))
# TRAIN GBM MODEL AND GENERATE SUBMISSION FILE
model = GradientBoostingRegressor(learning_rate=0.03, n_estimators=550, max_depth=5, min_samples_split=4,
            min_samples_leaf=3, min_weight_fraction_leaf=0, subsample=0.64, max_features='sqrt', random_state=seed)
_ = model.fit(X_train, y_train)
gbm_preds = model.predict(X_test)
gbm_df = pd.DataFrame({'PID': y_test.index, 'Sale_Price': np.expm1(gbm_preds).round(1)})
gbm_df.to_csv('mysubmission2.txt', index=False)
print('Created mysubmission2.txt, rows=', gbm_df.shape[0],
      ', Model=', type(model).__name__, ', RMSElogPrice =', round( rmse(y_test, gbm_preds),6 ))
# RE-READ SUBMISSION FILES AND CHECK FOR CORRECTNESS
temp = pd.read_csv('mysubmission1.txt')
print('\nChecking mysubmission1 file, RMSE=', round(rmse(np.log1p(temp.Sale_Price), y_test.values),6) )
temp = pd.read_csv('mysubmission2.txt')
print('Checking mysubmission2 file, RMSE=', round(rmse(np.log1p(temp.Sale_Price), y_test.values),6) )
Explanation: Create Two Predictions files from test.csv
End of explanation |
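The submission cells above rely on an rmse helper defined earlier in the notebook and not shown in this excerpt. A minimal sketch of an equivalent function (an assumption, not the original definition):
def rmse(y_true, y_pred):
    # Root mean squared error on the (log-transformed) target.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))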
11,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook explores the speed of searching for values in sets and lists.
After reading this notebook, watch Brandon Rhodes' videos All Your Ducks In A Row
Step1: Notice that the difference between searching small sets and large sets is not large. This is the magic of Python sets and dictionaries. Read the hash table Wikipedia article for an explanation of how this works. | Python Code:
def make_list(n):
    # Flip this flag to time searches over integers (True) or strings (False).
    if True:
        return list(range(n))
    else:
        return list(str(i) for i in range(n))
n = int(25e6)
# n = 5
m = (0, n // 2, n-1, n)
a_list = make_list(n)
a_set = set(a_list)
n, m
# Finding something that is in a set is fast.
# The key one is looking for has little effect on the speed.
beginning = 0
middle = n//2
end = n-1
%timeit beginning in a_set
%timeit middle in a_set
%timeit end in a_set
# Finding something that is _not_ in a set is also fast.
%timeit n in a_set
# Searching for something in a list
# starts at the beginning and compares each value.
# The search time depends on where the value is in the list.
# That can be slow.
beginning = 0
middle = n//2
end = n-1
%timeit beginning in a_list
%timeit middle in a_list
%timeit end in a_list
# Finding something that is not in a list is the worst case.
# It has to be compared to all values of the list.
%timeit n in a_list
max_exponent = 6
for n in (10 ** i for i in range(1, max_exponent+1)):
a_list = make_list(n)
a_set = set(a_list)
m = (0, n // 2, n-1, n)
for j in m:
print('length is %s, looking for %s' % (n, j))
%timeit j in a_set
Explanation: This notebook explores the speed of searching for values in sets and lists.
After reading this notebook, watch Brandon Rhodes' videos All Your Ducks In A Row: Data Structures in the Standard Library and Beyond and The Mighty Dictionary.
End of explanation
for n in (10 ** i for i in range(1, max_exponent+1)):
a_list = make_list(n)
a_set = set(a_list)
m = (0, n // 2, n-1, n)
for j in m:
print('length is %s, looking for %s' % (n, j))
%timeit j in a_list
Explanation: Notice that the difference between searching small sets and large sets is not large. This is the magic of Python sets and dictionaries. Read the hash table Wikipedia article for an explanation of how this works.
End of explanation |
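The practical takeaway, shown as a small sketch: if you are going to test membership many times, pay the one-off cost of building a set first and each lookup then takes roughly constant time.
needles = range(0, 1000)
haystack = make_list(10 ** 6)
haystack_set = set(haystack)                           # built once, O(len(haystack))
hits = sum(1 for x in needles if x in haystack_set)    # each membership test is ~O(1)
print(hits)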
11,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark Cluster Overview
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
Step1: This code runs on the 'driver' node
Step2: Create some data and distribute on the cluster 'executor' nodes
Step3: Run a function on the nodes and return the values back to the 'driver' node
Step4: Print all the values
Step5: Print out the unique values
Step6: Alternating Least Squares (ALS) Hello World
Load the data
Step7: Inspect the data
Step8: Load the data in spark
Step9: Predict how the user=1 would rate product=1
Step10: Predict the top (1) recommendations for all users. | Python Code:
import socket
Explanation: Spark Cluster Overview
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
End of explanation
print( "Hello World from " + socket.gethostname() )
Explanation: This code runs on the 'driver' node
End of explanation
rdd = spark.sparkContext.parallelize( range(0, 100) )
Explanation: Create some data and distribute on the cluster 'executor' nodes
End of explanation
rdd = rdd.map( lambda x: "Hello World from " + socket.gethostname() ).collect()
Explanation: Run a function on the nodes and return the values back to the 'driver' node
End of explanation
print( rdd )
Explanation: Print all the values
End of explanation
print( set(rdd) )
Explanation: Print out the unique values
End of explanation
! rm -f ratings.dat
! wget https://raw.githubusercontent.com/snowch/movie-recommender-demo/master/web_app/data/ratings.dat
Explanation: Alternating Least Squares (ALS) Hello World
Load the data
End of explanation
! head -3 ratings.dat
! echo
! tail -3 ratings.dat
Explanation: Inspect the data
End of explanation
from pyspark.mllib.recommendation import Rating
ratingsRDD = sc.textFile('ratings.dat') \
.map(lambda l: l.split("::")) \
.map(lambda p: Rating(
user = int(p[0]),
product = int(p[1]),
rating = float(p[2]),
)).cache()
from pyspark.mllib.recommendation import ALS
# set some values for the parameters
# these should be ascertained via experimentation
rank = 5
numIterations = 20
lambdaParam = 0.1
model = ALS.train(ratingsRDD.toDF(), rank, numIterations, lambdaParam)
Explanation: Load the data in spark
End of explanation
model.predict(user=1, product=1)
Explanation: Predict how the user=1 would rate product=1
End of explanation
model.recommendProductsForUsers(1).toDF().collect()
Explanation: Predict the top (1) recommendations for all users.
End of explanation |
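A follow-up sketch (not part of the original notebook): measure how well the model reproduces the known ratings by computing the mean squared error over the training data, in the style of the standard MLlib ALS example.
user_products = ratingsRDD.map(lambda r: (r.user, r.product))
predictions = model.predictAll(user_products).map(lambda r: ((r.user, r.product), r.rating))
rates_and_preds = ratingsRDD.map(lambda r: ((r.user, r.product), r.rating)).join(predictions)
mse = rates_and_preds.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean()
print("Mean Squared Error = " + str(mse))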
11,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fruitful Functions
Return Values
Some of the built-in functions we have used, such as the math functions, produce results. Calling the function generates a value, which we usually assign to a variable or use as part of an expression.
Step1: All of the functions we have written so far are void; they print something but their return value is None.
Now, we are (finally) going to write fruitful functions (functions with return values).
Step2: We have seen the return statement before, but in a fruitful function the return statement includes an expression. This statement means
Step3: On the other hand, temporary variables like temp often make debugging easier.
Sometimes it is useful to have multiple return statements, one in each branch of a conditional
Step4: Since these return statements are in an alternative conditional, only one will be executed. As soon as a return statement executes, the function terminates without executing any subsequent statements. Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code.
Try Yourself!
Write a compare function that returns 1 if x > y, 0 if x == y, and -1 if x < y.
Incremental Development
As you write larger functions, you might find yourself spending more time debugging.
To deal with increasingly complex programs, you might want to try a process called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time.
As an example, suppose you want to find the distance between two points, given by the
coordinates $(x_1, y_1)$ and $(x_2, y_2)$.
$$
distance = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
$$
The first step is to consider what a distance function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)?
In this case, the inputs are two points, which you can represent using four numbers. The return value is the distance, which is a floating-point value.
Already you can write an outline of the function
Step5: If the function is working, it should display 'dx is 3' and 'dy is 4' for values (1,2,4,6) which will result (3,4,5). If so, we know that the function is getting the right arguments and performing the first computation correctly. If not, there are only a few lines to check.
Next we compute the sum of squares of dx and dy
Step6: Again, you would run the program at this stage and check the output (which should be 25). Finally, you can use math.sqrt to compute and return the result
Step7: If that works correctly, you are done. Otherwise, you might want to print the value of result before the return statement.
The final version of the function doesn’t display anything when it runs; it only returns a value. The print statements we wrote are useful for debugging, but once you get the function working, you should remove them. Code like that is called scaffolding because it is helpful for building the program but is not part of the final product.
When you start out, you should add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger chunks. Either way, incremental development can save you a lot of debugging time.
The key aspects of the process are
Step8: Boolean Functions
Functions can return booleans, which is often convenient for hiding complicated tests inside functions. For example
Step9: It is common to give boolean functions names that sound like yes/no questions; is_divisible returns either True or False to indicate whether x is divisible by y.
Step10: The result of the == operator is a boolean, so we can write the function more concisely by returning it directly
Step11: Boolean functions are often used in conditional statements
Step12: Try Yourself!
Write a function is_between(x, y, z) that returns True if x <= y <= z or False
otherwise.
More Recursion
To give you an idea of what you can do with the tools you have learned so far, we’ll evaluate a few recursively defined mathematical functions. A recursive definition is similar to a circular definition, in the sense that the definition contains a reference to the thing being defined. A truly circular definition is not very useful
Step13: Let's define the fibonacci which is recursively defined mathematical function and one of the most common recursive introduction example
Step14: If you try to follow the flow of execution here, even for fairly small values of n, your head explodes. But according to the leap of faith, if you assume that the two recursive calls work correctly, then it is clear that you get the right result by adding them together.
Checking Types
What happens if we call factorial and give it 1.5 as an argument?
Step15: It looks like an infinite recursion. But how can that be? There is a base case—when n == 0. But if n is not an integer, we can miss the base case and recurse forever.
In the first recursive call, the value of n is 0.5. In the next, it is -0.5. From there, it gets smaller (more negative), but it will never be 0.
We have two choices. We can try to generalize the factorial function to work with floating-point numbers, or we can make factorial check the type of its argument. The first option is called the gamma function and it’s a little beyond the scope of this book. So we’ll go for the second.
We can use the built-in function isinstance to verify the type of the argument. While we’re at it, we can also make sure the argument is positive
Step16: The first base case handles nonintegers; the second catches negative integers. In both cases, the program prints an error message and returns None to indicate that something went wrong | Python Code:
import math
e = math.exp(1.0)
e
Explanation: Fruitful Functions
Return Values
Some of the built-in functions we have used, such as the math functions, produce results. Calling the function generates a value, which we usually assign to a variable or use as part of an expression.
End of explanation
# Function that returns the area of a circle
def area(radius):
temp = math.pi * radius**2
return temp
Explanation: All of the functions we have written so far are void; they print something but their return value is None.
Now, we are (finally) going to write fruitful functions (functions with return values).
End of explanation
def area(radius):
return math.pi * radius**2
Explanation: We have seen the return statement before, but in a fruitful function the return statement includes an expression. This statement means: “Return immediately from this function and use the following expression as a return value.” The expression can be arbitrarily complicated, so we could have written this function more concisely:
End of explanation
def absolute_value(x):
if x < 0:
return -x
else:
return x
Explanation: On the other hand, temporary variables like temp often make debugging easier.
Sometimes it is useful to have multiple return statements, one in each branch of a conditional:
End of explanation
def distance(x1, y1, x2, y2):
dx = x2 - x1
dy = y2 - y1
print('dx is', dx)
print('dy is', dy)
return 0.0
distance(1,2,4,6)
Explanation: Since these return statements are in an alternative conditional, only one will be executed. As soon as a return statement executes, the function terminates without executing any subsequent statements. Code that appears after a return statement, or any other place the flow of execution can never reach, is called dead code.
Try Yourself!
Write a compare function that returns 1 if x > y, 0 if x == y, and -1 if x < y.
Incremental Development
As you write larger functions, you might find yourself spending more time debugging.
To deal with increasingly complex programs, you might want to try a process called incremental development. The goal of incremental development is to avoid long debugging sessions by adding and testing only a small amount of code at a time.
As an example, suppose you want to find the distance between two points, given by the
coordinates $(x_1, y_1)$ and $(x_2, y_2)$.
$$
distance = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
$$
The first step is to consider what a distance function should look like in Python. In other words, what are the inputs (parameters) and what is the output (return value)?
In this case, the inputs are two points, which you can represent using four numbers. The return value is the distance, which is a floating-point value.
Already you can write an outline of the function:
Python
def distance(x1, y1, x2, y2):
return 0.0
Obviously, this version doesn’t compute distances; it always returns zero. But it is syntactically correct, and it runs, which means that you can test it before you make it more complicated.
We can start adding code to the body. A reasonable next step is to find the differences $x_2-x_1$ and $y_2-y_1$. The next version stores those values in temporary variables and prints them.
End of explanation
def distance(x1, y1, x2, y2):
dx = x2 - x1
dy = y2 - y1
dsquared = dx**2 + dy**2
print('dsquared is: ', dsquared)
return 0.0
distance(1,2,4,6)
Explanation: If the function is working, it should display 'dx is 3' and 'dy is 4' for values (1,2,4,6) which will result (3,4,5). If so, we know that the function is getting the right arguments and performing the first computation correctly. If not, there are only a few lines to check.
Next we compute the sum of squares of dx and dy:
End of explanation
def distance(x1, y1, x2, y2):
dx = x2 - x1
dy = y2 - y1
dsquared = dx**2 + dy**2
result = math.sqrt(dsquared)
return result
distance(1,2,4,6)
Explanation: Again, you would run the program at this stage and check the output (which should be 25). Finally, you can use math.sqrt to compute and return the result:
End of explanation
def circle_area(xc, yc, xp, yp):
return area(distance(xc, yc, xp, yp))
Explanation: If that works correctly, you are done. Otherwise, you might want to print the value of result before the return statement.
The final version of the function doesn’t display anything when it runs; it only returns a value. The print statements we wrote are useful for debugging, but once you get the function working, you should remove them. Code like that is called scaffolding because it is helpful for building the program but is not part of the final product.
When you start out, you should add only a line or two of code at a time. As you gain more experience, you might find yourself writing and debugging bigger chunks. Either way, incremental development can save you a lot of debugging time.
The key aspects of the process are:
1. Start with a working program and make small incremental changes. At any point, if there is an error, you should have a good idea where it is.
Use temporary variables to hold intermediate values so you can display and check them.
Once the program is working, you might want to remove some of the scaffolding or consolidate multiple statements into compound expressions, but only if it does not make the program difficult to read.
Try Yourself!
Use incremental development to write a function called hypotenuse that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as arguments. Record each stage of the development process as you go.
Composition
As you should expect by now, you can call one function from within another. This ability is called composition.
As an example, we’ll write a function that takes two points, the center of the circle and a point on the perimeter, and computes the area of the circle.
Assume that the center point is stored in the variables xc and yc, and the perimeter point is in xp and yp. The first step is to find the radius of the circle, which is the distance between the two points. We just wrote a function, distance, that does that:
Python
radius = distance(xc, yc, xp, yp)
The next step is to find the area of a circle with that radius; we just wrote that, too:
Python
result = area(radius)
Encapsulating these steps in a function, we get:
Python
def circle_area(xc, yc, xp, yp):
radius = distance(xc, yc, xp, yp)
result = area(radius)
return result
The temporary variables radius and result are useful for development and debugging, but once the program is working, we can make it more concise by composing the function calls:
End of explanation
def is_divisible(x, y):
if x % y == 0:
return True
else:
return False
Explanation: Boolean Functions
Functions can return booleans, which is often convenient for hiding complicated tests inside functions. For example:
End of explanation
is_divisible(6,4)
is_divisible(6,3)
Explanation: It is common to give boolean functions names that sound like yes/no questions; is_divisible returns either True or False to indicate whether x is divisible by y.
End of explanation
def is_divisible(x, y):
return x % y == 0
Explanation: The result of the == operator is a boolean, so we can write the function more concisely by returning it directly:
End of explanation
x = 9
y = 14
if is_divisible(x, y):
print('x is divisible by y')
else:
print('x is not divisible by y')
Explanation: Boolean functions are often used in conditional statements:
End of explanation
def factorial(n):
if n == 0:
return 1
else:
recurse = factorial(n-1)
result = n * recurse
return result
factorial(6)
Explanation: Try Yourself!
Write a function is_between(x, y, z) that returns True if x <= y <= z or False
otherwise.
More Recursion
To give you an idea of what you can do with the tools you have learned so far, we’ll evaluate a few recursively defined mathematical functions. A recursive definition is similar to a circular definition, in the sense that the definition contains a reference to the thing being defined. A truly circular definition is not very useful:
vorpal: An adjective used to describe something that is vorpal
If you saw that definition in the dictionary, you might be annoyed. On the other hand, if you looked up the definition of the factorial function, denoted with the symbol !, you might get something like this:
$$
0! = 1
$$
$$
n! = n(n-1)!
$$
This definition says that the factorial of 0 is 1, and the factorial of any other value, n, is n
multiplied by the factorial of n-1.
So 3! is 3 times 2!, which is 2 times 1!, which is 1 times 0!. Putting it all together, 3! equals 3
times 2 times 1 times 1, which is 6.
If you can write a recursive definition of something, you can usually write a Python program to evaluate it. The first step is to decide what the parameters should be. In this case it should be clear that factorial takes an integer:
Python
def factorial(n):
If the argument happens to be 0, all we have to do is return 1:
Python
def factorial(n):
if n == 0:
return 1
Otherwise, and this is the interesting part, we have to make a recursive call to find the factorial of n-1 and then multiply it by n:
Python
def factorial(n):
if n == 0:
return 1
else:
recurse = factorial(n-1)
result = n * recurse
return result
End of explanation
def fibonacci (n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
fibonacci(12)
Explanation: Let's define fibonacci, a recursively defined mathematical function and one of the most common introductory examples of recursion:
End of explanation
factorial(1.5)
Explanation: If you try to follow the flow of execution here, even for fairly small values of n, your head explodes. But according to the leap of faith, if you assume that the two recursive calls work correctly, then it is clear that you get the right result by adding them together.
Checking Types
What happens if we call factorial and give it 1.5 as an argument?
End of explanation
def factorial (n):
if not isinstance(n, int):
print('Factorial is only defined for integers.')
return None
elif n < 0:
print('Factorial is not defined for negative integers.')
return None
elif n == 0:
return 1
else:
return n * factorial(n-1)
Explanation: It looks like an infinite recursion. But how can that be? There is a base case—when n == 0. But if n is not an integer, we can miss the base case and recurse forever.
In the first recursive call, the value of n is 0.5. In the next, it is -0.5. From there, it gets smaller (more negative), but it will never be 0.
We have two choices. We can try to generalize the factorial function to work with floating-point numbers, or we can make factorial check the type of its argument. The first option is called the gamma function and it’s a little beyond the scope of this book. So we’ll go for the second.
We can use the built-in function isinstance to verify the type of the argument. While we’re at it, we can also make sure the argument is positive:
End of explanation
factorial("fred")
factorial(-2)
Explanation: The first base case handles nonintegers; the second catches negative integers. In both cases, the program prints an error message and returns None to indicate that something went wrong:
End of explanation |
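For reference, one possible set of solutions to the "Try Yourself" exercises above (sketches only; the book leaves these to the reader):
def compare(x, y):
    if x > y:
        return 1
    elif x == y:
        return 0
    else:
        return -1

def hypotenuse(a, b):
    # Length of the hypotenuse of a right triangle with legs a and b.
    return math.sqrt(a**2 + b**2)

def is_between(x, y, z):
    return x <= y <= z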
11,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Process U-Wind
Step1: 2. Read u-wind data and pick variables
2.1 Use print to check variable information
Actually, you can also use ncdump -h infile.nc to check the same information
Step2: 2.2 Read data
We have to call set_auto_mask(False) so that the automatic scaling and offsetting behave as expected; otherwise it may cause problems.
Step3: 3. Calculate Mean in time
Step4: 4. Calculate zonal mean
Step5: 5. Visualize zonal mean
5.1 Extract first 10 levels from 1000 to 200 hPa
Step6: 5.2 Visualize
Step7: 6. Interpolate zonal mean from 10 to 41 levels
6.1 Make new grids
Create new levels between 1000 and 200
Make the latitude resolution 1.0 degree instead of 2.5
It is worth making sure that level and latitude increase monotonically.
Step8: 6.2 Begin to interpolate u_10y_zm_10 for new grids
Step9: 6.3 Visualize the interpolated zonal mean
It should look better than the original data.
% matplotlib inline
from pylab import *
import numpy as np
from scipy.interpolate import interp2d
from netCDF4 import Dataset as netcdf # netcdf4-python module
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 6
Explanation: Process U-Wind: Zonal Mean and Interpolation
This notebook continues to work with u-wind data. We will finish the following tasks:
* calculate the zonal mean
* interpolate the zonal mean along the latitude and level axes
Data
The wind data can be downloaded from https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.html
This u-wind dataset is 4D, including [months|levels|lat|lon]. The pressure levels are in hPa.
Moreover, the wind data are stored with a scale factor and offset; when using them, we have to restore the original values.
In addition, we will use the interpolation functions from SciPy, another famous library; here scipy.interpolate.interp2d is used. See more at https://docs.scipy.org/doc/scipy/reference/interpolate.html.
1. Load basic libs
End of explanation
ncset = netcdf(r'data/uwnd3.mon.mean.nc')
#print(ncset)
Explanation: 2. Read u-wind data and pick variables
2.1 Use print to check variable information
Actually, you can also use ncdump -h infile.nc to check the same information
End of explanation
ncset.set_auto_mask(False)
lon = ncset['lon'][:]
lat = ncset['lat'][:]
lev = ncset['level'][:]
u = ncset['uwnd'][504:624,:] # for the period 1990-1999.
print(u.shape)
print(lev)
Explanation: 2.2 Read data
We have to call set_auto_mask(False) so that the automatic scaling and offsetting behave as expected; otherwise it may cause problems.
End of explanation
u_10y = np.mean(u, axis=0) # calculate mean for all years and months
u_10y.shape
Explanation: 3. Calculate Mean in time
End of explanation
u_10y_zm = np.mean(u_10y, axis=2)
u_10y_zm.shape
Explanation: 4. Calculate zonal mean
End of explanation
lev_10 = lev[0:10]
u_10y_zm_10 = u_10y_zm[0:10,:]
Explanation: 5. Visualize zonal mean
5.1 Extract first 10 levels from 1000 to 200 hPa
End of explanation
#minu = floor(np.min(u_10y_zm_10))
#maxu = ceil(np.max(u_10y_zm_10))
[lats, levs] = meshgrid(lat, lev_10)
fig, ax = plt.subplots()
im = ax.pcolormesh(lats, levs, u_10y_zm_10, cmap='jet', vmin=-6., vmax=30.)
cf = ax.contour(lats, levs, u_10y_zm_10, 25, colors='b', vmin=-6., vmax=30.)
# Label levels with specially formatted floats
if plt.rcParams["text.usetex"]:
    fmt = r'%r \%%'
else:
    fmt = '%r'
ax.clabel(cf, inline=True, fmt=fmt, fontsize=10)
ax.set_title('U-Wind Zonal Mean between 1990-1999 [m/s]', fontsize=16)
ax.set_xlabel('Latitude [$^o$]')
ax.set_ylabel('Pressure Level [hPa]')
# set the limits of the plot to the limits of the data
ax.axis([lats.min(),lats.max(), levs.min(), levs.max()])
fig.colorbar(im)
fig.tight_layout()
Explanation: 5.2 Visualize
End of explanation
lev_new = np.linspace(200,1000, num=41)
lat_new = np.linspace(-90, 90, num=181)
Explanation: 6. Interpolate zonal mean from 10 to 41 levels
6.1 Make new grids
Create new levels between 1000 and 200
Make the latitude resolution 1.0 degree instead of 2.5
It is worth making sure that level and latitude increase monotonically.
End of explanation
func = interp2d(lat, lev_10, u_10y_zm_10, kind='cubic')
# apply to new level and latitude
unew = func(lat_new, lev_new)
Explanation: 6.2 Begin to interpolate u_10y_zm_10 for new grids
End of explanation
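Note that interp2d is deprecated in recent SciPy releases. A sketch of an equivalent interpolation with RegularGridInterpolator (an alternative approach, assuming the same arrays as above and that the NCEP level/latitude axes are descending, so both are flipped to be ascending first; unew_rgi is a hypothetical name):
from scipy.interpolate import RegularGridInterpolator
lev_asc = lev_10[::-1]                 # ascending pressure levels
lat_asc = lat[::-1]                    # ascending latitudes
u_asc = u_10y_zm_10[::-1, ::-1]        # reorder values to match the flipped axes
rgi = RegularGridInterpolator((lev_asc, lat_asc), u_asc, method='linear')
levg, latg = np.meshgrid(lev_new, lat_new, indexing='ij')
unew_rgi = rgi(np.stack([levg, latg], axis=-1))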
#minu = floor(np.min(unew))
#maxu = ceil(np.max(unew))
[lats, levs] = np.meshgrid(lat_new, lev_new)
fig, ax = plt.subplots()
im = ax.pcolormesh(lats, levs, unew, cmap='jet', vmin=-6., vmax=30.)
cf = ax.contour(lats, levs, unew, 25, colors='b', vmin=-6., vmax=30.)
# Label levels with specially formatted floats
if plt.rcParams["text.usetex"]:
    fmt = r'%r \%%'
else:
    fmt = '%r'
ax.clabel(cf, inline=True, fmt=fmt, fontsize=10)
ax.set_title('Interpolated U-Wind Zonal Mean between 1990-1999 [m/s]', fontsize=16)
ax.set_xlabel('Latitude [$^o$]')
ax.set_ylabel('Pressure Level [hPa]')
# set the limits of the plot to the limits of the data
ax.axis([lats.min(),lats.max(), levs.min(), levs.max()])
fig.colorbar(im)
fig.tight_layout()
Explanation: 6.3 Visualize the interpolated zonal mean
It should look better than the original data.
End of explanation |
11,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with data files
Reading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.
Import the csv module
We're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.
Opening a file to read the contents
We're going to use something called a with statement to open a file and read the contents. The open() function takes at least two arguments
Step1: Simple filtering
If you wanted to filter your data, you could use an if statement inside your with block.
Step2: Exercise
Read in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint
Step3: DictReader
Step4: Writing to CSV files
You can also use the csv module to create csv files -- same idea, you just need to change the mode to 'w'. As with reading, there's a list-based writing method and a dictionary-based method.
Step5: Using DictWriter to write data
Similar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.
Step6: You can open multiple files for reading/writing
Sometimes you want to open multiple files at the same time. One thing you might want to do | Python Code:
# open the MLB data file `as` mlb
# create a reader object
# loop over the rows in the file
# assign variables to each element in the row (shortcut!)
# print the row, which is a list
Explanation: Working with data files
Reading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.
Import the csv module
We're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.
Opening a file to read the contents
We're going to use something called a with statement to open a file and read the contents. The open() function takes at least two arguments: The path to the file you're opening and what "mode" you're opening it in.
To start with, we're going to use the 'r' mode to read the data. We'll use the default arguments for delimiter -- comma -- and we don't need to specify a quote character.
Important: If you open a data file in w (write) mode, anything that's already in the file will be erased.
The file we're using -- MLB roster data from 2017 -- lives at data/mlb.csv.
Once we have the file open, we're going to use some functionality from the csv module to iterate over the lines of data and print each one.
Specifically, we're going to use the csv.reader method, which returns a list of lines in the data file. Each line, in turn, is a list of the "cells" of data in that line.
Then we're going to loop over the lines of data and print each line. We can also use bracket notation to retrieve elements from inside each line of data.
End of explanation
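One way the skeleton above could be filled in (a sketch; the column order of data/mlb.csv is an assumption taken from the comments, not something confirmed by the file itself):
import csv

with open('data/mlb.csv', 'r') as mlb:
    reader = csv.reader(mlb)
    for row in reader:
        name, team, position, salary = row   # column order/count is assumed
        print(row)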
# open the MLB data file `as` mlb
# create a reader object
# move past the header row
# loop over the rows in the file
# assign variables to each element in the row (shortcut!)
# print the line of data ~only~ if the player is on the Twins
# print the row, which is a list
Explanation: Simple filtering
If you wanted to filter your data, you could use an if statement inside your with block.
End of explanation
# open the MLB data file `as` mlb
# create a reader object
# move past the header row
# loop over the rows in the file
# assign variables to each element in the row (shortcut!)
# print the line of data ~only~ if the player is on the Twins
# print the row, which is a list
Explanation: Exercise
Read in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint: Use type coercion!)
End of explanation
# open the MLB data file `as` mlb
# create a reader object
# loop over the rows in the file
# print just the player's name (the column header is "NAME")
Explanation: DictReader: Another way to read CSV files
Sometimes it's more convenient to work with data files as a list of dictionaries instead of a list of lists. That way, you don't have to remember the position of each "column" of data -- you can just reference the column name. To do it, we'll use a csv.DictReader object instead of a csv.reader object. Otherwise the code is much the same.
End of explanation
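A sketch of the DictReader version described above (again assuming the data/mlb.csv path, with the NAME header taken from the skeleton comment):
import csv

with open('data/mlb.csv', 'r') as mlb:
    reader = csv.DictReader(mlb)
    for row in reader:
        print(row['NAME'])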
# define the column names
# let's make a few rows of data to write
# open an output file in write mode
# create a writer object
# write the header row
# loop over the data and write to file
Explanation: Writing to CSV files
You can also use the csv module to create csv files -- same idea, you just need to change the mode to 'w'. As with reading, there's a list-based writing method and a dictionary-based method.
End of explanation
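A sketch of the list-based writing method just described (the output file name and the sample rows are made up for illustration):
import csv

columns = ['name', 'team']
data = [['Byron Buxton', 'MIN'], ['Mike Trout', 'LAA']]

with open('write-test.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(columns)       # header row
    for row in data:
        writer.writerow(row)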
# define the column names
# let's make a few rows of data to write
# open an output file in write mode
# create a writer object -- pass the list of column names to the `fieldnames` keyword argument
# use the writeheader method to write the header row
# loop over the data and write to file
Explanation: Using DictWriter to write data
Similar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.
End of explanation
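A sketch of the DictWriter version (again with made-up field names and rows; note that each dictionary's keys must match fieldnames exactly):
import csv

fieldnames = ['name', 'team']
data = [{'name': 'Byron Buxton', 'team': 'MIN'}, {'name': 'Mike Trout', 'team': 'LAA'}]

with open('write-test-dict.csv', 'w', newline='') as outfile:
    writer = csv.DictWriter(outfile, fieldnames=fieldnames)
    writer.writeheader()
    for row in data:
        writer.writerow(row)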
# open the MLB data file `as` mlb
# also, open `mlb-copy.csv` to write to
# create a reader object
# create a writer object
# we're going to use the `fieldnames` attribute of the DictReader object
# as our output headers, as well
# b/c we're basically just making a copy
# write header row
# loop over the rows in the file
# what type of object is `row`?
# how would we find out?
# write row to output file
Explanation: You can open multiple files for reading/writing
Sometimes you want to open multiple files at the same time. One thing you might want to do: Opening a file of raw data in read mode, clean each row in a loop and write out the clean data to a new file.
You can open multiple files in the same with block -- just separate your open() functions with a comma.
For this example, we're not going to do any cleaning -- we're just going to copy the contents of one file to another.
End of explanation |
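A sketch of the copy described above, reading data/mlb.csv and writing mlb-copy.csv in one with block (both paths taken from the comments in the skeleton):
import csv

with open('data/mlb.csv', 'r') as infile, open('mlb-copy.csv', 'w', newline='') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:          # each row is a dict keyed by the header names
        writer.writerow(row)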
11,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
# Originally += 1, turns out the existence of a word in the review is sufficient,
# counting the number of times it appears just introduces lots of noise.
word_vector[idx] = 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :35]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
    # This resets all parameters and variables, leave this here.
    # Reset before entering the device scope so the new graph picks up the GPU placement.
    tf.reset_default_graph()
    with tf.device("/gpu:0"):
        net = tflearn.input_data([None, 10000])
        net = tflearn.fully_connected(net, 200, activation='ReLU')
        net = tflearn.fully_connected(net, 25, activation='ReLU')
        net = tflearn.fully_connected(net, 2, activation='softmax')
        net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
        model = tflearn.DNN(net)
    return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, we use categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
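For example, to continue training for a few more epochs from where the model left off (a sketch — the epoch count here is arbitrary):
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)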
End of explanation
# Column 0 of the network's softmax output is the probability of class 0 (negative),
# and testY[:, 0] is 1 for negative reviews, so comparing the two gives the test accuracy.
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
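As a cross-check, TFLearn models also expose an evaluate method, which should report roughly the same accuracy (a sketch — it uses the metric attached to the regression layer, accuracy by default):
print(model.evaluate(testX, testY))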
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Mediocre film, but I really enjoyed the laughs and comedy, as low-brow as they were."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
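For instance (just an illustration — substitute any text you like):
sentence = "I can't remember the last time a film kept me this engaged from start to finish."
test_sentence(sentence)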
End of explanation |