Unnamed: 0 (int64, 0 - 16k) | text_prompt (stringlengths 110 - 62.1k) | code_prompt (stringlengths 37 - 152k) |
---|---|---|
10,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PathMovers
This notebook is an introduction to handling PathMover and MoveChange instances. It mostly covers the following questions:
1. How to check if a certain mover was part of a change.
2. What are the possible changes a mover can generate.
3. ...
Load OPENPATHSAMPLING
Step1: Let's open a file that contains a mover and lots of changes
Step2: A Simulator creates simulation steps. So we load a single step.
Step3: Each step possesses a movechange, which we will get
Step4: And the mover that was used to generate this change
Step5: Let's first check if the pmc was really created by the pathmover
Step6: This should be obvious, but under the hood there is some more fancy stuff happening. The in keyword with pathmovers and changes actually works on trees of movers. These trees are used to represent a specific order of movers being called, and these trees are unique for changes. This means there is a way to label each possible pmc that can be generated by a mover. Here, 'possible' refers only to the choices available in the actual movers.
This is due to special PathMovers that can pick one or more movers among a set of possibilities. The simplest of these is the OneWayShooter, which chooses with equal probability between forward and backward shooting. So a OneWayShooter has two possible outcomes. Let's look at these
Step7: The property enum will enumerate all of the (potentially VERY many) possible changes
Step8: which is two in our case. Another example
Step9: Now we have 4 choices as expected: 2 for the RandomChoice * 2 for the OneWayShooter. Although effectively there are only 2 distinct ones. Why is that? The problem in ow_mover_2 is that we defined two separate instances of the OneWayShootingMover, and in general different instances are considered different. In this case it might even be possible to check for equality, but we decided to leave this to the user. If you create two instances, we assume there is a reason; maybe you just want to name them differently (although it makes no sense for the Monte Carlo scheme).
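As a plain-Python illustration of this instance-identity point (this sketch is not OPS-specific and the class name is invented):
class Dummy(object):
    pass

a = Dummy()
b = Dummy()
print(a == b)  # False: two separate instances compare as different by default
print(a == a)  # True: reusing the same instance gives one and the same object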
The next example uses the same mover twice
Step10: And we get only two distinct movers, as we wanted.
Let's look at the real mover
Step11: This one has 31. Let's see if we can reconstruct that number. The pathmover first chooses between 4 different move types.
Shooting, with one shooter for each of the 6 ensembles and two directions (forward and backward) are 6 * 2 = 12 in total.
Reversal moves for each ensemble = 6
ReplicaExchange moves between all neighboring ensembles [1,2], [2,3], ..., [5,6] = 5
And a Minus Move that consists of 2 choices for picking the first or final inner trajectory from the minus ensemble and 2 choices for extending forward or backward. Here it is important that we use a ConditionalSequentialMover that might call only some of its inner sequence. To be precise, it could call only the subtrajectoryselection (2 choices), the selection and the replicaexchange (also 2 choices), or all three movers (4 choices). This will give a total of 8 possibilities for the minus move.
In total we have 12 + 6 + 5 + 8 = 31 as enum predicted.
Note that this is only the maximal possible number of moves and not the number of really accessible moves. It could be that in a conditional mover some moves are always run in sequence. E.g., in our case the subtrajectoryselection should always work and hence the first 2 choices of the minus mover will never happen. It will always try a swap (which can fail if by chance the trajectory in the innermost interface crosses away from the state). So the effective number of choices will be 29 and not 31.
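As a quick arithmetic check of this bookkeeping (plain Python, no OPS calls needed):
n_shooting = 6 * 2      # 6 ensembles x {forward, backward}
n_reversal = 6          # one reversal move per ensemble
n_repex = 5             # neighbouring pairs [1,2] ... [5,6]
n_minus = 2 + 2 + 4     # partial and full sequences of the conditional minus move
print(n_shooting + n_reversal + n_repex + n_minus)        # 31, as enum predicts
print(n_shooting + n_reversal + n_repex + (n_minus - 2))  # 29 effectively accessible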
Let's do some more testing
First let's check if the keys we get for all changes in the storage are actually contained in the mover, or, put more simply, whether the mover could have generated these changes. This should be true for all changes that originate from our mover.
Step12: What happened here? Why are only some changes in the storage made by our mover? Shouldn't all of them (except the 2 in the beginning) be the result of our pathmover? Yes and no. They are, but we are checking if the total change was made by a mover, and we are also loading all subchanges of those changes.
Step13: We will now cache all changes to see if the next steps are really fast or not.
Step14: Now
Step15: While this works, it would be better to rely on the steps in the simulation and not on the changes themselves. We might store some other changes for whatever reason, but we only want the ones that are associated with an MC step. The steps in the storage do exactly that and point to the changes that are used to generate the next sampleset, which is what we want.
Step16: We exclude the first step since it is the initialization which is not generated by our pathmover.
Let's now check which of the 31 possible steps were called, and how often.
Step17: We see that especially minus moves are underrepresented. This is due to the standard weighting of the minus move, which runs minus moves much less frequently than other moves.
Let's just do that again and pick random changes among all the possibilities. This is done only from the possibilities, without regard for acceptance etc. We only assume that all possible choices for a single mover are equally likely.
Step18: We realize that a specific shooter is less likely to be called than other movers
Not sure if this is helpful. We could have additional weighting factors for the RandomChoice, but each mover would have to know about it. The problem is that in most movers these weighting probabilities depend on the context and hence make no sense here!
One solution is to let a mover be able to give a formal expression for each possible choice, since each mover will only depend on
1. results of accepted/rejected submoves, and ultimately whether certain samples are accepted, which in turn depends on .valid and .proposal_probability
2. some random numbers (choices)
So the SampleGeneratingMovers will return something like [acc(samp[]), acc(samp[])]
while other movers will return something that depends on the submovers; it could just be a list of products.
Each mover has an expression for it being accepted
Each mover has an expression for picking a specific choice
Finally .random() will use the product of all probabilities and a specific value for accepting
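A minimal sketch of how such probabilities could compose (the helper names below are invented for illustration and are not part of OPS):
def choice_probability(weights, picked):
    # probability that a RandomChoiceMover-like node picks a particular submover
    return float(weights[picked]) / sum(weights)

def change_probability(choice_probs):
    # product of the per-node choice probabilities along the realized branch
    p = 1.0
    for cp in choice_probs:
        p *= cp
    return p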
Step19: Get the (unique) location of the RandomChoiceMover. You can search for Mover classes, Mover instances, or by the .name property of a mover, which is a string.
Step20: The location object is effectively a tree of mover instances described by nested tuples. For convenience it is wrapped to make searching easier and to format the output.
Step21: In most cases you can use python tuples instead of TupleTree. The structure of a tree of tuples looks as follows
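For illustration, a root node with two children is written as nested tuples of the form (head, (child1, ), (child2, ), ...); in OPS the heads are Mover instances, but plain strings stand in for them in this sketch:
tree = (
    'RootMover',              # head: the node content (normally a Mover instance)
    ('ForwardShootMover', ),  # first child, itself a one-node tree
    ('BackwardShootMover', ), # second child
)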
Step22: Instead of checking for a pmc directly we can also check for the tuple tree representation
Step23: Now get the mover at the loc_rc location (which is of course a RandomChoiceMover).
Step24: These are the weights by mover
Step25: So a shooting move is about 30x more likely than a minus move.
Finally do some more checks
Step26: Note that certain trees of movers can be arranged differently but result in the same possible steps, like Sequential([Forward, Sequential([Forward, Forward])]) and Sequential([Forward, Forward, Forward]). We can regard this as being associative (placing arbitrary brackets), but we will not check for these types of equality. We assume that you arrange steps in a certain way for a reason and that your grouping of sequences will reflect a certain logical idea. On the other hand, your locators / keys do depend on that choice!
What else can we learn or ask for?
We can check if a particular mover was run. Let's pick the very last MinusExtensionDirectionChooser. To check for it, we can either just check for the instance in a pmc
Step27: The expression locator in pmc checks for the appearance of a specific mover in a way that depends on its location, while using mover_instance in pmc checks for the appearance of that mover instance independent of the location. If a particular mover_instance appears only once, then both methods are equivalent.
Step28: In this case the MinusExtensionDirectionChooser was not called in this particular pmc.
Remember the following things about moves and changes
movers and changes represent a tree-like structure of movers.
a change is a concrete realization of moves (and hence movers), while a mover itself has a certain degree of choice.
Some nodes in a mover can have different outcomes, like a RandomChoiceMover will pick one of several choices. The change it generates will have only the chosen one, while the RandomChoiceMover itself has all possibilities.
A concrete tree-like structure (without choice) of movers can be represented by a tree of tuples. Reading the tuple-tree from left to right, the opening bracket '(' means go down a level (to a new child), while the closing bracket ')' means travel up a level to the parent.
The smallest unique tupletree for a pmc can be accessed using .unique
Step29: will be interpreted as
Step30: Some speed tests | Python Code:
import openpathsampling as p
Explanation: PathMovers
This notebook is an introduction to handling PathMover and MoveChange instances. It mostly covers the following questions:
1. How to check if a certain mover was part of a change.
2. What are the possible changes a mover can generate.
3. ...
Load OPENPATHSAMPLING
End of explanation
st = p.storage.Storage('_toy_retis.nc', mode='r')
Explanation: Let's open a file that contains a mover and lots of changes
End of explanation
mc = st.steps[3]
print mc
Explanation: A Simulator creates simulation steps. So we load a single step.
End of explanation
pmc = mc.change
print pmc
Explanation: Each step possesses a movechange, which we will get
End of explanation
pm = pmc.mover
print pm.treeprint()
Explanation: And the mover that was used to generate this change
End of explanation
pmc in pm
Explanation: Let's first check if the pmc was really created by the pathmover
End of explanation
ow_mover = p.OneWayShootingMover([], []) # we use dummy arguments since we are not going to use them
Explanation: This should be obvious, but under the hood there is some more fancy stuff happening. The in keyword with pathmovers and changes actually works on trees of movers. These trees are used to represent a specific order of movers being called, and these trees are unique for changes. This means there is a way to label each possible pmc that can be generated by a mover. Here, 'possible' refers only to the choices available in the actual movers.
This is due to special PathMovers that can pick one or more movers among a set of possibilities. The simplest of these is the OneWayShooter, which chooses with equal probability between forward and backward shooting. So a OneWayShooter has two possible outcomes. Let's look at these
End of explanation
list(ow_mover.enum)
Explanation: The property enum will enumerate all of the (potentially VERY many) possible changes
End of explanation
ow_mover_2 = p.RandomChoiceMover([
p.OneWayShootingMover([], []),
p.OneWayShootingMover([], [])
])
list(ow_mover_2.enum)
Explanation: which is two in our case. Another example
End of explanation
ow_mover_3 = p.RandomChoiceMover([
ow_mover,
ow_mover
])
list(ow_mover_3.enum)
Explanation: Now we have 4 choices as expected: 2 for the RandomChoice * 2 for the OneWayShooter. Although effectively there are only 2 distinct ones. Why is that? The problem in ow_mover_2 is that we defined two separate instances of the OneWayShootingMover, and in general different instances are considered different. In this case it might even be possible to check for equality, but we decided to leave this to the user. If you create two instances, we assume there is a reason; maybe you just want to name them differently (although it makes no sense for the Monte Carlo scheme).
The next example uses the same mover twice
End of explanation
all_changes = list(pm.enum)
print len(all_changes)
Explanation: And we get only two distinct movers, as we wanted.
Let's look at the real mover
End of explanation
print [pc in pm for pc in st.movechanges[0:20]]
Explanation: This one has 31. Let's see if we can reconstruct that number. The pathmover first chooses between 4 different move types.
Shooting, with one shooter for each of the 6 ensembles and two directions (forward and backward) are 6 * 2 = 12 in total.
Reversal moves for each ensemble = 6
ReplicaExchange moves between all neighboring ensembles [1,2], [2,3], ..., [5,6] = 5
And a Minus Move that consists of 2 choices for picking the first or final inner trajectory from the minus ensemble and 2 choices for extending forward or backward. Here it is important that we use a ConditionalSequentialMover that might call only some of its inner sequence. To be precise, it could call only the subtrajectoryselection (2 choices), the selection and the replicaexchange (also 2 choices), or all three movers (4 choices). This will give a total of 8 possibilities for the minus move.
In total we have 12 + 6 + 5 + 8 = 31 as enum predicted.
Note that this is only the maximal possible number of moves and not the number of really accessible moves. It could be that in a conditional mover some moves are always run in sequence. E.g., in our case the subtrajectoryselection should always work and hence the first 2 choices of the minus mover will never happen. It will always try a swap (which can fail if by chance the trajectory in the innermost interface crosses away from the state). So the effective number of choices will be 29 and not 31.
Let's do some more testing
First let's check if the keys we get for all changes in the storage are actually contained in the mover, or, put more simply, whether the mover could have generated these changes. This should be true for all changes that originate from our mover.
End of explanation
print st.movechanges[2]
print st.movechanges[2] in pm
print
print st.movechanges[5]
print st.movechanges[5] in pm
Explanation: What happened here? Why are only some changes in the storage made by our mover? Shouldn't all of them (except the 2 in the beginning) be the result of our pathmover? Yes and no. They are, but we are checking if the total change was made by a mover, and we are also loading all subchanges of those changes.
End of explanation
_ = list(st.movechanges)
_ = list(st.steps)
Explanation: We will now cache all changes to see if the next steps are really fast or not.
End of explanation
real_changes = filter(lambda x : x in pm, st.movechanges)
print len(real_changes), 'of', len(st.movechanges)
Explanation: Now: How do we get all the changes that are really important? One way is to scan all changes and use only the ones from the mover
End of explanation
step_changes = [step.change for step in st.steps[1:]]
print len(step_changes)
Explanation: While this works, it would be better to rely on the steps in the simulation and not on the changes themselves. We might store some other changes for whatever reason, but we only want the ones that are associated with an MC step. The steps in the storage do exactly that and point to the changes that are used to generate the next sampleset, which is what we want.
End of explanation
import collections
counter = collections.defaultdict(lambda: 0)
for ch in step_changes:
counter[ch.unique] += 1
s = '%d of %d different changes run' % (len(counter), len(list(pm.enum)))
print s, '\n', '-' * len(s), '\n'
for y in sorted(counter.items(), key=lambda x : -x[1]):
print
print y[1], 'x'
print y[0].treeprint()
Explanation: We exclude the first step since it is the initialization which is not generated by our pathmover.
Let's now check which of the 31 possible steps were called, and how often.
End of explanation
pmc_list = [pm.random() for x in xrange(10000)]
counter2 = collections.defaultdict(lambda: 0)
for ch in pmc_list:
counter2[ch] += 1
s = '%d of %d different changes run' % (len(counter2), len(list(pm.enum)))
print s, '\n', '-' * len(s), '\n'
for y in sorted(counter2.items(), key=lambda x : -x[1]):
print (100.0 * y[1]) / len(pmc_list), '%', repr(y[0])
Explanation: We see that especially minus moves are underrepresented. This is due to the standard weighting of the minus move, which runs minus moves much less frequently than other moves.
Let's just do that again and pick random changes among all the possibilities. This is done only from the possibilities, without regard for acceptance etc. We only assume that all possible choices for a single mover are equally likely.
End of explanation
print pm.treeprint()
Explanation: We realize that a specific shooter is less likely to be called than other movers
Not sure if this is helpful. We could have additional weighting factors for the RandomChoice, but each mover would have to know about it. The problem is that in most movers these weighting probabilities depend on the context and hence make no sense here!
One solution is to let a mover be able to give a formal expression for each possible choice, since each mover will only depend on
1. results of accepted/rejected submoves, and ultimately whether certain samples are accepted, which in turn depends on .valid and .proposal_probability
2. some random numbers (choices)
So the SampleGeneratingMovers will return something like [acc(samp[]), acc(samp[])]
while other movers will return something that depends on the submovers; it could just be a list of products.
Each mover has an expression for it being accepted
Each mover has an expression for picking a specific choice
Finally .random() will use the product of all probabilities and a specific value for accepting
End of explanation
loc_rc = pm.locate('RootMover')
print loc_rc
Explanation: Get the (unique) location of the RandomChoiceMover. You can search for Mover classes, Mover instances, or by the .name property of a mover, which is a string.
End of explanation
print type(loc_rc)
print isinstance(loc_rc, tuple)
print repr(loc_rc)
print str(loc_rc)
Explanation: The location object is effectively a tree of mover instances described by nested tuples. For convenience it is wrapped to make searching easier and to format the output.
End of explanation
print pmc
print repr(pmc.unique)
Explanation: In most cases you can use python tuples instead of TupleTree. The structure of a tree of tuples looks as follows:
Each node has the following structure
(head, (child1, ), (child2, ), ...)
where head is the node content (almost always a Mover instance) and each child is again a node. It is important to note that a tuple tree does not support choices of different children like a real pathmover can. Therefore we could generate a tuple tree of the content of a pathmover, but it will not reflect the possible choices in it. To get the associated tuple tree of a change (which has no choice in it) we can use .unique.
End of explanation
print pmc
print pmc in pm # check if pmc could have been generated by pm
print pmc.unique in pm # check if the tuple tree representation could have been generated by pm
Explanation: Instead of checking for a pmc directly we can also check for the tuple tree representation
End of explanation
rc = pm[loc_rc]
Explanation: Now get the mover at the loc_rc location (which is of course a RandomChoiceMover).
End of explanation
dict(zip(rc.movers, rc.weights))
Explanation: These are the weights by mover
End of explanation
print rc in pmc # check if the RandomChoiceMover was called in pmc
print loc_rc in pm # check if the pathmover has a RandomChoiceMover at that position in the tree
Explanation: So a shooting move is about 30x more likely than a minus move.
Finally do some more checks
End of explanation
loc_medc = pm.locate('MinusExtensionDirectionChooser')
medc = pm[loc_medc]
print medc
%%time
for pc in step_changes:
if medc in pc:
print repr(pc)
print pc.unique
print
Explanation: Note that certain trees of movers can be arranged differently but result in the same possible steps, like Sequential([Forward, Sequential([Forward, Forward])]) and Sequential([Forward, Forward, Forward]). We can regard this as being associative (placing arbitrary brackets), but we will not check for these types of equality. We assume that you arrange steps in a certain way for a reason and that your grouping of sequences will reflect a certain logical idea. On the other hand, your locators / keys do depend on that choice!
What else can we learn or ask for?
We can check if a particular mover was run. Let's pick the very last MinusExtensionDirectionChooser. To check for it, we can either just check for the instance in a pmc
End of explanation
print pmc
print loc_medc in pm
print medc in pm
print loc_medc in pmc
print medc in pmc
Explanation: The expression locator in pmc checks for the appearance of a specific mover in a way that depends on its location, while using mover_instance in pmc checks for the appearance of that mover instance independent of the location. If a particular mover_instance appears only once, then both methods are equivalent.
End of explanation
first_minus_change = filter(lambda x : p.MinusMover in x, step_changes)[0]
print first_minus_change.unique
Explanation: In this case the MinusExtensionDirectionChooser was not called in this particular pmc.
Remember the following things about moves and changes
movers and changes represent a tree-like structure of movers.
a change is a concrete realization of moves (and hence movers), while a mover itself has a certain degree of choice.
Some nodes in a mover can have different outcomes, like a RandomChoiceMover will pick one of several choices. The change it generates will have only the chosen one, while the RandomChoiceMover itself has all possibilities.
A concrete tree-like structure (without choice) of movers can be represented by a tree of tuples. Reading the tuple-tree from left to right, the opening bracket '(' means go down a level (to a new child), while the closing bracket ')' means travel up a level to the parent.
The smallest unique tupletree for a pmc can be accessed using .unique
End of explanation
print first_minus_change.unique
pm.map_tree(lambda x : len(x.name))
Explanation: will be interpreted as
End of explanation
%%timeit
pm.enum
%%timeit
pmc in pm
%%timeit
[p in pmc for p in pm.enum]
%%timeit
pm in pm
%%timeit
pmc.unique in pm
Explanation: Some speed tests
End of explanation |
10,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright © 2019 The TensorFlow Authors.
Step1: TensorFlow Data Validation
An Example of a Key TFX Library
This example colab notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TFX and TensorFlow
Note
Because of some of the updates to packages you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, you should rerun this cell.
Step2: Import packages
We import necessary packages, including standard TFX component classes.
Step3: Check the versions
Step4: Download example data
We download the sample dataset for use in our TFX pipeline. We're working with a variant of the Online News Popularity dataset, which summarizes a heterogeneous set of features about articles published by Mashable in a period of two years. The goal is to predict how popular the article will be on social networks. Specifically, in the original dataset the objective was to predict the number of times each article will be shared on social networks. In this variant, the goal is to predict the article's popularity percentile. For example, if the model predicts a score of 0.7, then it means it expects the article to be shared more than 70% of all articles.
Step5: Split the dataset into train, eval and serving
Let's take a peek at the data.
Step6: Now let's split the data into a training set, an eval set and a serving set
Step7: Now let's take a peek at the training set, the eval set and the serving set
Step8: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
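For reference, a rough sketch of using that PTransform directly is shown below; the DecodeTFExample helper and the output coder are assumptions that can differ between TFDV releases, so verify them against your installed version:
import apache_beam as beam
import tensorflow_data_validation as tfdv
from tensorflow_metadata.proto.v0 import statistics_pb2

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | 'ReadData' >> beam.io.ReadFromTFRecord(file_pattern='input_data.tfrecord')
        | 'DecodeData' >> tfdv.DecodeTFExample()  # assumed helper; name varies by version
        | 'GenerateStatistics' >> tfdv.GenerateStatistics()
        | 'WriteStats' >> beam.io.WriteToTFRecord(
            'train_stats.tfrecord', shard_name_template='',
            coder=beam.coders.ProtoCoder(
                statistics_pb2.DatasetFeatureStatisticsList)))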
Step9: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data
Step10: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
Step11: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that some features are significantly different for the training versus the evaluation datasets, in particular check the mean and median. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the n_hrefs feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
Step12: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point
Step13: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for data_channel in our evaluation data, that we didn't have in our training data (what a surprise!). This should be considered an anomaly, but what we decide to do about it depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point
Step14: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the n_shares_percentile feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
Step15: TFDV noticed that the n_shares_percentile column is missing in the serving set (as expected), and it also noticed that some features which should be floats are actually integers.
It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert integers to floats, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
Step16: Now we just have the n_shares_percentile feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
Step17: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
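For intuition, the L-infinity distance between two (normalized) categorical value distributions is simply the largest absolute difference between the two frequencies of any single value; a tiny sketch with made-up frequencies:
import numpy as np

p = np.array([0.60, 0.30, 0.10])   # value frequencies in span N
q = np.array([0.55, 0.30, 0.15])   # value frequencies in span N+1
linf_distance = np.max(np.abs(p - q))
print(linf_distance)               # 0.05; drift is flagged if this exceeds the threshold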
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when
Step18: No drift and no skew!
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright © 2019 The TensorFlow Authors.
End of explanation
!pip install -q -U \
tensorflow==2.0.0 \
tensorflow_data_validation
Explanation: TensorFlow Data Validation
An Example of a Key TFX Library
This example colab notebook illustrates how TensorFlow Data Validation (TFDV) can be used to investigate and visualize your dataset. That includes looking at descriptive statistics, inferring a schema, checking for and fixing anomalies, and checking for drift and skew in our dataset. It's important to understand your dataset's characteristics, including how it might change over time in your production pipeline. It's also important to look for anomalies in your data, and to compare your training, evaluation, and serving datasets to make sure that they're consistent.
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TFX and TensorFlow
Note
Because of some of the updates to packages you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, you should rerun this cell.
End of explanation
import os
import tempfile
import urllib
import tensorflow as tf
import tensorflow_data_validation as tfdv
Explanation: Import packages
We import necessary packages, including standard TFX component classes.
End of explanation
print('TensorFlow version: {}'.format(tf.__version__))
print('TensorFlow Data Validation version: {}'.format(tfdv.__version__))
Explanation: Check the versions
End of explanation
# Download the example data.
DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \
'online_news_popularity_for_course/online_news_popularity_for_course.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Download example data
We download the sample dataset for use in our TFX pipeline. We're working with a variant of the Online News Popularity dataset, which summarizes a heterogeneous set of features about articles published by Mashable in a period of two years. The goal is to predict how popular the article will be on social networks. Specifically, in the original dataset the objective was to predict the number of times each article will be shared on social networks. In this variant, the goal is to predict the article's popularity percentile. For example, if the model predicts a score of 0.7, then it means it expects the article to be shared more than 70% of all articles.
End of explanation
!head {_data_filepath}
Explanation: Split the dataset into train, eval and serving
Let's take a peek at the data.
End of explanation
_train_data_filepath = os.path.join(_data_root, "train.csv")
_eval_data_filepath = os.path.join(_data_root, "eval.csv")
_serving_data_filepath = os.path.join(_data_root, "serving.csv")
with open(_data_filepath) as data_file, \
open(_train_data_filepath, "w") as train_file, \
open(_eval_data_filepath, "w") as eval_file, \
open(_serving_data_filepath, "w") as serving_file:
lines = data_file.readlines()
train_file.write(lines[0])
eval_file.write(lines[0])
serving_file.write(lines[0].rsplit(",", 1)[0] + "\n")
for line in lines[1:]:
if line < "2014-11-01":
train_file.write(line)
elif line < "2014-12-01":
line = line.replace("2014-11-01,0,World,awkward-teen-dance",
"2014-11-01,0,Fun,awkward-teen-dance")
eval_file.write(line)
else:
serving_file.write(line.rsplit(",", 1)[0].replace(".0,", ",") + "\n")
Explanation: Now let's split the data into a training set, an eval set and a serving set:
* The training set will be used to train ML models.
* The eval set (also called the validation set or dev set) will be used to evaluate the models we train and choose the best one.
* The serving set should look exactly like production data so we can test our production validation rules. For this, we remove the labels.
We also modify one line in the eval set, replacing 'World' with 'Fun' in the data_channel feature, and we replace many floats with integers in the serving set: this will allow us to show how TFDV can detect anomalies.
End of explanation
!head {_train_data_filepath}
!head {_eval_data_filepath}
!head {_serving_data_filepath}
Explanation: Now let's take a peek at the training set, the eval set and the serving set:
End of explanation
train_stats = tfdv.generate_statistics_from_csv(
data_location=_train_data_filepath)
Explanation: Compute and visualize statistics
First we'll use tfdv.generate_statistics_from_csv to compute statistics for our training data. (ignore the snappy warnings)
TFDV can compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
Internally, TFDV uses Apache Beam's data-parallel processing framework to scale the computation of statistics over large datasets. For applications that wish to integrate deeper with TFDV (e.g., attach statistics generation at the end of a data-generation pipeline), the API also exposes a Beam PTransform for statistics generation.
End of explanation
tfdv.visualize_statistics(train_stats)
Explanation: Now let's use tfdv.visualize_statistics, which uses Facets to create a succinct visualization of our training data:
Notice that numeric features and catagorical features are visualized separately, and that charts are displayed showing the distributions for each feature.
Notice that features with missing or zero values display a percentage in red as a visual indicator that there may be issues with examples in those features. The percentage is the percentage of examples that have missing or zero values for that feature.
Notice that there are no examples with values for pickup_census_tract. This is an opportunity for dimensionality reduction!
Try clicking "expand" above the charts to change the display
Try hovering over bars in the charts to display bucket ranges and counts
Try switching between the log and linear scales, and notice how the log scale reveals much more detail about the payment_type categorical feature
Try selecting "quantiles" from the "Chart to show" menu, and hover over the markers to show the quantile percentages
End of explanation
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
Explanation: Infer a schema
Now let's use tfdv.infer_schema to create a schema for our data. A schema defines constraints for the data that are relevant for ML. Example constraints include the data type of each feature, whether it's numerical or categorical, or the frequency of its presence in the data. For categorical features the schema also defines the domain - the list of acceptable values. Since writing a schema can be a tedious task, especially for datasets with lots of features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics.
Getting the schema right is important because the rest of our production pipeline will be relying on the schema that TFDV generates to be correct. The schema also provides documentation for the data, and so is useful when different developers work on the same data. Let's use tfdv.display_schema to display the inferred schema so that we can review it.
End of explanation
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(
data_location=_eval_data_filepath)
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
Explanation: Check evaluation data for errors
So far we've only been looking at the training data. It's important that our evaluation data is consistent with our training data, including that it uses the same schema. It's also important that the evaluation data includes examples of roughly the same ranges of values for our numerical features as our training data, so that our coverage of the loss surface during evaluation is roughly the same as during training. The same is true for categorical features. Otherwise, we may have training issues that are not identified during evaluation, because we didn't evaluate part of our loss surface.
Notice that each feature now includes statistics for both the training and evaluation datasets.
Notice that the charts now have both the training and evaluation datasets overlaid, making it easy to compare them.
Notice that the charts now include a percentages view, which can be combined with log or the default linear scales.
Notice that some features are significantly different for the training versus the evaluation datasets, in particular check the mean and median. Will that cause problems?
Click expand on the Numeric Features chart, and select the log scale. Review the n_hrefs feature, and notice the difference in the max. Will evaluation miss parts of the loss surface?
End of explanation
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
Explanation: Check for evaluation anomalies
Does our evaluation dataset match the schema from our training dataset? This is especially important for categorical features, where we want to identify the range of acceptable values.
Key Point: What would happen if we tried to evaluate using data with categorical feature values that were not in our training dataset? What about numeric features that are outside the ranges in our training dataset?
End of explanation
# Relax the minimum fraction of values that must come
# from the domain for feature data_channel.
data_channel = tfdv.get_feature(schema, 'data_channel')
data_channel.distribution_constraints.min_domain_mass = 1.0
# Add new value to the domain of feature data_channel.
data_channel_domain = tfdv.get_domain(schema, 'data_channel')
data_channel_domain.value.append('Fun')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
Explanation: Fix evaluation anomalies in the schema
Oops! It looks like we have some new values for data_channel in our evaluation data, that we didn't have in our training data (what a surprise!). This should be considered an anomaly, but what we decide to do about it depends on our domain knowledge of the data. If an anomaly truly indicates a data error, then the underlying data should be fixed. Otherwise, we can simply update the schema to include the values in the eval dataset.
Key Point: How would our evaluation results be affected if we did not fix this problem?
Unless we change our evaluation dataset we can't fix everything, but we can fix things in the schema that we're comfortable accepting. That includes relaxing our view of what is and what is not an anomaly for particular features, as well as updating our schema to include missing values for categorical features. TFDV has enabled us to discover what we need to fix.
Let's make the fix now, and then review one more time.
End of explanation
serving_stats = tfdv.generate_statistics_from_csv(_serving_data_filepath)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: Hey, look at that! We verified that the training and evaluation data are now consistent! Thanks TFDV ;)
Schema Environments
We also split off a 'serving' dataset for this example, so we should check that too. By default all datasets in a pipeline should use the same schema, but there are often exceptions. For example, in supervised learning we need to include labels in our dataset, but when we serve the model for inference the labels will not be included. In some cases introducing slight schema variations is necessary.
Environments can be used to express such requirements. In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.
For example, in this dataset the n_shares_percentile feature is included as the label for training, but it's missing in the serving data. Without environment specified, it will show up as an anomaly.
End of explanation
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(_serving_data_filepath,
stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
Explanation: TFDV noticed that the n_shares_percentile column is missing in the serving set (as expected), and it also noticed that some features which should be floats are actually integers.
It's very easy to be unaware of problems like that until model performance suffers, sometimes catastrophically. It may or may not be a significant issue, but in any case this should be cause for further investigation.
In this case, we can safely convert integers to floats, so we want to tell TFDV to use our schema to infer the type. Let's do that now.
End of explanation
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that 'n_shares_percentile' feature is not in SERVING environment.
tfdv.get_feature(schema, 'n_shares_percentile').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(
serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
Explanation: Now we just have the n_shares_percentile feature (which is our label) showing up as an anomaly ('Column dropped'). Of course we don't expect to have labels in our serving data, so let's tell TFDV to ignore that.
End of explanation
# Add skew comparator for 'weekday' feature.
weekday = tfdv.get_feature(schema, 'weekday')
weekday.skew_comparator.infinity_norm.threshold = 0.01
# Add drift comparator for 'weekday' feature.
weekday.drift_comparator.infinity_norm.threshold = 0.001
skew_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
tfdv.display_anomalies(skew_anomalies)
Explanation: Check for drift and skew
In addition to checking whether a dataset conforms to the expectations set in the schema, TFDV also provides functionalities to detect drift and skew. TFDV performs this check by comparing the statistics of the different datasets based on the drift/skew comparators specified in the schema.
Drift
Drift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.
Skew
TFDV can detect three different kinds of skew in your data - schema skew, feature skew, and distribution skew.
Schema Skew
Schema skew occurs when the training and serving data do not conform to the same schema. Both training and serving data are expected to adhere to the same schema. Any expected deviations between the two (such as the label feature being only present in the training data but not in serving) should be specified through environments field in the schema.
Feature Skew
Feature skew occurs when the feature values that a model trains on are different from the feature values that it sees at serving time. For example, this can happen when:
A data source that provides some feature values is modified between training and serving time
There is different logic for generating features between training and serving. For example, if you apply some transformation only in one of the two code paths.
Distribution Skew
Distribution skew occurs when the distribution of the training dataset is significantly different from the distribution of the serving dataset. One of the key causes for distribution skew is using different code or different data sources to generate the training dataset. Another reason is a faulty sampling mechanism that chooses a non-representative subsample of the serving data to train on.
End of explanation
_output_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/online_news_simple')
from google.protobuf import text_format
tf.io.gfile.makedirs(_output_dir)
schema_file = os.path.join(_output_dir, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: No drift and no skew!
Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
End of explanation |
10,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2-Dimensional Frame Analysis - Version 04
This program performs an elastic analysis of 2-dimensional structural frames. It has the following features
Step1: Test Frame
Nodes
Step2: Supports
Step3: Members
Step4: Releases
Step5: Properties
If the SST module is loadable, member properties may be specified by giving steel shape designations
(such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and
$I_x$ directly (it only tries to lookup the properties if these two are not provided).
Step6: Node Loads
Step7: Member Loads
Step8: Load Combinations
Step9: Load Iterators
Step10: Accumulated Cell Data
Step11: Input Everything
Step12: Number the DOFs
Step14: Display Nodes
Step15: Display Members
Step16: Display loads | Python Code:
from __future__ import print_function
import salib as sl
sl.import_notebooks()
from Tables import Table
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from MemberLoads import makeMemberLoad
from collections import OrderedDict, defaultdict
import numpy as np
class Object(object):
pass
class Frame2D(object):
def __init__(self,dsname=None):
self.dsname = dsname
self.rawdata = Object()
self.nodes = OrderedDict()
self.members = OrderedDict()
self.nodeloads = LoadSet()
self.memberloads = LoadSet()
self.loadcombinations = LoadCombination()
#self.dofdesc = []
#self.nodeloads = defaultdict(list)
#self.membloads = defaultdict(list)
self.ndof = 0
self.nfree = 0
self.ncons = 0
self.R = None
self.D = None
self.PDF = None # P-Delta forces
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False,optional=False):
columns = getattr(self,'COLUMNS_'+tablename)
t = Table(tablename,columns=columns,optional=optional)
t.read(optional=optional)
reqdl= columns
reqd = set(reqdl)
prov = set(t.columns)
if reqd-prov:
raise Exception('Columns missing {} for table "{}". Required columns are: {}'\
.format(list(reqd-prov),tablename,reqdl))
if not extrasok:
if prov-reqd:
raise Exception('Extra columns {} for table "{}". Required columns are: {}'\
.format(list(prov-reqd),tablename,reqdl))
return t
Explanation: 2-Dimensional Frame Analysis - Version 04
This program performs an elastic analysis of 2-dimensional structural frames. It has the following features:
1. Input is provided by a set of CSV files (and cell-magics exist so you can specify the CSV data
in a notebook cell). See the example below for an, er, example.
1. Handles concentrated forces on nodes, and concentrated forces, concentrated moments, and linearly varying distributed loads applied transversely anywhere along the member (i.e., there is as yet no way to handle longitudinal
load components).
1. It handles fixed, pinned, roller supports and member end moment releases (internal pins). The former are
handled by assigning free or fixed global degrees of freedom, and the latter are handled by adjusting the
member stiffness matrix.
1. It has the ability to handle named sets of loads with factored combinations of these.
1. The DOF #'s are assigned by the program, with the fixed DOF #'s assigned after the non-fixed. The equilibrium
equation is then partitioned for solution. Among other advantages, this means that support settlement could be
easily added (there is no UI for that, yet).
1. A non-linear analysis can be performed using the P-Delta method (fake shears are computed at column ends due to the vertical load acting through horizontal displacement differences, and these shears are applied as extra loads
to the nodes). A small numerical sketch of this idea appears at the end of this list.
1. A full non-linear (2nd order) elastic analysis will soon be available by forming the equilibrium equations
on the deformed structure. This is very easy to add, but it hasn't been done yet. Shouldn't be too long.
1. There is very little to no documentation below, but that will improve, slowly.
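As promised in the P-Delta item above, here is a small numerical sketch of the fake-shear idea (all numbers are invented for illustration; this is not code from the program itself):
P = 500e3       # axial (gravity) load carried by one column, N (assumed)
d_top = 12.0    # lateral displacement of the top node, mm (assumed)
d_bottom = 2.0  # lateral displacement of the bottom node, mm (assumed)
h = 4000.0      # column height, mm
H_fake = P * (d_top - d_bottom) / h  # equivalent horizontal force pair, N
# H_fake (here 1250 N) is applied as extra nodal loads, positive at one end and
# negative at the other, and the analysis is repeated until these added shears
# stop changing.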
End of explanation
%%Table nodes
NODEID,X,Y,Z
A,0.,0.,5000.
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_nodes = ('NODEID','X','Y')
def install_nodes(self):
node_table = self.get_table('nodes')
for ix,r in node_table.data.iterrows():
if r.NODEID in self.nodes:
raise Exception('Multiply defined node: {}'.format(r.NODEID))
n = Node(r.NODEID,r.X,r.Y)
self.nodes[n.id] = n
self.rawdata.nodes = node_table
def get_node(self,id):
try:
return self.nodes[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f = Frame2D()
##test:
f.install_nodes()
##test:
f.nodes
##test:
f.get_node('C')
Explanation: Test Frame
Nodes
End of explanation
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
def isnan(x):
if x is None:
return True
try:
return np.isnan(x)
except TypeError:
return False
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_supports = ('NODEID','C0','C1','C2')
def install_supports(self):
table = self.get_table('supports')
for ix,row in table.data.iterrows():
node = self.get_node(row.NODEID)
for c in [row.C0,row.C1,row.C2]:
if not isnan(c):
node.add_constraint(c)
self.rawdata.supports = table
##test:
f.install_supports()
vars(f.get_node('D'))
Explanation: Supports
End of explanation
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
DC,D,C
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_members = ('MEMBERID','NODEJ','NODEK')
def install_members(self):
table = self.get_table('members')
for ix,m in table.data.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.install_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
Explanation: Members
End of explanation
%%Table releases
MEMBERID,RELEASE
AB,MZK
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_releases = ('MEMBERID','RELEASE')
def install_releases(self):
table = self.get_table('releases',optional=True)
for ix,r in table.data.iterrows():
memb = self.get_member(r.MEMBERID)
memb.add_release(r.RELEASE)
self.rawdata.releases = table
##test:
f.install_releases()
##test:
vars(f.get_member('AB'))
Explanation: Releases
End of explanation
try:
from sst import SST
__SST = SST()
get_section = __SST.section
except ImportError:
def get_section(dsg,fields):
raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
##return [1.] * len(fields.split(',')) # in case you want to do it that way
%%Table properties
MEMBERID,SIZE,IX,A
BC,W460x106,,
AB,W310x97,,
DC,,
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_properties = ('MEMBERID','SIZE','IX','A')
def install_properties(self):
table = self.get_table('properties')
table = self.fill_properties(table)
for ix,row in table.data.iterrows():
memb = self.get_member(row.MEMBERID)
memb.size = row.SIZE
memb.Ix = row.IX
memb.A = row.A
self.rawdata.properties = table
def fill_properties(self,table):
data = table.data
prev = None
for ix,row in data.iterrows():
nf = 0
if type(row.SIZE) in [type(''),type(u'')]:
if isnan(row.IX) or isnan(row.A):
Ix,A = get_section(row.SIZE,'Ix,A')
if isnan(row.IX):
nf += 1
data.loc[ix,'IX'] = Ix
if isnan(row.A):
nf += 1
data.loc[ix,'A'] = A
elif isnan(row.SIZE):
data.loc[ix,'SIZE'] = '' if nf == 0 else prev
prev = data.loc[ix,'SIZE']
table.data = data.fillna(method='ffill')
return table
##test:
f.install_properties()
##test:
vars(f.get_member('DC'))
Explanation: Properties
If the SST module is loadable, member properties may be specified by giving steel shape designations
(such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and
$I_x$ directly (it only tries to look up the properties if these two are not provided).
End of explanation
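For example, a hypothetical properties table that bypasses the SST lookup would carry the numbers directly (the MEMBERID values come from the members table above; the IX and A values below are placeholders, not real section properties, and this cell is illustrative only — it is not meant to replace the table already defined above):
%%Table properties
MEMBERID,SIZE,IX,A
BC,,488000000.,13500.
AB,,222000000.,12300.
DC,,222000000.,12300.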
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')
def install_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.install_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
Explanation: Node Loads
End of explanation
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')
def install_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.data.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.install_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Member Loads
End of explanation
%%Table load_combinations
COMBO,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')
def install_load_combinations(self):
table = self.get_table('load_combinations')
for ix,row in table.data.iterrows():
self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR)
self.rawdata.load_combinations = table
##test:
f.install_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Load Combinations
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def iter_nodeloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):
yield o,l,f
def iter_memberloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact)
Explanation: Load Iterators
End of explanation
##test:
Table.CELLDATA
Explanation: Accumulated Cell Data
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def install_all(self):
self.install_nodes()
self.install_supports()
self.install_members()
self.install_releases()
self.install_properties()
self.install_node_loads()
self.install_member_loads()
self.install_load_combinations()
f = Frame2D(dsname='frame-6b')
f.install_all()
Explanation: Input Everything
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def number_dofs(self):
self.ndof = (3*len(self.nodes))
self.ncons = sum([len(node.constraints) for node in self.nodes.values()])
self.nfree = self.ndof - self.ncons
ifree = 0
icons = self.nfree
self.dofdesc = [None] * self.ndof
for node in self.nodes.values():
for dirn,ix in node.DIRECTIONS.items():
if dirn in node.constraints:
n = icons
icons += 1
else:
n = ifree
ifree += 1
node.dofnums[ix] = n
self.dofdesc[n] = (node,dirn)
##test:
f.number_dofs()
##test:
f.ndof, f.ncons, f.nfree
Explanation: Number the DOFs
End of explanation
def prhead(txt,ul='='):
    """Print a heading and underline it."""
print()
print(txt)
if ul:
print(ul*(len(txt)//len(ul)))
print()
@sl.extend(Frame2D)
class Frame2D:
def print_nodes(self,precision=0,printdof=False):
prhead('Nodes:')
print('Node X Y Constraints DOF #s')
print('---- ----- ----- ----------- ------')
for nid,node in self.nodes.items():
ct = ','.join(sorted(node.constraints,key=lambda t: Node.DIRECTIONS[t]))
dt = ','.join([str(x) for x in node.dofnums])
print('{:<5s}{:>10.{precision}f}{:>10.{precision}f} {:<11s} {}'\
.format(nid,node.x,node.y,ct,dt,precision=precision))
if not printdof:
return
print()
print('DOF# Node Dirn')
print('---- ---- ----')
for i in range(len(self.dofdesc)):
node,dirn = self.dofdesc[i]
print('{:>4d} {:<4s} {}'.format(i,node.id,dirn))
##test:
f.print_nodes(printdof=True)
Explanation: Display Nodes
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def print_members(self,precision=1):
prhead('Members:')
print('Member Node-J Node-K Length dcx dcy Size Ix A Releases')
print('------ ------ ------ ------ ------- ------- -------- -------- ----- --------')
for mid,memb in self.members.items():
nj = memb.nodej
nk = memb.nodek
rt = ','.join(sorted(memb.releases,key=lambda t: Member.RELEASES[t]))
print('{:<7s} {:<6s} {:<6s} {:>8.{precision}f} {:>8.5f} {:>8.5f} {:<10s} {:>10g} {:>10g} {}'\
.format(memb.id,nj.id,nk.id,memb.L,memb.dcx,memb.dcy,str(memb.size),memb.Ix,memb.A,rt,precision=precision))
##test:
f.print_members()
Explanation: Display Members
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def print_loads(self,precision=0):
prhead('Node Loads:')
if self.nodeloads:
print('Type Node FX FY MZ')
print('---- ---- ---------- ---------- ----------')
for lname,node,load in self.nodeloads:
print('{:<4s} {:<4s} {:>10.{precision}f} {:>10.{precision}f} {:>10.{precision}f}'
.format(lname,node.id,load.fx,load.fy,load.mz,precision=precision))
else:
print(" - - - none - - -")
prhead('Member Loads:')
if self.memberloads:
print('Type Member Load')
print('---- ------ ----------------')
for lname,memb,load in self.memberloads:
print("{:<4s} {:<6s} {}".format(lname,memb.id,load))
else:
print(" - - - none - - -")
prhead("Load Combinations:")
if self.loadcombinations:
print('Combo Type Factor')
print('----- ---- ------')
prev = None
for cname,lname,f in self.loadcombinations:
cn = ' '*(len(prev)//2)+'"' if cname == prev else cname
print("{:<5s} {:<4s} {:>6.2f}".format(cn,lname,f))
prev = cname
else:
print(" - - - none - - -")
##test:
f.print_loads()
@sl.extend(Frame2D)
class Frame2D:
def print_input(self):
prhead('Frame '+str(self.dsname)+':')
print()
print(' # of nodal degrees of freedom:',self.ndof)
print(' # of constrained nodal degrees of freedom:',self.ncons)
print('# of unconstrained nodal degrees of freedom:',self.nfree,' (= degree of kinematic indeterminacy)')
m = len(self.members)
r = self.ncons
j = len(self.nodes)
c = len(self.rawdata.releases)
print()
print(' # of members:',m)
print(' # of reactions:',r)
print(' # of nodes:',j)
print(' # of conditions:',c)
print(' degree of static indeterminacy:',(3*m+r)-(3*j+c))
print('\n')
self.print_nodes()
print('\n')
self.print_members()
print('\n')
self.print_loads()
##test:
f.print_input()
Explanation: Display loads
End of explanation |
10,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. gps-receivers
Step1: Here, we used
Step2: As you can see,
Step3: Alternatively, we might use the record id field to key our code off a specific type of NMEA message
Step4: Another alternative would be to add a filter to our pipe so we only receive records that match some criteria
Step5: Note that
Step6: Let's explore other things we can do with our pipe. To begin, you might want to add additional metadata to the records returned from a device. For example, if you were collecting data from multiple devices you might want to "tag" records with a user-specific unique identifier
Step7: Now let's consider calculating some simple statistics, such as our average speed on a trip. When we iterate over the contents of a pipe using a for loop, we receive one record at-a-time until the pipe is empty. We could keep track of a "running" average during iteration, and there are use-cases where that is the best way to solve the problem. However, for moderately-sized data, Pipecat provides a more convenient approach
Step8: Here,
Step9: Consolidating fields using the cache is also perfect for generating plots with a library like Toyplot (http | Python Code:
# nbconvert: hide
from __future__ import absolute_import, division, print_function
import sys
sys.path.append("../features/steps")
import test
socket = test.mock_module("socket")
path = "../data/gps"
client = "172.10.0.20"
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, stop=6)
import pipecat.record
import pipecat.udp
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
for record in pipe:
pipecat.record.dump(record)
Explanation: .. gps-receivers:
GPS Receivers
Most GPS receivers have data logging capabilities that you can use with Pipecat to view navigational information. Some receivers connect to your computer via a serial port or a serial-over-USB cable that acts like a traditional serial port. Others can push data to a network socket. For this demonstration, we will receive GPS data sent from an iPhone to a UDP socket:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, stop=6)
import pipecat.device.gps
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
for record in pipe:
pipecat.record.dump(record)
Explanation: Here, we used :func:pipecat.udp.receive to open a UDP socket listening on port 7777 on all available network interfaces ("0.0.0.0") and convert the received messages into Pipecat :ref:records, which we dump to the console. Note that each record includes the address of the client (the phone in this case), along with a "message" field containing the raw data of the message. In this case the raw data is in NMEA format, a widely-used standard for exchanging navigational data. To decode the contents of each message, we add the appropriate Pipecat device to the end of the pipe:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, start=100, stop=110)
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
for record in pipe:
if "latitude" in record:
print("Latitude:", record["latitude"], "Longitude:", record["longitude"])
Explanation: As you can see, :func:pipecat.device.gps.nmea has converted the raw NMEA messages into records containing human-readable navigational fields with appropriate physical units. Note that unlike the :ref:battery-chargers example, not every record produced by the GPS receiver has the same fields. The NMEA standard includes many different types of messages, and most GPS receivers will produce more than one type. This will increase the complexity of our code - for example, we might have to test for the presence of a field before extracting it from a record:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, start=100, stop=120)
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
for record in pipe:
if record["id"] == "PASHR":
print("Pitch:", record["pitch"])
Explanation: Alternatively, we might use the record id field to key our code off a specific type of NMEA message:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, start=100, stop=120)
import pipecat.filter
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
pipe = pipecat.filter.keep(pipe, key="id", value="GPGLL")
for record in pipe:
pipecat.record.dump(record)
Explanation: Another alternative would be to add a filter to our pipe so we only receive records that match some criteria:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, start=100, stop=120)
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
for record in pipe:
if "speed" in record:
print(record["speed"].to(pipecat.units.mph))
Explanation: Note that :func:pipecat.filter.keep discards all records that don't meet the given criteria, which allows our downstream code to rely on the availability of specific fields.
Regardless of the logic you employ to identify fields of interest, Pipecat always makes it easy to convert units safely and explicitly:
End of explanation
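Because the quantities behave like Pint values, units can also be combined arithmetically. A small sketch (assuming pipecat.units is a standard Pint-style registry; the speed value here is made up purely for illustration):
example_speed = 10 * pipecat.units.mph   # hypothetical value, just to show the conversion
print(example_speed.to(pipecat.units.meter / pipecat.units.second))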
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client, start=100, stop=115)
import pipecat.utility
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
pipe = pipecat.filter.keep(pipe, key="id", value="GPGLL")
pipe = pipecat.utility.add_field(pipe, "serial", "1237V")
for record in pipe:
pipecat.record.dump(record)
Explanation: Let's explore other things we can do with our pipe. To begin, you might want to add additional metadata to the records returned from a device. For example, if you were collecting data from multiple devices you might want to "tag" records with a user-specific unique identifier:
End of explanation
# nbconvert: hide
socket.socket().recvfrom.side_effect = test.recvfrom_file(path=path, client=client)
import pipecat.store
pipe = pipecat.udp.receive(address=("0.0.0.0", 7777), maxsize=1024)
pipe = pipecat.device.gps.nmea(pipe, key="message")
pipe = pipecat.store.cache(pipe)
for record in pipe:
pass
print(pipe.table["speed"])
Explanation: Now let's consider calculating some simple statistics, such as our average speed on a trip. When we iterate over the contents of a pipe using a for loop, we receive one record at-a-time until the pipe is empty. We could keep track of a "running" average during iteration, and there are use-cases where that is the best way to solve the problem. However, for moderately-sized data, Pipecat provides a more convenient approach:
End of explanation
print("Average speed:", pipe.table["speed"].mean().to(pipecat.units.mph))
Explanation: Here, :func:pipecat.store.cache creates an in-memory cache that stores every record it receives. We have a do-nothing for loop that reads data from the GPS receiver to populate the cache. Once that's complete, we can use the cache table attribute to retrieve data from the cache using the same keys and syntax we would use with a record. Unlike a record, the cache returns every value for a given key at once (using a Numpy array), which makes it easy to compute the statistics we're interested in:
End of explanation
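The cached arrays support other reductions as well; for example (a small sketch reusing the cache built above, assuming the cached values behave like Numpy arrays with units):
print("Top speed:", pipe.table["speed"].max().to(pipecat.units.mph))
print("Number of cached speed values:", len(pipe.table["speed"]))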
import toyplot
canvas = toyplot.Canvas(width=600, height=400)
axes = canvas.cartesian(grid=(2, 1, 0), xlabel="Record #", ylabel="Speed (MPH)")
axes.plot(pipe.table["speed"].to(pipecat.units.mph))
axes = canvas.cartesian(grid=(2, 1, 1), xlabel="Record #", ylabel="Track")
axes.plot(pipe.table["track"]);
Explanation: Consolidating fields using the cache is also perfect for generating plots with a library like Toyplot (http://toyplot.readthedocs.io):
End of explanation |
10,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNTK 103 Part A
Step1: Data download
We will download the data into local machine. The MNIST database is a standard handwritten digits that has been widely used for training and testing of machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images with each image being 28 x 28 pixels. This set is easy to use visualize and train on any computer.
Step2: Download the data
The MNIST data is provided as train and test set. Training set has 60000 images while the test set has 10000 images. Let us download the data.
Step3: Visualize the data
Step4: Save the images
Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels becomes an array of length 784 data points).
The labels are encoded as 1-hot encoding (label of 3 with 10 digits becomes 0001000000, where the first index corresponds to digit 0 and the last one corresponds to digit 9. | Python Code:
# Import the relevant modules to be used later
from __future__ import print_function
import gzip
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import struct
import sys
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
Explanation: CNTK 103 Part A: MNIST Data Loader
This tutorial is targeted to individuals who are new to CNTK and to machine learning. We assume you have completed or are familiar with CNTK 101 and 102. In this tutorial, we will download and pre-process the MNIST digit images to be used for building different models to recognize handwritten digits. We will extend CNTK 101 and 102 to be applied to this data set. Additionally we will introduce a convolutional network to achieve superior performance. This is the first example, where we will train and evaluate a neural network based model on read real world data.
CNTK 103 tutorial is divided into multiple parts:
- Part A: Familiarize with the MNIST database that will be used later in the tutorial
- Subsequent parts in this 103 series would be using the MNIST data with different types of networks.
End of explanation
# Functions to load MNIST images and unpack into train and test set.
# - loadData reads image data and formats into a 28x28 long array
# - loadLabels reads the corresponding labels data, 1 for each image
# - load packs the downloaded image and labels data into a combined format to be read later by
# CNTK text reader
def loadData(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x3080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))[0]
if n != cimg:
raise Exception('Invalid file: expected {0} entries.'.format(cimg))
crow = struct.unpack('>I', gz.read(4))[0]
ccol = struct.unpack('>I', gz.read(4))[0]
if crow != 28 or ccol != 28:
raise Exception('Invalid file: expected 28 rows/cols per image.')
# Read data.
res = np.fromstring(gz.read(cimg * crow * ccol), dtype = np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, crow * ccol))
def loadLabels(src, cimg):
print ('Downloading ' + src)
gzfname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
with gzip.open(gzfname) as gz:
n = struct.unpack('I', gz.read(4))
# Read magic number.
if n[0] != 0x1080000:
raise Exception('Invalid file: unexpected magic number.')
# Read number of entries.
n = struct.unpack('>I', gz.read(4))
if n[0] != cimg:
raise Exception('Invalid file: expected {0} rows.'.format(cimg))
# Read labels.
res = np.fromstring(gz.read(cimg), dtype = np.uint8)
finally:
os.remove(gzfname)
return res.reshape((cimg, 1))
def try_download(dataSrc, labelsSrc, cimg):
data = loadData(dataSrc, cimg)
labels = loadLabels(labelsSrc, cimg)
return np.hstack((data, labels))
Explanation: Data download
We will download the data onto the local machine. The MNIST database is a standard dataset of handwritten digits that has been widely used for training and testing machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images, with each image being 28 x 28 pixels. This set is easy to use, visualize and train on any computer.
End of explanation
# URLs for the train image and labels data
url_train_image = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz'
url_train_labels = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz'
num_train_samples = 60000
print("Downloading train data")
train = try_download(url_train_image, url_train_labels, num_train_samples)
url_test_image = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz'
url_test_labels = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
num_test_samples = 10000
print("Downloading test data")
test = try_download(url_test_image, url_test_labels, num_test_samples)
Explanation: Download the data
The MNIST data is provided as train and test set. Training set has 60000 images while the test set has 10000 images. Let us download the data.
End of explanation
# Plot a random image
sample_number = 5001
plt.imshow(train[sample_number,:-1].reshape(28,28), cmap="gray_r")
plt.axis('off')
print("Image Label: ", train[sample_number,-1])
Explanation: Visualize the data
End of explanation
# Save the data files into a format compatible with CNTK text reader
def savetxt(filename, ndarray):
dir = os.path.dirname(filename)
if not os.path.exists(dir):
os.makedirs(dir)
if not os.path.isfile(filename):
print("Saving", filename )
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
else:
print("File already exists", filename)
# Save the train and test files (prefer our default path for the data)
data_dir = os.path.join("..", "Examples", "Image", "DataSets", "MNIST")
if not os.path.exists(data_dir):
data_dir = os.path.join("data", "MNIST")
print ('Writing train text file...')
savetxt(os.path.join(data_dir, "Train-28x28_cntk_text.txt"), train)
print ('Writing test text file...')
savetxt(os.path.join(data_dir, "Test-28x28_cntk_text.txt"), test)
print('Done')
Explanation: Save the images
Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels become an array of length 784 data points).
The labels are encoded as 1-hot encoding (label of 3 with 10 digits becomes 0001000000, where the first index corresponds to digit 0 and the last one corresponds to digit 9).
End of explanation |
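As a quick illustration of the 1-hot convention described above (not part of the original tutorial), the encoded row for the digit 3 can be printed directly:
# 1-hot row for label 3: a single 1 in position 3, zeros elsewhere
print(' '.join(np.eye(10, dtype=np.uint)[3].astype(str)))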
10,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions
Step1: Create sample data set
Step2: Impute | Python Code:
import pandas as pd # pandas for handling mixed data sets
import numpy as np # numpy for basic math and matrix operations
Explanation: License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Simple imputation - Pandas and numpy
Imports
End of explanation
scratch_df = pd.DataFrame({'x1': [0, 1, 2, 3, np.nan, 5, 6, 7, np.nan, 8, 9]})
scratch_df
Explanation: Create sample data set
End of explanation
scratch_df['x1_impute'] = scratch_df['x1'].fillna(scratch_df['x1'].mean())
scratch_df
Explanation: Impute
End of explanation |
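A common variation (not in the original example) is to impute with the median instead of the mean, which is more robust to outliers:
# Median imputation as an alternative to mean imputation
scratch_df['x1_impute_median'] = scratch_df['x1'].fillna(scratch_df['x1'].median())
scratch_df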
10,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample weight adjustment
The objective of this tutorial is to familiarize ourselves with SampleWeight the samplics class for adjusting sample weights. In practice, it is necessary to adjust base or design sample weights obtained directly from the random sample mechanism. These adjustments are done to correct for nonresponse, reduce effects of extreme/large weights, better align with known auxiliary information, and more. Specifically in this tutorial, we will
Step1: Design (base) weight <a name="section1"></a>
The design weight is the inverse of the overall probability of selection which is the product of the first and second probabilities of selection.
Step2: For the purposes of this illustration of handling non-response, we first need to incorporate some household non-response into our example. That is, we simulate the non-response status and store it in the variable response_status. The variable response_status has four possible values
Step3: Nonresponse adjustment <a name="section2"></a>
In general, the sample weights are adjusted to redistribute the sample weights of all eligible units for which there is no sufficient response (unit level nonresponse) to the sampling units that sufficiently responded to the survey. This adjustment is done within adjustment classes or domains. Note that the determination of the response categories (unit response, item response, ineligible, etc.) is outside of the scope of this tutorial.
Also, the weights of the sampling units with unknown eligibility are redistributed to the rest of the sampling units. In general, ineligible sampling units receive weights from the sampling units with unknown eligibility since eligible sampling units can be part of the unknown pool.
The method adjust() has a boolean parameter unknown_to_inelig which controls how the sample weights of the unknown is redistributed. By default, adjust() redistribute the sample weights of the sampling units of the unknown to the ineligibles (unknown_to_inelig=True). If we do not wish to redistribute the sample weights of the unknowns to the ineligibles then we just set the flag to False (unknown_to_inelig=Fasle).
In the snippet of code below, we adjust the weight within clusters that is we use clusters as our adjustment classes. Note that we run the nonresponse adjustment twice, the first time with unknown_to_inelig=True (nr_weight) and the second time with the flag equal to False (nr_weight2). With unknown_to_inelig=True, the ineligible received part of the sample weights from the unknowns. Hence, the sample weights for the respondent is less than when the flag is False. With unknown_to_inelig=Fasle, the ineligible did Not receive any weights from the unknowns. Hence, the sample weights for the ineligible units remain the same before and after adjustment. In a real survey, the statistician may decide on the best non-response strategy based on the available information.
Step4: Important. The default call of adjust() expects standard codes for response status that is "in", "rr", "nr", and "uk" where "in" means ineligible, "rr" means respondent, "nr" means non-respondent, and "uk" means unknown eligibility.
In the call above, if we omit the parameter response_dict, then the run would fail with an assertion error message. The current error message is the following
Step5: To use response_status2, we need to map the values 100, 200, 300 and 999 to "in", "rr", "nr", and "uk". This mapping is done below using the Python dictionary status_mapping2. Using status_mapping2 in the function call adjust() will lead to the same adjustment as in the previous run i.e. nr_weight and nr_weight3 contain the same adjusted weights.
Step6: If the response status variable only takes values "in", "nr", "rr" and "uk", then it is not necessary to provide the mapping dictionary to the function i.e. resp_dict can be omitted from the function call adjust().
Step7: Poststratification <a name="section3"></a>
Poststratification is useful to compensate for under-representation of the sample or to correct for nonsampling error. The most common poststratification method consists of adjusting the sample weights to ensure that they sum to known control values from reliable souces by adjustment classes (domains). Poststratification classes can be formed using variables beyond the ones involved in the sampling design. For example, socio-economic variables such as age group, gender, race and education are often used to form poststratification classes/cells.
Important
Step8: The snippet of code below shows that the poststratified sample weights sum to the expected control totals that is 3700 households for East, 1500 for North, 2800 for South and 6500 for West.
Step9: The crosstable below shows that only one adjustment factor was calculated and applied per adjustment class or region.
Step10: In some surveys, there is interest in keeping relative distribution of strata to some known distribution. For example, WHO EPI vaccination surveys often postratify sample weights to ensure that relative sizes of strata reflect offcial statistics e.g. census data. In most cases, the strata are based on some administrative divisions.
For example, assume that according to census data that East contains 25% of the households, North contains 10%, South contains 20% and West contains 45%. We can poststratify using the snippet of code below.
Step11: Calibration <a name="section4"></a>
Calibration is a more general concept for adjusting sample weights to sum to known constants. In this tutorial, we consider the generalized regression (GREG) class of calibration. Assume that we have $\hat{\mathbf{Y}} = \sum_{i \in s} w_i y_i$ and know population totals $\mathbf{X} = (\mathbf{X}_1, ..., \mathbf{X}_p)^T$ are available. Working under the model $Y_i | \mathbf{x}_i = \mathbf{x}^T_i \mathbf{\beta} + \epsilon_i$, the GREG estimator of the population total is
$$\hat{\mathbf{Y}}_{GR} = \hat{\mathbf{Y}} + (\mathbf{X} - \hat{\mathbf{X}})^T\hat{\mathbf{B}}$$
where $\hat{\mathbf{B}}$ is the weighted least squares estimate of $\mathbf{\beta}$ and $\hat{\mathbf{X}}$ is the survey estimate of $\mathbf{X}$. The essential of the GREG approach is, under the regression model, to find the adjusted weights $w^{*}i$ that are the closest to $w_i$, to minimize $h(z) = \frac{\sum{i \in s} c_i(w_i - z_i)}{w_i}$.
Let us simulate three auxiliary variables that is education, poverty and under_five (number of children under five in the household) and assume that we have the following control totals.
* Total number of under five children
Step12: We now will calibrate the nonreponse weight (nr_weight) to ensure that the estimated number of households in poverty is equal to 4,700 and the estimated total number of children under five is 30,8500. The control numbers 4,700 and 30,800 are obtained from the table above.
The class SampleWeight() uses the method calibrate(samp_weight, aux_vars, control, domain, scale, bounded, modified) to adjust the weight using the GREG approach.
* The contol values must be stored in a python dictionnary i.e. totals = {"poverty"
Step13: We can confirm that the estimated totals for the auxiliary variables are equal to their control values.
Step14: If we want to control by domain then we can do so using the parameter domain of calibrate(). First we need to update the python dictionary holding the control values. Now, those values have to be provided for each domain. Note that the dictionary is now a nested dictionary where the higher level keys hold the domain values i.e. East, North, South and West. Then the higher level values of the dictionary are the dictionaries providing mapping for the auxiliary variables and the corresponding control values.
Step15: Note that the GREG domain estimates above do not have the additive property. That is the GREG domain estimates do not sum to the overal GREG estimate. To illustrate this, let's assume that we want to estimate the number of households.
Step16: Note that with the additive flag equal to True, the sum of the domain estimates is equal to the GREG overal estimate.
Step17: Normalization <a name="section5"></a>
DHS and MICS normalize the final sample weights to sum to the sample size. We can use the class method normalize() to ensure that the sample weight sum to some constant across the sample or by normalization domain e.g. stratum.
Important
Step18: When normalize() is called with only the parameter sample_weight then the sample weights are normalize to sum to the length of the sample weight vector. | Python Code:
import numpy as np
import pandas as pd
import samplics
from samplics.datasets import PSUSample, SSUSample
from samplics.weighting import SampleWeight
Explanation: Sample weight adjustment
The objective of this tutorial is to familiarize ourselves with SampleWeight the samplics class for adjusting sample weights. In practice, it is necessary to adjust base or design sample weights obtained directly from the random sample mechanism. These adjustments are done to correct for nonresponse, reduce effects of extreme/large weights, better align with known auxiliary information, and more. Specifically in this tutorial, we will:
* learn how to use the method adjust() to redistribute sample weights to account for nonresponse and unknown eligibility,
* learn how to use the method poststratify() to ensure that the sample weights sum to known control totals or that the relative distribution of domains is forced to known distributions,
* learn how to use the method calibrate() to adjust sample weight using auxiliary information under the regression model,
* learn how to use the method normalize() to ensure that the sample weights sum to known constants.
To run the code in this notebook, we will use the dataset that was developed in the previous tutorial on sample selection.
End of explanation
# Load PSU sample data
psu_sample_cls = PSUSample()
psu_sample_cls.load_data()
psu_sample = psu_sample_cls.data
# Load SSU sample data
ssu_sample_cls = SSUSample()
ssu_sample_cls.load_data()
ssu_sample = ssu_sample_cls.data
full_sample = pd.merge(
psu_sample[["cluster", "region", "psu_prob"]],
ssu_sample[["cluster", "household", "ssu_prob"]],
on="cluster"
)
full_sample["inclusion_prob"] = full_sample["psu_prob"] * full_sample["ssu_prob"]
full_sample["design_weight"] = 1 / full_sample["inclusion_prob"]
full_sample.head(15)
Explanation: Design (base) weight <a name="section1"></a>
The design weight is the inverse of the overall probability of selection which is the product of the first and second probabilities of selection.
End of explanation
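As a quick numerical check (illustrative values only): a household whose cluster was selected with probability 0.05 and which was then selected with probability 0.20 within that cluster has an overall inclusion probability of 0.01, hence a design weight of 100:
psu_p, ssu_p = 0.05, 0.20          # made-up selection probabilities
print(1 / (psu_p * ssu_p))         # design weight = 100.0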
np.random.seed(7)
full_sample["response_status"] = np.random.choice(
["ineligible", "respondent", "non-respondent", "unknown"],
size=full_sample.shape[0],
p=(0.10, 0.70, 0.15, 0.05)
)
full_sample[["cluster", "region", "design_weight", "response_status"]].head(15)
Explanation: For the purposes of this illustration of handling non-response, we first need to incorporate some household non-response into our example. That is, we simulate the non-response status and store it in the variable response_status. The variable response_status has four possible values: ineligible which indicates that the sampling unit is not eligible for the survey, respondent which indicates that the sampling unit responded to the survey, non-respondent which indicates that the sampling unit did not respond to the survey, and unknown means that we are not able to infer the status of the sampling unit i.e. we do not know whether the sampling unit is eligible or not to the survey.
End of explanation
status_mapping = {"in": "ineligible", "rr": "respondent", "nr": "non-respondent", "uk": "unknown"}
full_sample["nr_weight"] = SampleWeight().adjust(
samp_weight=full_sample["design_weight"],
adjust_class=full_sample[["region", "cluster"]],
resp_status=full_sample["response_status"],
resp_dict=status_mapping,
)
full_sample["nr_weight2"] = SampleWeight().adjust(
samp_weight=full_sample["design_weight"],
adjust_class=full_sample[["region", "cluster"]],
resp_status=full_sample["response_status"],
resp_dict=status_mapping,
unknown_to_inelig=False,
)
full_sample[
["cluster", "region", "design_weight", "response_status", "nr_weight", "nr_weight2"]
].drop_duplicates().head(15)
Explanation: Nonresponse adjustment <a name="section2"></a>
In general, the sample weights are adjusted to redistribute the sample weights of all eligible units for which there is no sufficient response (unit level nonresponse) to the sampling units that sufficiently responded to the survey. This adjustment is done within adjustment classes or domains. Note that the determination of the response categories (unit response, item response, ineligible, etc.) is outside of the scope of this tutorial.
Also, the weights of the sampling units with unknown eligibility are redistributed to the rest of the sampling units. In general, ineligible sampling units receive weights from the sampling units with unknown eligibility since eligible sampling units can be part of the unknown pool.
The method adjust() has a boolean parameter unknown_to_inelig which controls how the sample weights of the unknowns are redistributed. By default, adjust() redistributes the sample weights of the sampling units with unknown eligibility to the ineligibles (unknown_to_inelig=True). If we do not wish to redistribute the sample weights of the unknowns to the ineligibles, we simply set the flag to False (unknown_to_inelig=False).
In the snippet of code below, we adjust the weights within clusters; that is, we use clusters as our adjustment classes. Note that we run the nonresponse adjustment twice, the first time with unknown_to_inelig=True (nr_weight) and the second time with the flag equal to False (nr_weight2). With unknown_to_inelig=True, the ineligibles receive part of the sample weights from the unknowns; hence the sample weights for the respondents are smaller than when the flag is False. With unknown_to_inelig=False, the ineligibles do not receive any weights from the unknowns; hence the sample weights for the ineligible units remain the same before and after adjustment. In a real survey, the statistician may decide on the best non-response strategy based on the available information.
End of explanation
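To see what the adjustment has to redistribute, it can help to tabulate the design weights by response status within each adjustment class before calling adjust() (a quick diagnostic using plain pandas, not part of the samplics API):
full_sample.groupby(["region", "cluster", "response_status"])["design_weight"].sum().head(12)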
response_status2 = np.repeat(100, full_sample["response_status"].shape[0])
response_status2[full_sample["response_status"] == "non-respondent"] = 200
response_status2[full_sample["response_status"] == "respondent"] = 300
response_status2[full_sample["response_status"] == "unknown"] = 999
pd.crosstab(response_status2, full_sample["response_status"])
Explanation: Important. The default call of adjust() expects standard codes for response status that is "in", "rr", "nr", and "uk" where "in" means ineligible, "rr" means respondent, "nr" means non-respondent, and "uk" means unknown eligibility.
In the call above, if we omit the parameter response_dict, then the run would fail with an assertion error message. The current error message is the following: The response status must only contains values in ('in', 'rr', 'nr', 'uk') or the mapping should be provided using response_dict parameter. For the call to run without using response_dict, it is necessary that the response status takes only values in the standard codes i.e. ("in", "rr", "nr", "uk"). The variable associated with response_status can contain any code but a mapping is necessary when the response variable is not constructed using the standard codes.
To further illustrate the mapping of response status, let's assume that we have response_status2 which has the values 100 for ineligible, 200 for non-respondent, 300 for respondent, and 999 for unknown.
End of explanation
status_mapping2 = {"in": 100, "nr": 200, "rr": 300, "uk": 999}
full_sample["nr_weight3"] = SampleWeight().adjust(
samp_weight=full_sample["design_weight"],
adjust_class=full_sample[["region", "cluster"]],
resp_status=response_status2,
resp_dict=status_mapping2,
)
full_sample[["cluster", "region", "response_status", "nr_weight", "nr_weight3"]].drop_duplicates().head()
Explanation: To use response_status2, we need to map the values 100, 200, 300 and 999 to "in", "rr", "nr", and "uk". This mapping is done below using the Python dictionary status_mapping2. Using status_mapping2 in the function call adjust() will lead to the same adjustment as in the previous run i.e. nr_weight and nr_weight3 contain the same adjusted weights.
End of explanation
response_status3 = np.repeat("in", full_sample["response_status"].shape[0])
response_status3[full_sample["response_status"] == "non-respondent"] = "nr"
response_status3[full_sample["response_status"] == "respondent"] = "rr"
response_status3[full_sample["response_status"] == "unknown"] = "uk"
full_sample["nr_weight4"] = SampleWeight().adjust(
samp_weight=full_sample["design_weight"],
adjust_class=full_sample[["region", "cluster"]],
resp_status=response_status3,
)
full_sample[["cluster", "region", "response_status", "nr_weight", "nr_weight4"]].drop_duplicates().head()
# Just dropping a couple of variables not needed for the rest of the tutorial
full_sample.drop(
columns=["psu_prob", "ssu_prob", "inclusion_prob", "nr_weight2", "nr_weight3", "nr_weight4"], inplace=True
)
Explanation: If the response status variable only takes values "in", "nr", "rr" and "uk", then it is not necessary to provide the mapping dictionary to the function i.e. resp_dict can be omitted from the function call adjust().
End of explanation
census_households = {"East": 3700, "North": 1500, "South": 2800, "West": 6500}
full_sample["ps_weight"] = SampleWeight().poststratify(
samp_weight=full_sample["nr_weight"],
control=census_households,
domain=full_sample["region"]
)
full_sample.head(15)
Explanation: Poststratification <a name="section3"></a>
Poststratification is useful to compensate for under-representation of the sample or to correct for nonsampling error. The most common poststratification method consists of adjusting the sample weights to ensure that they sum to known control values from reliable sources by adjustment classes (domains). Poststratification classes can be formed using variables beyond the ones involved in the sampling design. For example, socio-economic variables such as age group, gender, race and education are often used to form poststratification classes/cells.
Important: poststratifying to totals that are known to be out of date, and thus likely inaccurate and unreliable will not improve the estimate. Use this with caution.
Let's assume that we have a reliable external source e.g. a recent census that provides the number of households by region. The external source has the following control data: 3700 households for East, 1500 for North, 2800 for South and 6500 for West.
We use the method poststratify() to ensure that the poststratified sample weights (ps_weight) sum to the known control totals by region. Note that the control totals are provided using the Python dictionary census_households.
End of explanation
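Conceptually, the poststratification factor for each region is just the control total divided by the current sum of weights in that region. A hand computation (a sketch of the idea, not the library internals) looks like this:
current_totals = full_sample.groupby("region")["nr_weight"].sum()
pd.Series(census_households) / current_totals   # one adjustment factor per region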
sum_of_weights = full_sample[["region", "nr_weight", "ps_weight"]].groupby("region").sum()
sum_of_weights.reset_index(inplace=True)
sum_of_weights.head()
Explanation: The snippet of code below shows that the poststratified sample weights sum to the expected control totals that is 3700 households for East, 1500 for North, 2800 for South and 6500 for West.
End of explanation
full_sample["ps_adjust_fct"] = round(full_sample["ps_weight"] / full_sample["nr_weight"], 12)
pd.crosstab(full_sample["ps_adjust_fct"], full_sample["region"])
Explanation: The crosstable below shows that only one adjustment factor was calculated and applied per adjustment class or region.
End of explanation
known_ratios = {"East": 0.25, "North": 0.10, "South": 0.20, "West": 0.45}
full_sample["ps_weight2"] = SampleWeight().poststratify(
samp_weight=full_sample["nr_weight"], factor=known_ratios, domain=full_sample["region"]
)
full_sample.head()
sum_of_weights2 = full_sample[["region", "nr_weight", "ps_weight2"]].groupby("region").sum()
sum_of_weights2.reset_index(inplace=True)
sum_of_weights2["ratio"] = sum_of_weights2["ps_weight2"] / sum(sum_of_weights2["ps_weight2"])
sum_of_weights2.head()
Explanation: In some surveys, there is interest in keeping the relative distribution of strata at some known distribution. For example, WHO EPI vaccination surveys often poststratify sample weights to ensure that the relative sizes of strata reflect official statistics, e.g. census data. In most cases, the strata are based on some administrative divisions.
For example, assume that according to census data that East contains 25% of the households, North contains 10%, South contains 20% and West contains 45%. We can poststratify using the snippet of code below.
End of explanation
np.random.seed(150)
full_sample["education"] = np.random.choice(("Low", "Medium", "High"), size=150, p=(0.40, 0.50, 0.10))
full_sample["poverty"] = np.random.choice((0, 1), size=150, p=(0.70, 0.30))
full_sample["under_five"] = np.random.choice((0, 1, 2, 3, 4, 5), size=150, p=(0.05, 0.35, 0.25, 0.20, 0.10, 0.05))
full_sample.head()
Explanation: Calibration <a name="section4"></a>
Calibration is a more general concept for adjusting sample weights to sum to known constants. In this tutorial, we consider the generalized regression (GREG) class of calibration. Assume that we have $\hat{\mathbf{Y}} = \sum_{i \in s} w_i y_i$ and that known population totals $\mathbf{X} = (\mathbf{X}_1, ..., \mathbf{X}_p)^T$ are available. Working under the model $Y_i | \mathbf{x}_i = \mathbf{x}^T_i \mathbf{\beta} + \epsilon_i$, the GREG estimator of the population total is
$$\hat{\mathbf{Y}}_{GR} = \hat{\mathbf{Y}} + (\mathbf{X} - \hat{\mathbf{X}})^T\hat{\mathbf{B}}$$
where $\hat{\mathbf{B}}$ is the weighted least squares estimate of $\mathbf{\beta}$ and $\hat{\mathbf{X}}$ is the survey estimate of $\mathbf{X}$. The essence of the GREG approach is, under the regression model, to find the adjusted weights $w^{*}_i$ that are the closest to $w_i$, i.e. to minimize $h(z) = \frac{\sum_{i \in s} c_i(w_i - z_i)}{w_i}$.
Let us simulate three auxiliary variables that is education, poverty and under_five (number of children under five in the household) and assume that we have the following control totals.
* Total number of under five children: 6300 in the East, 4000 in the North, 6500 in the South and 14000 in the West.
* Poverty (Yes: in poverty / No: not in poverty)
| Region &nbsp;| Poverty &nbsp;| Number of households |
|:--------|:--------:|:--------------------:|
| East | No | 2600 |
| | Yes | 1200 |
| North | No | 1500 |
| | Yes | 200 |
| South | No | 1800 |
| | Yes | 1100 |
| West | No | 4500 |
| | Yes | 2200 |
Education (Low: less than secondary, Medium: secondary completed, and High: More than secondary)
| Region | Education | Number of households |
|:--------|:--------:|:------:|
| East | Low | 2000 |
| | Medium | 1400 |
| | High | 350 |
| North | Low | 550 |
| | Medium | 700 |
| | High | 250 |
| South | Low | 1300 |
| | Medium | 1200 |
| | High | 350 |
| West | Low | 2100 |
| | Medium | 4000 |
| | High | 500 |
End of explanation
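Before calibrating, one may compare the survey estimates $\hat{\mathbf{X}}$ of the auxiliary totals with the control values from the tables above (a quick diagnostic, not required by samplics):
print("Estimated households in poverty:", (full_sample["poverty"] * full_sample["nr_weight"]).sum())
print("Estimated children under five: ", (full_sample["under_five"] * full_sample["nr_weight"]).sum())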
totals = {"poverty": 4700, "under_five": 30800}
full_sample["calib_weight"] = SampleWeight().calibrate(
full_sample["nr_weight"], full_sample[["poverty", "under_five"]], totals
)
full_sample[["cluster", "region", "household", "nr_weight", "calib_weight"]].head(15)
Explanation: We will now calibrate the nonresponse weight (nr_weight) to ensure that the estimated number of households in poverty is equal to 4,700 and the estimated total number of children under five is 30,800. The control numbers 4,700 and 30,800 are obtained from the table above.
The class SampleWeight() uses the method calibrate(samp_weight, aux_vars, control, domain, scale, bounded, modified) to adjust the weight using the GREG approach.
* The control values must be stored in a Python dictionary i.e. totals = {"poverty": 4700, "under_five": 30800}. In this case, we have two numerical variables poverty with values in {0, 1} and under_five with values in {0, 1, 2, 3, 4, 5}.
* aux_vars is the matrix of covariates.
End of explanation
poverty = full_sample["poverty"]
under_5 = full_sample["under_five"]
nr_weight = full_sample["nr_weight"]
calib_weight = full_sample["calib_weight"]
print(
f"\nTotal estimated number of poor households was {sum(poverty*nr_weight):.2f} before and {sum(poverty*calib_weight):.2f} after adjustment \n"
)
print(
f"Total estimated number of children under 5 was {sum(under_5*nr_weight):.2f} before and {sum(under_5*calib_weight):.2f} after adjustment \n"
)
Explanation: We can confirm that the estimated totals for the auxiliary variables are equal to their control values.
End of explanation
totals_by_domain = {
"East": {"poverty": 1200, "under_five": 6300},
"North": {"poverty": 200, "under_five": 4000},
"South": {"poverty": 1100, "under_five": 6500},
"West": {"poverty": 2200, "under_five": 14000},
}
full_sample["calib_weight_d"] = SampleWeight().calibrate(
full_sample["nr_weight"],
full_sample[["poverty", "under_five"]],
totals_by_domain, full_sample["region"]
)
full_sample[["cluster", "region", "household", "nr_weight", "calib_weight", "calib_weight_d"]].head(15)
Explanation: If we want to control by domain then we can do so using the parameter domain of calibrate(). First we need to update the python dictionary holding the control values. Now, those values have to be provided for each domain. Note that the dictionary is now a nested dictionary where the higher level keys hold the domain values i.e. East, North, South and West. Then the higher level values of the dictionary are the dictionaries providing mapping for the auxiliary variables and the corresponding control values.
End of explanation
print(f"\nThe number of households using the overall GREG is: {sum(full_sample['calib_weight']):.2f} \n")
print(f"The number of households using the domain GREG is: {sum(full_sample['calib_weight_d']):.2f} \n")
Explanation: Note that the GREG domain estimates above do not have the additive property. That is, the GREG domain estimates do not sum to the overall GREG estimate. To illustrate this, let's assume that we want to estimate the number of households.
End of explanation
totals_by_domain = {
"East": {"poverty": 1200, "under_five": 6300},
"North": {"poverty": 200, "under_five": 4000},
"South": {"poverty": 1100, "under_five": 6500},
"West": {"poverty": 2200, "under_five": 14000},
}
calib_weight3 = SampleWeight().calibrate(
full_sample["nr_weight"],
full_sample[["poverty", "under_five"]],
totals_by_domain,
full_sample["region"],
additive=True,
)
under_5 = np.array(full_sample["under_five"])
print(f"\nEach column can be used to estimate a domain: {np.sum(np.transpose(calib_weight3) * under_5, axis=1)}\n")
print(f"The number of households using the overall GREG is: {sum(full_sample['calib_weight']):.2f} \n")
print(
f"The number of households using the domain GREG is: {sum(full_sample['calib_weight_d']):.2f} - with ADDITIVE=FALSE\n"
)
print(
f"The number of households using the domain GREG is: {np.sum(np.transpose(calib_weight3)):.2f} - with ADDITIVE=TRUE \n"
)
Explanation: Note that with the additive flag equal to True, the sum of the domain estimates is equal to the overall GREG estimate.
End of explanation
full_sample["norm_weight"] = SampleWeight().normalize(full_sample["nr_weight"])
full_sample[["cluster", "region", "nr_weight", "norm_weight"]].head(25)
print((full_sample.shape[0], full_sample["norm_weight"].sum()))
Explanation: Normalization <a name="section5"></a>
DHS and MICS normalize the final sample weights to sum to the sample size. We can use the class method normalize() to ensure that the sample weights sum to some constant across the sample or by normalization domain, e.g. stratum.
Important: normalization is mostly included here for completeness; it is seldom used in large scale household surveys. One major downside of normalized weights is that estimation of totals does not make sense with them.
End of explanation
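The default behaviour can be reproduced by hand, which makes the adjustment explicit (a sketch, assuming normalization to the sample size):
scale = full_sample.shape[0] / full_sample["nr_weight"].sum()
(full_sample["nr_weight"] * scale).sum()   # equals the number of sampled households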
full_sample["norm_weight2"] = SampleWeight().normalize(full_sample["nr_weight"], control=300)
print(full_sample["norm_weight2"].sum())
full_sample["norm_weight3"] = SampleWeight().normalize(full_sample["nr_weight"], domain=full_sample["region"])
weight_sum = full_sample.groupby(["region"]).sum()
weight_sum.reset_index(inplace=True)
weight_sum[["region", "nr_weight", "norm_weight", "norm_weight3"]]
norm_level = {"East": 10, "North": 20, "South": 30, "West": 50}
full_sample["norm_weight4"] = SampleWeight().normalize(full_sample["nr_weight"], norm_level, full_sample["region"])
weight_sum = full_sample.groupby(["region"]).sum()
weight_sum.reset_index(inplace=True)
weight_sum[["region", "nr_weight", "norm_weight", "norm_weight2", "norm_weight3", "norm_weight4",]]
Explanation: When normalize() is called with only the parameter sample_weight, the sample weights are normalized to sum to the length of the sample weight vector.
End of explanation |
10,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machine
SVMs are a set of supervised learning method used for classification, regression and outliers detection
1 Classification
SVC, NuSVC and LinearSVC are classes capable of performing mulit-class classification on datasets.
They take as input two arrays
Step1: SVMs decision function depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in members support_vectors_, suppport_ and n_supoort
Step2: 1.1 Multi-class classifiction
SVC and NuSVC implement the one - against - one approach for mulit-class classifiction. if $n_class$ is the number of the classes, then $n_class \times (n_class-1)/2$ classifiers are constructed adn each one trains data from two classes.
Step3: 1.2 Scores and probabilities
The SVC method decision_funciton gives per-class scores for each sample
1.3 Unbalanced problem
In problem where it is desired to give more importance to cetain classes or certian indivial samples keywords class_weight and sample_weight
1.4 Exmaples
Step4: 2 Regression
The method of Support Vector Classification can be extended to solve the regression problems. This method is called Support Regression.
Step5: 3 Density estimation, novelty detection
One-class SVM is used for novelty detection, that is, given a set of samples, it will detect the soft boundary of that set so as classify new points as belongs to thta set or not. The class is called OneClassSVM | Python Code:
from sklearn import svm
X = [[0,0], [1,1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
clf.predict([[2,2]])
Explanation: Support Vector Machine
SVMs are a set of supervised learning method used for classification, regression and outliers detection
1 Classification
SVC, NuSVC and LinearSVC are classes capable of performing mulit-class classification on datasets.
They take as input two arrays: an array X of size [n_samples, n_features] holding the training samples, and an array of y of class labels(strings or integers), size[n_sample]
End of explanation
clf.support_vectors_
clf.support_
clf.n_support_
Explanation: SVMs decision function depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in members support_vectors_, suppport_ and n_supoort
End of explanation
X = [[0],[1], [2], [3]]
y = [0, 1, 2, 3]
clf = svm.SVC(decision_function_shape='ovo')
clf.fit(X, y)
dec = clf.decision_function([[1]])
dec.shape[1]
clf.decision_function_shape = 'ovr'
dec = clf.decision_function([[1]])
dec.shape[1]
Explanation: 1.1 Multi-class classifiction
SVC and NuSVC implement the one-against-one approach for multi-class classification. If $n_class$ is the number of classes, then $n_class \times (n_class-1)/2$ classifiers are constructed and each one trains on data from two classes.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
np.random.seed(0)
X = np.r_[np.random.randn(20,2)-[2,2], np.random.randn(20, 2)+[2,2]]
y = [0] * 20 + [1]*20
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
w = clf.coef_[0]
a = -w[0]/ w[1]
xx = np.linspace(-5, 5)
yy = a*xx - (clf.intercept_[0]) / w[1]
b = clf.support_vectors_[0]
yy_down = a *xx +(b[1]-a*b[0])
b = clf.support_vectors_[-1]
yy_up = a*xx +(b[1] - a*b[0])
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:,1],
s=80, facecolor='None')
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.show()
Explanation: 1.2 Scores and probabilities
The SVC method decision_function gives per-class scores for each sample
1.3 Unbalanced problem
In problems where it is desired to give more importance to certain classes or to certain individual samples, the keywords class_weight and sample_weight can be used.
1.4 Examples
End of explanation
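# Sections 1.2 and 1.3 above are described but not exercised in code, so here is
# a minimal sketch of both, reusing the X, y arrays from the plot above
# (prob_clf / weighted_clf and the weight value 10 are illustrative choices).
prob_clf = svm.SVC(kernel='linear', probability=True)  # probability=True enables predict_proba
prob_clf.fit(X, y)
print(prob_clf.decision_function(X[:2]))  # signed decision scores for two samples
print(prob_clf.predict_proba(X[:2]))      # class probability estimates (Platt scaling)
weighted_clf = svm.SVC(kernel='linear', class_weight={1: 10})  # give class 1 ten times more weight
weighted_clf.fit(X, y)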
from sklearn import svm
X = [[0,0],[2,2]]
y = [0.5, 2.5]
clf = svm.SVR()
clf.fit(X, y)
clf.predict([[1,1]])
Explanation: 2 Regression
The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5,5,500))
X = 0.3 * np.random.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# create some regular observations
X = 0.3 * np.random.randn(20, 2)
X_test = np.r_[X+2, X-2]
# generate some abnormal novel observations
X_outliers = np.random.uniform(low=-4, high=4, size=(20,2))
# the model
clf = svm.OneClassSVM(nu=0.1, kernel='rbf', gamma=0.4)
clf.fit(X_train)
y_pred_train= clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_outliers)
n_error_trains = y_pred_train[y_pred_train==-1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliners = y_pred_outliers[y_pred_outliers==1].size
# plot the line
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title('Novelty detection')
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='darkred')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
s = 40
b1 = plt.scatter(X_train[:, 0], X_train[:,1], c='white', s=s)
b2 = plt.scatter(X_test[:,0], X_test[:, 1], c='blueviolet', s=s)
c = plt.scatter(X_outliers[:,0], X_outliers[:, 1], c='gold', s=s)
plt.show()
Explanation: 3 Density estimation, novelty detection
One-class SVM is used for novelty detection, that is, given a set of samples, it will detect the soft boundary of that set so as to classify new points as belonging to that set or not. The class is called OneClassSVM
End of explanation |
10,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference
Step1: imports for Python, Pandas
Step2: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source
Step3: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source
Step4: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
Step5: 1. Find the 10 countries with most projects
Step6: 2. Find the top 10 major project themes (using column 'mjtheme_namecode')
Step7: 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in. | Python Code:
import pandas as pd
import numpy as np
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
data source: http://jsonstudio.com/resources/
End of explanation
import json
from pandas.io.json import json_normalize
Explanation: imports for Python, Pandas
End of explanation
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization
End of explanation
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation
bank = pd.read_json('data/world_bank_projects.json')
bank.head()
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
bank.countryname.value_counts().head(10)
Explanation: 1. Find the 10 countries with most projects
End of explanation
names = []
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
names.extend(list(json_normalize(namecode)['name']))
pd.Series(names).value_counts().drop('', axis=0).head(10)  # drop the blank names before taking the top 10
Explanation: 2. Find the top 10 major project themes (using column 'mjtheme_namecode')
End of explanation
codes_names = pd.DataFrame(columns=['code', 'name'])
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
codes_names = pd.concat([codes_names, json_normalize(namecode)])
codes_names_dict = (codes_names[codes_names.name != '']
.drop_duplicates()
.to_dict())
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
cell = json_normalize(namecode).replace('', np.nan)
cell = cell.fillna(codes_names_dict)
bank.set_value(i, 'mjtheme_namecode', cell.to_dict(orient='record'))
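# A quick sanity check (a sketch): count how many theme names are still empty
# after the fill above; 0 means every missing name was filled in.
remaining_empty = 0
for i in bank.index:
    for entry in bank.loc[i, 'mjtheme_namecode']:
        if entry['name'] == '' or pd.isnull(entry['name']):
            remaining_empty += 1
print(remaining_empty)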
Explanation: 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation |
10,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TF-DNNRegressor - SeLU - Spitzer Calibration Data
This script shows a simple example of using the tf.contrib.learn library to create our model.
The code is divided into the following steps
Step1: Load CSVs data
Step2: We only take the first 1000 rows for training/testing and the last 500 rows for evaluation.
This is done so that this script does not consume a lot of kaggle system resources.
Step3: Filtering Categorical and Continuous features
We store the Categorical, Continuous and Target feature names in different variables. This will be helpful in later steps.
Step4: Converting Data into Tensors
When building a TF.Learn model, the input data is specified by means of an Input Builder function. This builder function will not be called until it is later passed to TF.Learn methods such as fit and evaluate. The purpose of this function is to construct the input data, which is represented in the form of Tensors or SparseTensors.
Note that input_fn will be called while constructing the TensorFlow graph, not while running the graph. What it is returning is a representation of the input data as the fundamental unit of TensorFlow computations, a Tensor (or SparseTensor).
More detail on input_fn.
Step5: Selecting and Engineering Features for the Model
We use tf.learn's concept of FeatureColumn which helps in transforming raw data into suitable input features.
These engineered features will be used when we construct our model.
Step6: Defining The Regression Model
Following is the simple DNNRegressor model. More detail about hidden_units, etc can be found here.
model_dir is used to save and restore our model. This is because once we have trained the model we don't want to train it again if we only want to predict on a new data set.
Step7: Training and Evaluating Our Model
add progress bar through python logging
Step8: Track Scalable Growth
Shrunk data set to 23559 Training samples and 7853 Val/Test samples
| n_iters | time (s) | val acc | multicore | gpu |
|---------|----------|---------|-----------|-----|
| 100 | 5.869 | 6.332 | yes | no |
| 200 | 6.380 | 13.178 | yes | no |
| 500 | 8.656 | 54.220 | yes | no |
| 1000 | 12.170 | 66.596 | yes | no |
| 2000 | 19.891 | 62.996 | yes | no |
| 5000 | 43.589 | 76.586 | yes | no |
| 10000 | 80.581 | 66.872 | yes | no |
| 20000 | 162.435 | 78.927 | yes | no |
| 50000 | 535.584 | 75.493 | yes | no |
| 100000 | 1062.656 | 73.162 | yes | no |
Step9: Predicting output for test data
Most of the time the prediction script would be separate from the training script (we need not train on the same data again), but I am providing both in the same script here, as I am not sure if we can create multiple notebooks and somehow share data between them in Kaggle. | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
from matplotlib import pyplot as plt
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler, minmax_scale
from sklearn.metrics import r2_score
from time import time
start0 = time()
plt.rcParams['figure.dpi'] = 300
Explanation: TF-DNNRegressor - SeLU - Spitzer Calibration Data
This script shows a simple example of using the tf.contrib.learn library to create our model.
The code is divided into the following steps:
Load CSVs data
Filtering Categorical and Continuous features
Converting Data into Tensors
Selecting and Engineering Features for the Model
Defining The Regression Model
Training and Evaluating Our Model
Predicting output for test data
v0.1: Added code for data loading, modeling and prediction model.
v0.2: Removed unnecessary output logs.
PS: I was able to get a score of 1295.07972 using this script with 70% (of train.csv) data used for training and rest for evaluation. Script took 2hrs for training and 3000 steps were used.
End of explanation
nSkip = 20
spitzerDataRaw = pd.read_csv('pmap_ch2_0p1s_x4_rmulti_s3_7.csv')#[::nSkip]
PLDpixels = pd.DataFrame({key:spitzerDataRaw[key] for key in spitzerDataRaw.columns.values if 'pix' in key})
PLDpixels
PLDnorm = np.sum(np.array(PLDpixels),axis=1)
PLDpixels = (PLDpixels.T / PLDnorm).T
PLDpixels
spitzerData = spitzerDataRaw.copy()
for key in spitzerDataRaw.columns:
if key in PLDpixels.columns:
spitzerData[key] = PLDpixels[key]
testPLD = np.array(pd.DataFrame({key:spitzerData[key] for key in spitzerData.columns.values if 'pix' in key}))
assert(not sum(abs(testPLD - np.array(PLDpixels))).all())
print('Confirmed that PLD Pixels have been Normalized to Spec')
notFeatures = ['flux', 'fluxerr', 'xerr', 'yerr', 'xycov']
feature_columns = spitzerData.drop(notFeatures,axis=1).columns.values
features = spitzerData.drop(notFeatures,axis=1).values
labels = spitzerData['flux'].values
stdScaler = StandardScaler()
features_scaled = stdScaler.fit_transform(features)
labels_scaled = stdScaler.fit_transform(labels[:,None]).ravel()
x_valtest, x_train, y_valtest, y_train = train_test_split(features_scaled, labels_scaled, test_size=0.6, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(x_valtest, y_valtest, test_size=0.5, random_state=42)
# x_val = minmax_scale(x_val.astype('float32'))
# x_train = minmax_scale(x_train.astype('float32'))
# x_test = minmax_scale(x_test.astype('float32'))
# y_val = minmax_scale(y_val.astype('float32'))
# y_train = minmax_scale(y_train.astype('float32'))
# y_test = minmax_scale(y_test.astype('float32'))
print(x_val.shape[0] , 'validation samples')
print(x_train.shape[0], 'train samples')
print(x_test.shape[0] , 'test samples')
train_df = pd.DataFrame(np.c_[x_train, y_train], columns=list(feature_columns) + ['flux'])
test_df = pd.DataFrame(np.c_[x_test , y_test ], columns=list(feature_columns) + ['flux'])
evaluate_df = pd.DataFrame(np.c_[x_val , y_val ], columns=list(feature_columns) + ['flux'])
Explanation: Load CSVs data
End of explanation
# train_df = df_train_ori.head(1000)
# evaluate_df = df_train_ori.tail(500)
# test_df = df_test_ori.head(1000)
# MODEL_DIR = "tf_model_spitzer/withNormalization_drop50/relu"
# MODEL_DIR = "tf_model_spitzer/adamOptimizer_with_drop50/relu"
MODEL_DIR = "tf_model_spitzer/adamOptimizer_with_drop50/selu/"
print("train_df.shape = " , train_df.shape)
print("test_df.shape = " , test_df.shape)
print("evaluate_df.shape = ", evaluate_df.shape)
Explanation: We only take the first 1000 rows for training/testing and the last 500 rows for evaluation.
This is done so that this script does not consume a lot of kaggle system resources.
End of explanation
# categorical_features = [feature for feature in features if 'cat' in feature]
categorical_features = []
continuous_features = [feature for feature in train_df.columns]# if 'cat' in feature]
LABEL_COLUMN = 'flux'
Explanation: Filtering Categorical and Continuous features
We store the Categorical, Continuous and Target feature names in different variables. This will be helpful in later steps.
End of explanation
# Converting Data into Tensors
def input_fn(df, training = True):
# Creates a dictionary mapping from each continuous feature column name (k) to
# the values of that column stored in a constant Tensor.
continuous_cols = {k: tf.constant(df[k].values)
for k in continuous_features}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
# categorical_cols = {k: tf.SparseTensor(
# indices=[[i, 0] for i in range(df[k].size)],
# values=df[k].values,
# shape=[df[k].size, 1])
# for k in categorical_features}
# Merges the two dictionaries into one.
feature_cols = continuous_cols
# feature_cols = dict(list(continuous_cols.items()) + list(categorical_cols.items()))
if training:
# Converts the label column into a constant Tensor.
label = tf.constant(df[LABEL_COLUMN].values)
# Returns the feature columns and the label.
return feature_cols, label
# Returns the feature columns
return feature_cols
def train_input_fn():
return input_fn(train_df, training=True)
def eval_input_fn():
return input_fn(evaluate_df, training=True)
# def test_input_fn():
# return input_fn(test_df.drop(LABEL_COLUMN,axis=1), training=False)
def test_input_fn():
return input_fn(test_df, training=False)
Explanation: Converting Data into Tensors
When building a TF.Learn model, the input data is specified by means of an Input Builder function. This builder function will not be called until it is later passed to TF.Learn methods such as fit and evaluate. The purpose of this function is to construct the input data, which is represented in the form of Tensors or SparseTensors.
Note that input_fn will be called while constructing the TensorFlow graph, not while running the graph. What it is returning is a representation of the input data as the fundamental unit of TensorFlow computations, a Tensor (or SparseTensor).
More detail on input_fn.
End of explanation
engineered_features = []
for continuous_feature in continuous_features:
engineered_features.append(
tf.contrib.layers.real_valued_column(continuous_feature))
# for categorical_feature in categorical_features:
# sparse_column = tf.contrib.layers.sparse_column_with_hash_bucket(
# categorical_feature, hash_bucket_size=1000)
# engineered_features.append(tf.contrib.layers.embedding_column(sparse_id_column=sparse_column, dimension=16,
# combiner="sum"))
Explanation: Selecting and Engineering Features for the Model
We use tf.learn's concept of FeatureColumn which helps in transforming raw data into suitable input features.
These engineered features will be used when we construct our model.
End of explanation
def selu(z,
scale=1.0507009873554804934193349852946,
alpha=1.6732632423543772848170429916717):
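    # Scaled Exponential Linear Unit: scale * (z if z >= 0 else alpha * (exp(z) - 1));
    # the scale/alpha constants are the self-normalizing values from Klambauer et al. (2017).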
return scale * tf.where(z >= 0.0, z, alpha * tf.nn.elu(z))
nHidden1 = 10
nHidden2 = 5
nHidden3 = 10
regressor = tf.contrib.learn.DNNRegressor(activation_fn=selu, dropout=0.5, optimizer=tf.train.AdamOptimizer,
feature_columns=engineered_features, hidden_units=[nHidden1, nHidden2, nHidden3], model_dir=MODEL_DIR)
Explanation: Defining The Regression Model
Following is the simple DNNRegressor model. More detail about hidden_units, etc can be found here.
model_dir is used to save and restore our model. This is because once we have trained the model we don't want to train it again if we only want to predict on a new data set.
End of explanation
# Training Our Model
nFitSteps = 50000
start = time()
wrap = regressor.fit(input_fn=train_input_fn, steps=nFitSteps)
print('TF Regressor took {} seconds'.format(time()-start))
# Evaluating Our Model
print('Evaluating ...')
results = regressor.evaluate(input_fn=eval_input_fn, steps=1)
for key in sorted(results):
print("{}: {}".format(key, results[key]))
print("Val Acc: {:.3f}".format((1-results['loss'])*100))
Explanation: Training and Evaluating Our Model
add progress bar through python logging
End of explanation
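# The note above asks for progress feedback through Python logging: tf.logging is a
# thin wrapper around the standard logging module, so raising its verbosity makes
# fit() print step/loss lines while training (a sketch; verbosity was set to ERROR
# at the top of the notebook to keep the output quiet).
tf.logging.set_verbosity(tf.logging.INFO)
# regressor.fit(input_fn=train_input_fn, steps=nFitSteps)  # would now log progress
tf.logging.set_verbosity(tf.logging.ERROR)  # restore the quieter setting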
nItersList = [100,200,500,1000,2000,5000,10000,20000,50000,100000]
rtimesList = [5.869, 6.380, 8.656, 12.170, 19.891, 43.589, 80.581, 162.435, 535.584, 1062.656]
valAccList = [6.332, 13.178, 54.220, 66.596, 62.996, 76.586, 66.872, 78.927, 75.493, 73.162]
plt.loglog(nItersList, rtimesList,'o-');
plt.twinx()
plt.semilogx(nItersList, valAccList,'o-', color='orange');
Explanation: Track Scalable Growth
Shrunk data set to 23559 Training samples and 7853 Val/Test samples
| n_iters | time (s) | val acc | multicore | gpu |
|---------|----------|---------|-----------|-----|
| 100 | 5.869 | 6.332 | yes | no |
| 200 | 6.380 | 13.178 | yes | no |
| 500 | 8.656 | 54.220 | yes | no |
| 1000 | 12.170 | 66.596 | yes | no |
| 2000 | 19.891 | 62.996 | yes | no |
| 5000 | 43.589 | 76.586 | yes | no |
| 10000 | 80.581 | 66.872 | yes | no |
| 20000 | 162.435 | 78.927 | yes | no |
| 50000 | 535.584 | 75.493 | yes | no |
| 100000 | 1062.656 | 73.162 | yes | no |
End of explanation
def de_median(x):
return x - np.median(x)
predicted_output = list(regressor.predict(input_fn=test_input_fn))
# x = list(predicted_output)
r2_score(test_df['flux'].values,predicted_output)*100
print('Full notebook took {} seconds'.format(time()-start0))
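# A sketch of predict-only reuse (restored_regressor / restored_predictions are
# illustrative names): because the estimator writes checkpoints to MODEL_DIR,
# re-creating a DNNRegressor with the same model_dir restores those weights,
# so predictions can be made in a later session without calling fit() again.
restored_regressor = tf.contrib.learn.DNNRegressor(
    activation_fn=selu, dropout=0.5, optimizer=tf.train.AdamOptimizer,
    feature_columns=engineered_features,
    hidden_units=[nHidden1, nHidden2, nHidden3], model_dir=MODEL_DIR)
restored_predictions = list(restored_regressor.predict(input_fn=test_input_fn))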
Explanation: Predicting output for test data
Most of the time the prediction script would be separate from the training script (we need not train on the same data again), but I am providing both in the same script here, as I am not sure if we can create multiple notebooks and somehow share data between them in Kaggle.
End of explanation |
10,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
HMMs and the Forward Backward Algorithm
Derivations
Let's start by deriving the forward and backward algorithms from lecture. It's really important you understand how the recursion comes into play. We covered it in lecture, please try not to cheat during this exercise, you won't get anything out of it if you don't try!
Review
Recall that a Hidden Markov Model (HMM) is a particular factorization of a joint distribution representing noisy observations $X_k$ generated from discrete hidden Markov chain $Z_k$.
$$
P(\vec{X}, \vec{Z}) = P(Z_1) P(X_1 \mid Z_1) \prod_{k=2}^T P(Z_k \mid Z_{k-1}) P(X_k \mid Z_k)
$$
and as a bayesian network
Step1: Problem
Step2: Problem | Python Code:
import numpy as np
parts_of_speech = DETERMINER, NOUN, VERB, END = 0, 1, 2, 3
words = THE, DOG, WALKED, IN, PARK, END = 0, 1, 2, 3, 4, 5
# transition probabilities
A = np.array([
# D N V E
[0.1, 0.8, 0.1, 0.0], # D: determiner most likely to go to noun
[0.1, 0.1, 0.6, 0.2], # N: noun most likely to go to verb
[0.4, 0.3, 0.2, 0.1], # V
[0.0, 0.0, 0.0, 1.0]]) # E: end always goes to end
# distribution of parts of speech for the first word of a sentence
pi = np.array([0.4, 0.3, 0.3, 0.0])
# emission probabilities
B = np.array([
# the dog walked in park end
[ 0.8, 0.1, 0.0, 1.0 , 0.0, 0.0], # D
[ 0.1, 0.8, 0.0, 0.0, 0.1, 0.0], # N
[ 0.1, 0.1, 1.0, 0.0, 0.9, 0.0], # V
[ 0. , 0.0, 0.0, 0.0, 0.0, 1.0] # E
])
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
HMMs and the Forward Backward Algorithm
Derivations
Let's start by deriving the forward and backward algorithms from lecture. It's really important you understand how the recursion comes into play. We covered it in lecture, please try not to cheat during this exercise, you won't get anything out of it if you don't try!
Review
Recall that a Hidden Markov Model (HMM) is a particular factorization of a joint distribution representing noisy observations $X_k$ generated from discrete hidden Markov chain $Z_k$.
$$
P(\vec{X}, \vec{Z}) = P(Z_1) P(X_1 \mid Z_1) \prod_{k=2}^T P(Z_k \mid Z_{k-1}) P(X_k \mid Z_k)
$$
and as a bayesian network:
<img src="hmm.png" width=400>
HMM: Parameters
For a Hidden Markov Model with $N$ hidden states and $M$ observed states, there are three row-stochastic parameters $\theta=(A,B,\pi)$,
- Transition matrix $A \in \R^{N \times N}$
$$
A_{ij} = p(Z_t = j | Z_{t-1} = i)
$$
- Emission matrix $B \in \R^{N \times M}$
$$
B_{jk} = p(X_t = k | Z_t = j)
$$
- Initial distribution $\pi \in \R^N$,
$$
\pi_j = p(Z_1 = j)
$$
HMM: Filtering Problem
Filtering means to compute the current belief state $p(z_t | x_1, \dots, x_t,\theta)$.
$$
p(z_t | x_1,\dots,x_t) = \frac{p(x_1,\dots,x_t,z_t)}{p(x_1,\dots,x_t)}
$$
- Given observations $x_{1:t}$ so far, infer $z_t$.
Solved by the forward algorithm.
How do we infer values of hidden variables?
One of the most challenging parts of working with HMMs is to "predict" the values of the hidden variables $z_t$, having observed all the $x_1, \ldots, x_T$.
Computing $p(z_t \mid \X)$ is known as smoothing. More on this soon.
But it turns out that this probability can be computed from two other quantities:
$p(x_1,\dots,x_t,z_t)$, which we are going to label $\alpha_t(z_t)$
$p(x_{t+1},\dots,x_{T} | z_t)$, which we are going to label $\beta_t(z_t)$
Problem: Derive the Forward Algorithm
The forward algorithm computes $\alpha_t(z_t) \equiv p(x_1,\dots,x_t,z_t)$.
The challenge is to frame this in terms of $\alpha_{t-1}(z_{t-1})$. We'll get you started by marginalizing over "one step back". You need to fill in the rest!
$$
\begin{align}
\alpha_t(z_t)
&= \sum_{z_{t-1}} p(x_1, \dots, x_t, z_{t-1}, z_t) \
\dots \
&= B_{z_t,x_t} \sum_{z_{t-1}} \alpha_{t-1}(z_{t-1}) A_{z_{t-1}, z_t}
\end{align}
$$
Hints:
You are trying to pull out $\alpha_{t-1}(z_{t-1}) = p(x_1, \dots, x_{t-1}, z_{t-1})$. Can you factor out $p(x_t)$ and $p(z_t)$ using Bayes' theorem ($P(A,B) = P(A|B)P(B)$)?
Once you do, conditional independence (look for d-separation!) should help simplify
Problem: Derive the Backward Algorithm
The backward algorithm computes $\beta_t(z_t) \equiv p(x_{t+1},\dots,x_{T} | z_t)$
$$
\begin{align}
\beta(z_t)
&= \sum_{z_{t+1}} p(x_{t+1},\dots,x_{T},z_{t+1} | z_t) \
\dots \
&= \sum_{z_{t+1}} A_{z_t, z_{t+1}} B_{z_{t+1}, x_{t+1}} \beta_{t+1}(z_{t+1})
\end{align}
$$
Similar to deriving the forward algorithm, we've gotten you started by marginalizing over "one step forward". Use applications of bayes rule $P(A,B) = P(A|B)P(B)$ and simplifications from conditional independence to get the rest of the way there.
Code example: part of speech tagger
Now that we're comfortable with the theory behind the forward and backward algorithm, let's set up a real example and implement both procedures.
In this example, we observe a sequence of words backed by a latent part of speech variable.
$X$: discrete distribution over bag of words
$Z$: discrete distribution over parts of speech
$A$: the probability of a part of speech given a previous part of speech, e.g, what do we expect to see after a noun?
$B$: the distribution of words given a particular part of speech, e.g, what words are we likely to see if we know it is a verb?
$x_i$'s: a sequence of observed words (a sentence). Note: for both variables we have a special "end" outcome that signals the end of a sentence. This makes sense, as a part of speech tagger would like to have a sense of sentence boundaries.
End of explanation
import numpy as np
np.set_printoptions(suppress=True)
def forward(params, observations):
pi, A, B = params
N = len(observations)
S = pi.shape[0]
alpha = np.zeros((N, S))
# base case
alpha[0, :] = pi * B[:,observations[0]]
# recursive case - YOUR CODE GOES HERE
return (alpha, np.sum(alpha[N-1,:]))
forward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
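# For reference, one possible completion of the recursive case (a sketch with an
# illustrative name -- try your own version in forward() above before peeking).
def forward_completed(params, observations):
    pi, A, B = params
    N = len(observations)
    S = pi.shape[0]
    alpha = np.zeros((N, S))
    alpha[0, :] = pi * B[:, observations[0]]
    for t in range(1, N):
        # alpha_t(z_t) = B[z_t, x_t] * sum_{z_{t-1}} alpha_{t-1}(z_{t-1}) * A[z_{t-1}, z_t]
        alpha[t, :] = B[:, observations[t]] * alpha[t - 1, :].dot(A)
    return (alpha, np.sum(alpha[N - 1, :]))

forward_completed((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])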
Explanation: Problem: Implement the Forward Algorithm
Now it's time to put it all together. We create a table to hold the results and build them up from the front to back.
Along with the results, we return the marginal probability that can be compared with the backward algorithm's below.
End of explanation
def backward(params, observations):
pi, A, B = params
N = len(observations)
S = pi.shape[0]
beta = np.zeros((N, S))
# base case
beta[N-1, :] = 1
# recursive case -- YOUR CODE GOES HERE!
return (beta, np.sum(pi * B[:, observations[0]] * beta[0,:]))
backward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
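# A matching sketch for the backward recursion, plus the smoothing posterior
# p(z_t | x_1..x_T) mentioned earlier (backward_completed / posterior are
# illustrative names; the two returned marginals should agree).
def backward_completed(params, observations):
    pi, A, B = params
    N = len(observations)
    S = pi.shape[0]
    beta = np.zeros((N, S))
    beta[N - 1, :] = 1
    for t in range(N - 2, -1, -1):
        # beta_t(z_t) = sum_{z_{t+1}} A[z_t, z_{t+1}] * B[z_{t+1}, x_{t+1}] * beta_{t+1}(z_{t+1})
        beta[t, :] = A.dot(B[:, observations[t + 1]] * beta[t + 1, :])
    return (beta, np.sum(pi * B[:, observations[0]] * beta[0, :]))

obs = [THE, DOG, WALKED, IN, THE, PARK, END]
alpha, p_fwd = forward_completed((pi, A, B), obs)
beta, p_bwd = backward_completed((pi, A, B), obs)
posterior = alpha * beta / p_fwd   # p(z_t | x_1..x_T); one row of part-of-speech probabilities per word
print(p_fwd, p_bwd)                # both marginals should be (numerically) identical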
Explanation: Problem: Implement the Backward Algorithm
If you implemented both correctly, the second return value from each method should match.
End of explanation |
10,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Group Significant GO terms by Frequently Seen Words
We use data from a 2014 Nature paper
Step1: 2. Count all words in the significant GO term names.
2a. Get list of significant GO term names
Step2: 2b. Get word count in significant GO term names
Step3: 3. Inspect word-count list generated at step 2.
Words like "mitochondrial" can be interesting. Some words will not be interesting, such as "of".
Step4: 4. Create curated list of words based on frequently seen GO term words.
Step5: 5. For each word of interest, create a list of significant GOs whose name contains the word.
Step6: 6. Plot GO terms seen for each word of interest.
6a. Create a convenient goid-to-goobject dictionary
Step7: 6b. Create plots formed by a shared word in the significant GO term's name
Step9: 6c. Example plot for "apoptotic"
Colors | Python Code:
%run goea_nbt3102_fncs.ipynb
goeaobj = get_goeaobj_nbt3102('fdr_bh')
# Read Nature data from Excel file (~400 study genes)
studygeneid2symbol = read_data_nbt3102()
# Run Gene Ontology Enrichment Analysis using Benjamini/Hochberg FDR correction
geneids_study = studygeneid2symbol.keys()
goea_results_all = goeaobj.run_study(geneids_study)
goea_results_sig = [r for r in goea_results_all if r.p_fdr_bh < 0.05]
Explanation: Group Significant GO terms by Frequently Seen Words
We use data from a 2014 Nature paper:
Computational analysis of cell-to-cell heterogeneity
in single-cell RNA-sequencing data reveals hidden
subpopulations of cells
2016 GOATOOLS Manuscript
This iPython notebook demonstrates one approach to explore Gene Ontology Enrichment Analysis (GOEA) results:
1. Create sub-plots containing significant GO terms which share a common word, like RNA.
2. Create detailed reports showing all significant GO terms and all study gene symbols for the common word.
2016 GOATOOLS Manuscript text, summary
The code in this notebook generates the data used in this statement in the GOATOOLS manuscript:
We observed:
93 genes associated with RNA,
47 genes associated with translation,
70 genes associated with mitochondrial or mitochondrian, and
37 genes associated with ribosomal, as reported by GOATOOLS.
2016 GOATOOLS Manuscript text, details
Details summarized here are also found in the file,
nbt3102_GO_word_genes.txt,
which is generated by this iPython notebook.
RNA: 93 study genes, 6 GOs:
0) BP GO:0006364 rRNA processing (8 genes)
1) MF GO:0003723 RNA binding (32 genes)
2) MF GO:0003729 mRNA binding (11 genes)
3) MF GO:0008097 5S rRNA binding (4 genes)
4) MF GO:0019843 rRNA binding (6 genes)
5) MF GO:0044822 poly(A) RNA binding (86 genes)
translation: 47 study genes, 5 GOs:
0) BP GO:0006412 translation (41 genes)
1) BP GO:0006414 translational elongation (7 genes)
2) BP GO:0006417 regulation of translation (9 genes)
3) MF GO:0003746 translation elongation factor activity (5 genes)
4) MF GO:0031369 translation initiation factor binding (4 genes)
mitochond: 70 study genes, 8 GOs:
0) BP GO:0051881 regulation of mitochondrial membrane potential (7 genes)
1) CC GO:0000275 mitochondrial proton-transporting ATP synthase complex, catalytic core F(1) (3 genes)
2) CC GO:0005739 mitochondrion (68 genes)
3) CC GO:0005743 mitochondrial inner membrane (28 genes)
4) CC GO:0005747 mitochondrial respiratory chain complex I (5 genes)
5) CC GO:0005753 mitochondrial proton-transporting ATP synthase complex (4 genes)
6) CC GO:0005758 mitochondrial intermembrane space (7 genes)
7) CC GO:0031966 mitochondrial membrane (6 genes)
ribosomal: 37 study genes, 6 GOs:
0) BP GO:0000028 ribosomal small subunit assembly (9 genes)
1) BP GO:0042274 ribosomal small subunit biogenesis (6 genes)
2) CC GO:0015934 large ribosomal subunit (4 genes)
3) CC GO:0015935 small ribosomal subunit (13 genes)
4) CC GO:0022625 cytosolic large ribosomal subunit (16 genes)
5) CC GO:0022627 cytosolic small ribosomal subunit (19 genes)
Also seen, but not reported in the manuscript:
ribosome*: 41 study genes, 2 GOs:
0) CC GO:0005840 ribosome (35 genes)
1) MF GO:0003735 structural constituent of ribosome (38 genes)
adhesion: 53 study genes, 1 GOs:
0) CC GO:0005925 focal adhesion (53 genes)
endoplasmic: 49 study genes, 3 GOs:
0) CC GO:0005783 endoplasmic reticulum (48 genes)
1) CC GO:0005790 smooth endoplasmic reticulum (5 genes)
2) CC GO:0070971 endoplasmic reticulum exit site (4 genes)
nucleotide: 46 study genes, 1 GOs:
0) MF GO:0000166 nucleotide binding (46 genes)
apoptotic: 42 study genes, 2 GOs:
0) BP GO:0006915 apoptotic process (26 genes)
1) BP GO:0043066 negative regulation of apoptotic process (28 genes)
Methodology
For this exploration, we choose specific sets of GO terms for each plot based on frequently seen words in the GO term name. Examples of GO term names include "rRNA processing", "poly(A) RNA binding", and "5S rRNA binding". The common word for these GO terms is "RNA".
Steps:
1. Run a Gene Ontology Enrichment Analysis.
2. Count all words in the significant GO term names.
3. Inspect word-count list from step 2.
4. Create curated list of words based on frequently seen GO term words.
5. Get significant GO terms which contain the words of interest.
6. Plot GO terms seen for each word of interest.
7. Print a report with full details
1. Run GOEA. Save results.
End of explanation
from __future__ import print_function
go_names = [r.name for r in goea_results_sig]
print(len(go_names)) # Includes ONLY signficant results
Explanation: 2. Count all words in the significant GO term names.
2a. Get list of significant GO term names
End of explanation
import collections as cx
word2cnt = cx.Counter([word for name in go_names for word in name.split()])
Explanation: 2b. Get word count in significant GO term names
End of explanation
# Print 10 most common words found in significant GO term names
print(word2cnt.most_common(10))
Explanation: 3. Inspect word-count list generated at step 2.
Words like "mitochondrial" can be interesting. Some words will not be interesting, such as "of".
End of explanation
freq_seen = ['RNA', 'translation', 'mitochond', 'ribosomal', 'ribosome',
'adhesion', 'endoplasmic', 'nucleotide', 'apoptotic']
Explanation: 4. Create curated list of words based on frequently seen GO term words.
End of explanation
# Collect significant GOs for words in freq_seen (unordered)
word2siggos = cx.defaultdict(set)
# Loop through manually curated words of interest
for word in freq_seen:
# Check each significant GOEA result for the word of interest
for rec in goea_results_sig:
if word in rec.name:
word2siggos[word].add(rec.GO)
# Sort word2gos to have the same order as words in freq_seen
word2siggos = cx.OrderedDict([(w, word2siggos[w]) for w in freq_seen])
Explanation: 5. For each word of interest, create a list of significant GOs whose name contains the word.
End of explanation
goid2goobj_all = {nt.GO:nt.goterm for nt in goea_results_all}
print(len(goid2goobj_all))
Explanation: 6. Plot GO terms seen for each word of interest.
6a. Create a convenient goid-to-goobject dictionary
End of explanation
# Plot set of GOs for each frequently seen word
from goatools.godag_plot import plot_goid2goobj
for word, gos in word2siggos.items():
goid2goobj = {go:goid2goobj_all[go] for go in gos}
plot_goid2goobj(
"nbt3102_word_{WORD}.png".format(WORD=word),
goid2goobj, # source GOs to plot and their GOTerm object
study_items=15, # Max number of gene symbols to print in each GO term
id2symbol=studygeneid2symbol, # Contains GeneID-to-Symbol from Step 1
goea_results=goea_results_all, # pvals used for GO Term coloring
dpi=150)
Explanation: 6b. Create plots formed by a shared word in the significant GO term's name
End of explanation
fout = "nbt3102_GO_word_genes.txt"
go2res = {nt.GO:nt for nt in goea_results_all}
with open(fout, "w") as prt:
    prt.write("""This file is generated by test_nbt3102.py and is intended to confirm
this statement in the GOATOOLS manuscript:
We observed:
93 genes associated with RNA,
47 genes associated with translation,
70 genes associated with mitochondrial or mitochondrian, and
37 genes associated with ribosomal, as reported by GOATOOLS.
""")
for word, gos in word2siggos.items():
# Sort first by BP, MF, CC. Sort second by GO id.
gos = sorted(gos, key=lambda go: [go2res[go].NS, go])
genes = set()
for go in gos:
genes |= go2res[go].study_items
genes = sorted([studygeneid2symbol[g] for g in genes])
prt.write("\n{WD}: {N} study genes, {M} GOs\n".format(WD=word, N=len(genes), M=len(gos)))
prt.write("{WD} GOs: {GOs}\n".format(WD=word, GOs=", ".join(gos)))
for i, go in enumerate(gos):
res = go2res[go]
prt.write("{I}) {NS} {GO} {NAME} ({N} genes)\n".format(
I=i, NS=res.NS, GO=go, NAME=res.name, N=res.study_count))
prt.write("{N} study genes:\n".format(N=len(genes)))
N = 10 # Number of genes per line
mult = [genes[i:i+N] for i in range(0, len(genes), N)]
prt.write(" {}\n".format("\n ".join([", ".join(str(g) for g in sl) for sl in mult])))
print(" WROTE: {F}\n".format(F=fout))
Explanation: 6c. Example plot for "apoptotic"
Colors:
Please note that to have colors related to GOEA significance, you must provide the GOEA results, as shown here with the "goea_results=goea_results_all" argument.
1. Levels of Statistical Significance:
1. light red => extremely significant fdr_bh values (p<0.005)
2. orange => very significant fdr_bh values (p<0.01)
2. yellow => significant fdr_bh values (p<0.05)
3. grey => study terms which are not statistically significant (p>0.05)
2. High-level GO terms:
1. Cyan => Level-01 GO terms
Please note that the variable, goea_results_all, contains gene ids and fdr_bh alpha values for all study GO terms, significant or not. If the argument had only included the significant results, "goea_results=goea_results_sig", the currently colored grey GO terms would be white and would not have study genes annotated inside.
Gene Symbol Names
Please notice that the study gene symbol names are written in thier associated GO term box. Symbol names and not gene count nor gene ids are used because of the argument, "id2symbol=studygeneid2symbol", to the function, "plot_goid2goobj".
7. Print a report with full details
7a. Create detailed report
End of explanation |
10,512 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a Pandas DataFrame that looks something like: | Problem:
import pandas as pd
df = pd.DataFrame({'col1': {0: 'a', 1: 'b', 2: 'c'},
'col2': {0: 1, 1: 3, 2: 5},
'col3': {0: 2, 1: 4, 2: 6},
'col4': {0: 3, 1: 6, 2: 2},
'col5': {0: 7, 1: 2, 2: 3},
'col6': {0: 2, 1: 9, 2: 5},
})
df.columns = [list('AAAAAA'), list('BBCCDD'), list('EFGHIJ')]
def g(df):
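    # melt() flattens the 3-level column MultiIndex into variable_0..variable_2
    # columns plus 'value'; the loop below then reverses the values of those
    # variable columns row by row.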
result = pd.melt(df, value_vars=df.columns.tolist())
cols = result.columns[:-1]
for idx in result.index:
t = result.loc[idx, cols]
for i in range(len(cols)):
result.loc[idx, cols[i]] = t[cols[-i-1]]
return result
result = g(df.copy()) |
10,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
10,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ICPW annual time series
At the Task Force meeting in May 2017, it was decided that the TOC trends analysis should include rolling regressions based on the annually aggregated data (rather than calculating a single set of statistics for a small number of pre-specified time periods). Some code outlining an approach for this in Python can be found here, and John has also created a version using R.
Heleen has asked me to provide John with annual time series for each site for the period from 1990 to 2012 (see e-mail received from Heleen on 12/05/2017 at 10.01 and also the e-mail chain involving John, Don and Heleen between 19th and 20th May 2017).
As a first step, I've modified my previous trends code so that, during the processing, the annual time series are saved as a series of CSVs. The additional lines of code can be found on lines 567 to 574 of toc_trend_analysis.py.
The output CSVs require a small amount of manual cleaning
Step1: The final step is to join in the site metadata used in the previous analysis.
Step2: Heleen previously defined various criteria for whether a series should be included in the analysis or not. In order to keep things consistent, it's probably a good idea to include this information here.
Step3: Finally, it's worth checking that this output matches the time series available on the ICPW website and in the plots of the climate data. | Python Code:
# Data paths
clim_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\CRU_Climate_Data\cru_climate_summaries.xlsx')
stn_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\CRU_Climate_Data\cru_stn_elevs.csv')
chem_fold = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_chemistry_series')
# For performance, pre-load the climate output file
# (this saves having to read it within the inner loop below,
# which is very slow)
clim_dict = {}
for var in ['pre', 'tmp']:
for tm in ['ann', 'jja', 'jas']:
# Open the climate data
clim_df = pd.read_excel(clim_xls, sheetname='%s_%s' % (var, tm))
clim_dict[(var, tm)] = clim_df
# Read stn elev data
stn_df = pd.read_csv(stn_xls)
# Get list of sites
stn_list = stn_df['stn_id'].unique()
# List to store output
data_list = []
# Loop over stations
for stn in stn_list:
# Read chem data
chem_path = os.path.join(chem_fold, 'stn_%s.csv' % stn)
# Only process the 431 files with chem data
if os.path.exists(chem_path):
# Allow for manually edited stations (see above)
# which now have ';' as the delimiter
if stn in [23467, 36561]:
chem_df = pd.read_csv(chem_path, sep=';')
else:
chem_df = pd.read_csv(chem_path)
chem_df.index = chem_df['YEAR']
# Process climate data
# Dict to store output
data_dict = {}
# Loop over data
for var in ['pre', 'tmp']:
for tm in ['ann', 'jja', 'jas']:
# Get the climate data
clim_df = clim_dict[(var, tm)]
# Filter the climate data for this station
stn_clim_df = clim_df.query('stn_id == @stn')
# Set index
stn_clim_df.index = stn_clim_df['year']
stn_clim_df = stn_clim_df.sort_index()
# Correct temperatures according to lapse rate
if var == 'tmp':
# Get elevations
stn_elev = stn_df.query('stn_id == @stn')['elev_m'].values[0]
px_elev = stn_df.query('stn_id == @stn')['px_elev_m'].values[0]
# If pixel elev is negative (i.e. in sea), correct back to s.l.
if px_elev < 0:
px_elev = 0
# Calculate temperature difference based on 0.6C/100m
t_diff = 0.6 * (px_elev - stn_elev) / 100.
# Apply correction
stn_clim_df['tmp'] = stn_clim_df['tmp'] + t_diff
# Truncate
stn_clim_df = stn_clim_df.query('(year>=1990) & (year<=2012)')
# Add to dict
key = '%s_%s' % (var, tm)
val = stn_clim_df[var]
data_dict[key] = val
# Build output df
stn_clim_df = pd.DataFrame(data_dict)
# Join chem and clim data
df = pd.merge(stn_clim_df, chem_df, how='outer',
left_index=True, right_index=True)
# Get desired columns
# Modified 06/06/2017 to include all pars for Leah
df = df[['pre_ann', 'pre_jas', 'pre_jja', 'tmp_ann', 'tmp_jas',
'tmp_jja', 'Al', 'TOC', 'EH', 'ESO4', 'ECl', 'ESO4_ECl',
'ENO3', 'ESO4X', 'ESO4_ECl', 'ECa_EMg', 'ECaX_EMgX',
'ANC']]
# Transpose
df = df.T
# Add station ID
df.reset_index(inplace=True)
df['station_id'] = stn
# Rename cols
df.columns.name = ''
cols = list(df.columns)
cols[0] = 'var'
df.columns = cols
data_list.append(df)
# Combine results for each site
ann_df = pd.concat(data_list, axis=0)
ann_df.head()
Explanation: ICPW annual time series
At the Task Force meeting in May 2017, it was decided that the TOC trends analysis should include rolling regressions based on the annually aggregated data (rather than calculating a single set of statistics for a small number of pre-specified time periods). Some code outlining an approach for this in Python can be found here, and John has also created a version using R.
Heleen has asked me to provide John with annual time series for each site for the period from 1990 to 2012 (see e-mail received from Heleen on 12/05/2017 at 10.01 and also the e-mail chain involving John, Don and Heleen between 19th and 20th May 2017).
As a first step, I've modified my previous trends code so that, during the processing, the annual time series are saved as a series of CSVs. The additional lines of code can be found on lines 567 to 574 of toc_trend_analysis.py.
The output CSVs require a small amount of manual cleaning:
Delete the TOC series for station ID 23467 and
Delete the SO4 series for station ID 36561.
(See sections 1.2 and 1.3 of this notebook for justification).
Annual time series for climate based on the updated climate data have already been created and are saved here:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\CRU_Climate_Data\cru_climate_summaries.xlsx
The temperature series also need correcting based on site elevation, as was done in this notebook and the two (climate and water chemistry) datasets then need restructuring and joining into a single output file for John to work with.
End of explanation
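As a quick illustration of the lapse-rate correction applied in the loop above (the elevations below are hypothetical, not taken from the station file):
# Illustrative sketch only: a station at 100 m whose CRU pixel sits at 300 m
stn_elev, px_elev = 100.0, 300.0
t_diff = 0.6 * (px_elev - stn_elev) / 100.  # = +1.2 degC added to the (colder) pixel temperature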
# Read site data from previous output
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\toc_trends_long_format_update1.xlsx')
props_df = pd.read_excel(in_xls, sheetname='toc_trends_long_format_update1',
keep_default_na=False) # Otherwise 'NA' for North America becomes NaN
# Get just cols of interest
props_df = props_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon']]
# Drop duplicates
props_df.drop_duplicates(inplace=True)
# Join
ann_df = pd.merge(ann_df, props_df, how='left',
on='station_id')
# Reorder cols
ann_df = ann_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon', 'var']+range(1990, 2013)]
ann_df.head()
Explanation: The final step is to join in the site metadata used in the previous analysis.
End of explanation
# Read site data from previous output
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\toc_trends_long_format_update1.xlsx')
inc_df = pd.read_excel(in_xls, sheetname='toc_trends_long_format_update1')
# Get just cols of interest
inc_df = inc_df[['station_id', 'par_id', 'analysis_period', 'include']]
# Filter to just results for 1990-2012
inc_df = inc_df.query('analysis_period == "1990-2012"')
# Join
ann_df = pd.merge(ann_df, inc_df, how='left',
left_on=['station_id', 'var'],
right_on=['station_id', 'par_id'])
# Reorder cols
ann_df = ann_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon', 'var', 'include']+range(1990, 2013)]
# The climate vars all have data and can be included
ann_df['include'].fillna(value='yes', inplace=True)
# Write output
out_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_clim_chem_series_leah.csv')
ann_df.to_csv(out_path, encoding='utf-8')
ann_df.head()
Explanation: Heleen previously defined various criteria for whether a series should be included in the analysis or not. In order to keep things consistent, it's probably a good idea to include this information here.
End of explanation
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_clim_chem_series.xlsx')
df = pd.read_excel(in_xls, sheetname='annual_clim_chem_series')
# Select series
stn_id = 23455
var = 'tmp_jja'
df2 = df.query("(station_id==@stn_id) & (var==@var)")
df2 = df2[range(1990, 2013)].T
df2.columns = [var]
df2.plot(ls='-', marker='o')
Explanation: Finally, it's worth checking that this output matches the time series available on the ICPW website and in the plots of the climate data.
End of explanation |
10,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Longest Common Subsequence
To motivate dynamic time warping, let's look at a classic dynamic programming problem
Step1: Test
Step2: The time complexity of the above recursive method is $O(2^{n_x+n_y})$. That is slow because we might compute the solution to the same subproblem multiple times.
Memoization
We can do better through memoization, i.e. storing solutions to previous subproblems in a table.
Here, we create a table where cell (i, j) stores the length lcs(x[:i], y[:j])
Step3: Let's visualize this table
Step4: Finally, we backtrack, i.e. read the table from the bottom right to the top left | Python Code:
def lcs(x, y):
if x == "" or y == "":
return ""
if x[0] == y[0]:
return x[0] + lcs(x[1:], y[1:])
else:
z1 = lcs(x[1:], y)
z2 = lcs(x, y[1:])
return z1 if len(z1) > len(z2) else z2
Explanation: ← Back to Index
Longest Common Subsequence
To motivate dynamic time warping, let's look at a classic dynamic programming problem: find the longest common subsequence (LCS) of two strings (Wikipedia). A subsequence is not required to maintain consecutive positions in the original strings, but they must retain their order. Examples:
lcs('cake', 'baker') -> 'ake'
lcs('cake', 'cape') -> 'cae'
We can solve this using recursion. We must find the optimal substructure, i.e. decompose the problem into simpler subproblems.
Let x and y be two strings. Let z be the true LCS of x and y.
If the first characters of x and y were the same, then that character must also be the first character of the LCS, z. In other words, if x[0] == y[0], then z[0] must equal x[0] (which equals y[0]). Therefore, append x[0] to lcs(x[1:], y[1:]).
If the first characters of x and y differ, then solve for both lcs(x, y[1:]) and lcs(x[1:], y), and keep the result which is longer.
Here is the recursive solution:
End of explanation
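Stated as a recurrence for the length of the LCS (this is just a restatement of the reasoning above, not a new rule): $\mathrm{lcs}(x, y) = 0$ if $x$ or $y$ is empty; $\mathrm{lcs}(x, y) = 1 + \mathrm{lcs}(x[1:], y[1:])$ if $x[0] = y[0]$; otherwise $\mathrm{lcs}(x, y) = \max(\mathrm{lcs}(x[1:], y), \mathrm{lcs}(x, y[1:]))$.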
pairs = [
('cake', 'baker'),
('cake', 'cape'),
('catcga', 'gtaccgtca'),
('zxzxzxmnxzmnxmznmzxnzm', 'nmnzxmxzmnzmx'),
('dfkjdjkfdjkjfdkfdkfjd', 'dkfjdjkfjdkjfkdjfkjdkfjdkfj'),
]
for x, y in pairs:
print(lcs(x, y))
Explanation: Test:
End of explanation
def lcs_table(x, y):
nx = len(x)
ny = len(y)
# Initialize a table.
table = [[0 for _ in range(ny+1)] for _ in range(nx+1)]
# Fill the table.
for i in range(1, nx+1):
for j in range(1, ny+1):
if x[i-1] == y[j-1]:
table[i][j] = 1 + table[i-1][j-1]
else:
table[i][j] = max(table[i-1][j], table[i][j-1])
return table
Explanation: The time complexity of the above recursive method is $O(2^{n_x+n_y})$. That is slow because we might compute the solution to the same subproblem multiple times.
Memoization
We can do better through memoization, i.e. storing solutions to previous subproblems in a table.
Here, we create a table where cell (i, j) stores the length lcs(x[:i], y[:j]). When either i or j is equal to zero, i.e. an empty string, we already know that the LCS is the empty string. Therefore, we can initialize the table to be equal to zero in all cells. Then we populate the table from the top left to the bottom right.
End of explanation
x = 'cake'
y = 'baker'
table = lcs_table(x, y)
table
xa = ' ' + x
ya = ' ' + y
print(' '.join(ya))
for i, row in enumerate(table):
print(xa[i], ' '.join(str(z) for z in row))
Explanation: Let's visualize this table:
End of explanation
def lcs(x, y, table, i=None, j=None):
if i is None:
i = len(x)
if j is None:
j = len(y)
if table[i][j] == 0:
return ""
elif x[i-1] == y[j-1]:
return lcs(x, y, table, i-1, j-1) + x[i-1]
elif table[i][j-1] > table[i-1][j]:
return lcs(x, y, table, i, j-1)
else:
return lcs(x, y, table, i-1, j)
for x, y in pairs:
table = lcs_table(x, y)
print(lcs(x, y, table))
Explanation: Finally, we backtrack, i.e. read the table from the bottom right to the top left:
End of explanation |
10,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Hamiltonian
Step2: Dispersive cQED Model Implementation
Step3: The results obtained from the physical implementation agree with the ideal result.
Step4: The gates are first transformed into the ISWAP basis, which is redundant in this example.
Step5: An RZ gate, followed by a Globalphase, is applied to all ISWAP and SQRTISWAP gates to normalize the propagator matrix.
Arg_value for the ISWAP case is pi/2, while for the SQRTISWAP case, it is pi/4.
Step6: The time for each applied gate
Step7: The pulse can be plotted as
Step8: Software versions | Python Code:
%matplotlib inline
import numpy as np
from qutip import *
from qutip.qip.models.circuitprocessor import *
from qutip.qip.models.cqed import *
Explanation: QuTiP example: Physical implementation of Cavity-Qubit model
Author: Anubhav Vardhan ([email protected])
For more information about QuTiP see http://qutip.org
End of explanation
N = 3
qc = QubitCircuit(N)
qc.add_gate("ISWAP", targets=[0,1])
qc.png
U_ideal = gate_sequence_product(qc.propagators())
U_ideal
Explanation: Hamiltonian:
hamiltonian
The cavity-qubit model using a resonator as a bus can be implemented using the DispersivecQED class.
Circuit Setup
End of explanation
p1 = DispersivecQED(N, correct_global_phase=True)
U_list = p1.run(qc)
U_physical = gate_sequence_product(U_list)
U_physical.tidyup(atol=1e-3)
(U_ideal - U_physical).norm()
Explanation: Dispersive cQED Model Implementation:
End of explanation
p1.qc0.gates
Explanation: The results obtained from the physical implementation agree with the ideal result.
End of explanation
p1.qc1.gates
Explanation: The gates are first transformed into the ISWAP basis, which is redundant in this example.
End of explanation
p1.qc2.gates
Explanation: An RZ gate, followed by a Globalphase, is applied to all ISWAP and SQRTISWAP gates to normalize the propagator matrix.
Arg_value for the ISWAP case is pi/2, while for the SQRTISWAP case, it is pi/4.
End of explanation
p1.T_list
Explanation: The time for each applied gate:
End of explanation
p1.plot_pulses();
Explanation: The pulse can be plotted as:
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Software versions:
End of explanation |
10,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
numpynet from sklearn mnist data set works
numpynet/classy from digits data set doesn't work
trying numpynet with digits data set here works
what is the numpynet/classy difference?
Step1: compare shapes | Python Code:
'''
Little example on how to use the Network class to create a model and perform
a basic classification of the MNIST dataset
'''
#from NumPyNet.layers.input_layer import Input_layer
from NumPyNet.layers.connected_layer import Connected_layer
from NumPyNet.layers.convolutional_layer import Convolutional_layer
from NumPyNet.layers.maxpool_layer import Maxpool_layer
from NumPyNet.layers.softmax_layer import Softmax_layer
# from NumPyNet.layers.dropout_layer import Dropout_layer
# from NumPyNet.layers.cost_layer import Cost_layer
# from NumPyNet.layers.cost_layer import cost_type
from NumPyNet.layers.batchnorm_layer import BatchNorm_layer
from NumPyNet.network import Network
from NumPyNet.optimizer import Adam
# from NumPyNet.optimizer import Adam, SGD, Momentum
from NumPyNet.utils import to_categorical
from NumPyNet.utils import from_categorical
from NumPyNet.metrics import mean_accuracy_score
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
__author__ = ['Mattia Ceccarelli', 'Nico Curti']
__email__ = ['[email protected]', '[email protected]']
def accuracy (y_true, y_pred):
'''
Temporary metrics to overcome "from_categorical" missing in standard metrics
'''
truth = from_categorical(y_true)
predicted = from_categorical(y_pred)
return mean_accuracy_score(truth, predicted)
import classy as cl
digits = datasets.load_digits()
X, y = digits.images, digits.target
X.shape
imshow(X[100,:,:])
y
images=cl.image.load_images('data/digits')
X=np.dstack(images.data)
X=X.transpose([2,0,1])
X.shape
n=500
imshow(X[n,:,:])
title(images.target_names[images.targets[n]])
Explanation: numpynet from sklearn mnist data set works
numpynet/classy from digits data set doesn't work
trying numpynet with digits data set here works
what is the numpynet/classy difference?
End of explanation
digits = datasets.load_digits()
X, y = digits.images, digits.target
print(X.shape,y.shape)
X = np.asarray([np.dstack((x, x, x)) for x in X])
y=to_categorical(y).reshape(len(y), 1, 1, -1)
print(X.shape,y.shape)
images=cl.image.load_images('data/digits')
X=np.dstack(images.data)
X=X.transpose([2,0,1]).astype(np.float)
y=images.targets
print(X.shape,y.shape)
X = np.asarray([np.dstack((x, x, x)) for x in X])
y=to_categorical(y).reshape(len(y), 1, 1, -1)
print(X.shape,y.shape)
def Xy_image(images,shuffle=True):
import numpy as np
X=np.dstack(images.data)
X=X.transpose([2,0,1]).astype(np.float)
y=images.targets
X = np.asarray([np.dstack((x, x, x)) for x in X])
if shuffle:
idx=np.array(range(len(y)))
np.random.shuffle(idx)
y=y[idx]
X=X[idx,...]
return X,y
np.random.seed(124)
images=cl.image.load_images('data/digits')
images_train,images_test=cl.image.split(images,verbose=False)
cl.summary(images_train)
cl.summary(images_test)
X,y=Xy_image(images)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=.2,
random_state=42)
X_train,y_train=Xy_image(images_train)
X_test,y_test=Xy_image(images_test)
batch = 128
num_classes = len(set(y))
# del X, y
# normalization to [0, 1]
X_train *= 1. / 255.
X_test *= 1. / 255.
# reduce the size of the data set for testing
############################################
# train_size = 512
# test_size = 300
# X_train = X_train[:train_size, ...]
# y_train = y_train[:train_size]
# X_test = X_test[ :test_size, ...]
# y_test = y_test[ :test_size]
############################################
n_train = X_train.shape[0]
n_test = X_test.shape[0]
# transform y to array of dimension 10 and in 4 dimension
y_train = to_categorical(y_train).reshape(n_train, 1, 1, -1)
y_test = to_categorical(y_test).reshape(n_test, 1, 1, -1)
# Create the model and training
model = Network(batch=batch, input_shape=X_train.shape[1:])
model.add(Convolutional_layer(size=3, filters=32, stride=1, pad=True, activation='Relu'))
model.add(BatchNorm_layer())
model.add(Maxpool_layer(size=2, stride=1, padding=True))
model.add(Connected_layer(outputs=100, activation='Relu'))
model.add(BatchNorm_layer())
model.add(Connected_layer(outputs=num_classes, activation='Linear'))
model.add(Softmax_layer(spatial=True, groups=1, temperature=1.))
# model.add(Cost_layer(cost_type=cost_type.mse))
# model.compile(optimizer=SGD(lr=0.01, decay=0., lr_min=0., lr_max=np.inf))
model.compile(optimizer=Adam(), metrics=[accuracy])
print('*************************************')
print('\n Total input dimension: {}'.format(X_train.shape), '\n')
print('**************MODEL SUMMARY***********')
model.summary()
print('\n***********START TRAINING***********\n')
# Fit the model on the training set
model.fit(X=X_train, y=y_train, max_iter=10, verbose=True)
print('\n***********START TESTING**************\n')
print(X_test.shape,y_test.shape)
# Test the prediction with timing
loss, out = model.evaluate(X=X_test, truth=y_test, verbose=True)
truth = from_categorical(y_test)
predicted = from_categorical(out)
accuracy_score = mean_accuracy_score(truth, predicted)
print('\nLoss Score: {:.3f}'.format(loss))
print('Accuracy Score: {:.3f}'.format(accuracy_score))
# SGD : best score I could obtain was 94% with 10 epochs, lr = 0.01 %
# Momentum : best score I could obtain was 93% with 10 epochs
# Adam : best score I could obtain was 95% with 10 epochs
L=model._net[1]
L.weights.shape
num_filters=L.weights.shape[-1]
w=L.weights
w=w-w.min()
w=w/w.max()
figure(figsize=(6,12))
for f in range(num_filters):
subplot(8,4,f+1)
im=w[:,:,:,f]
imshow(im)
axis('off')
X.max()
X.min()
X_train.min()
X_train.max()
digits = datasets.load_digits()
X, y = digits.images, digits.target
X.shape
imshow(X[0,:,:])
colorbar()
X[0,:,:]
Explanation: compare shapes
End of explanation |
10,518 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
x=[b'\xd8\xa8\xd9\x85\xd8\xb3\xd8\xa3\xd9\x84\xd8\xa9',
b'\xd8\xa5\xd9\x86\xd8\xb4\xd8\xa7\xd8\xa1',
b'\xd9\x82\xd8\xb6\xd8\xa7\xd8\xa1',
b'\xd8\xac\xd9\x86\xd8\xa7\xd8\xa6\xd9\x8a',
b'\xd8\xaf\xd9\x88\xd9\x84\xd9\x8a']
def g(x):
return [tf.compat.as_str_any(a) for a in x]
result = g(x.copy()) |
10,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
                    # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
grayscala_min = 0
grayscala_max = 255
return a + (image_data - grayscala_min) * (b - a) / (grayscala_max - grayscala_min)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(train_features, train_labels, test_size=0.05, random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
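As a quick numerical check of the formula with the values given above (a = 0.1, b = 0.9, grayscale min 0 and max 255); the endpoints match the assertions in the test cases:
# Sanity check only - not part of the lab code
a, b, x_min, x_max = 0.1, 0.9, 0.0, 255.0
for x in (0.0, 128.0, 255.0):
    print(a + (x - x_min) * (b - a) / (x_max - x_min))  # 0.1, ~0.5016, 0.9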
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
10,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of many applications with Notebook Launcher
This notebook is a quick(ish) test of most of the main applications people use, taken from fastbook, and run with Accelerate across multiple GPUs through the notebook_launcher
Step1: Important
Step2: Image Classification
Step3: Image Segmentation
Step4: Text Classification
Step5: Tabular
Step6: Collab Filtering
Step7: Keypoints | Python Code:
#|all_slow
#|all_multicuda
from fastai.vision.all import *
from fastai.text.all import *
from fastai.tabular.all import *
from fastai.collab import *
from accelerate import notebook_launcher
from fastai.distributed import *
Explanation: Examples of many applications with Notebook Launcher
This notebook is a quick(ish) test of most of the main applications people use, taken from fastbook, and run with Accelerate across multiple GPUs through the notebook_launcher
End of explanation
# from accelerate.utils import write_basic_config
# write_basic_config()
Explanation: Important: Before running, ensure that Accelerate has been configured through either accelerate config in the command line or by running write_basic_config
End of explanation
path = untar_data(URLs.PETS)/'images'
def train():
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2,
label_func=lambda x: x[0].isupper(), item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate).to_fp16()
with learn.distrib_ctx(in_notebook=True, sync_bn=False):
learn.fine_tune(1)
notebook_launcher(train, num_processes=2)
Explanation: Image Classification
End of explanation
path = untar_data(URLs.CAMVID_TINY)
def train():
dls = SegmentationDataLoaders.from_label_func(
path, bs=8, fnames = get_image_files(path/"images"),
label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
codes = np.loadtxt(path/'codes.txt', dtype=str)
)
learn = unet_learner(dls, resnet34)
with learn.distrib_ctx(in_notebook=True, sync_bn=False):
learn.fine_tune(8)
notebook_launcher(train, num_processes=2)
Explanation: Image Segmentation
End of explanation
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
def train():
imdb_clas = DataBlock(blocks=(TextBlock.from_df('text', seq_len=72), CategoryBlock),
get_x=ColReader('text'), get_y=ColReader('label'), splitter=ColSplitter())
dls = imdb_clas.dataloaders(df, bs=64)
learn = rank0_first(lambda: text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy))
with learn.distrib_ctx(in_notebook=True):
learn.fine_tune(4, 1e-2)
notebook_launcher(train, num_processes=2)
Explanation: Text Classification
End of explanation
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
def train():
dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
cat_names = ['workclass', 'education', 'marital-status', 'occupation',
'relationship', 'race'],
cont_names = ['age', 'fnlwgt', 'education-num'],
procs = [Categorify, FillMissing, Normalize])
learn = tabular_learner(dls, metrics=accuracy)
with learn.distrib_ctx(in_notebook=True):
learn.fit_one_cycle(3)
notebook_launcher(train, num_processes=2)
Explanation: Tabular
End of explanation
path = untar_data(URLs.ML_SAMPLE)
df = pd.read_csv(path/'ratings.csv')
def train():
dls = CollabDataLoaders.from_df(df)
learn = collab_learner(dls, y_range=(0.5,5.5))
with learn.distrib_ctx(in_notebook=True):
learn.fine_tune(6)
notebook_launcher(train, num_processes=2)
Explanation: Collab Filtering
End of explanation
path = untar_data(URLs.BIWI_HEAD_POSE)
def img2pose(x): return Path(f'{str(x)[:-7]}pose.txt')
def get_ctr(f):
ctr = np.genfromtxt(img2pose(f), skip_header=3)
c1 = ctr[0] * cal[0][0]/ctr[2] + cal[0][2]
c2 = ctr[1] * cal[1][1]/ctr[2] + cal[1][2]
return tensor([c1,c2])
img_files = get_image_files(path)
cal = np.genfromtxt(path/'01'/'rgb.cal', skip_footer=6)
def train():
biwi = DataBlock(
blocks=(ImageBlock, PointBlock),
get_items=get_image_files,
get_y=get_ctr,
splitter=FuncSplitter(lambda o: o.parent.name=='13'),
batch_tfms=[*aug_transforms(size=(240,320)),
Normalize.from_stats(*imagenet_stats)])
dls = biwi.dataloaders(path)
learn = vision_learner(dls, resnet18, y_range=(-1,1))
with learn.distrib_ctx(in_notebook=True, sync_bn=False):
learn.fine_tune(1)
notebook_launcher(train, num_processes=2)
Explanation: Keypoints
End of explanation |
10,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Write your Own Perceptron
In our examples, we have seen different algorithms and we could use scikit-learn functions to get the parameters. However, do you know how they are implemented? To understand that, we can start from the perceptron, which is also an important concept for future studies in deep learning.
The perceptron can be used for supervised learning. It can solve binary linear classification problems. A comprehensive description of the functionality of a perceptron is out of scope here. To follow this tutorial you already should know what a perceptron is and understand the basics of its functionality. Additionally a fundamental understanding of stochastic gradient descent is needed. To get in touch with the theoretical background, I advise the Wikipedia article
Step1: Stochastic Gradient Descent
We will implement the perceptron algorithm in python 3 and numpy. The perceptron will learn using the stochastic gradient descent algorithm (SGD). Gradient Descent minimizes a function by following the gradients of the cost function. For further details see
Step2: Next we fold a bias term -1 into the data set. This is needed for the SGD to work. Details see The Perceptron algorithm
Step3: This small toy data set contains two samples labeled with $-1$ and three samples labeled with $+1$. This means we have a binary classification problem, as the data set contains two sample classes. Let's plot the dataset to see that it is linearly separable
Step4: Your task | Python Code:
# import our packages
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Write your Own Perceptron
In our examples, we have seen different algorithms and we could use scikit-learn functions to get the parameters. However, do you know how they are implemented? To understand that, we can start from the perceptron, which is also an important concept for future studies in deep learning.
The perceptron can be used for supervised learning. It can solve binary linear classification problems. A comprehensive description of the functionality of a perceptron is out of scope here. To follow this tutorial you already should know what a perceptron is and understand the basics of its functionality. Additionally a fundamental understanding of stochastic gradient descent is needed. To get in touch with the theoretical background, I advise the Wikipedia article:
Wikipedia - Perceptron
Furthermore, I highly recommend the book by Schölkopf & Smola. Do not let the math scare you, as they explain the basics of machine learning in a really comprehensive way:
Schölkopf & Smola (2002). Learning with Kernels. Support Vector Machines, Regularization, Optimization, and Beyond.
To better understand the internal processes of a perceptron in practice, we will step by step develop a perceptron from scratch now.
End of explanation
X = np.array([
[-2, 4],
[4, 1],
[1, 6],
[2, 4],
[6, 2]
])
Explanation: Stochastic Gradient Descent
We will implement the perceptron algorithm in python 3 and numpy. The perceptron will learn using the stochastic gradient descent algorithm (SGD). Gradient Descent minimizes a function by following the gradients of the cost function. For further details see:
Wikipedia - stochastic gradient descent
Calculating the Error
To calculate the error of a prediction we first need to define the objective function of the perceptron.
Hinge Loss Function
To do this, we need to define the loss function, to calculate the prediction error. We will use hinge loss for our perceptron:
$$c(x, y, f(x)) = (1 - y * f(x))_+$$
$c$ is the loss function, $x$ the sample, $y$ is the true label, $f(x)$ the predicted label.
This means the following:
$$
c(x, y, f(x))=
\begin{cases}
0,& \text{if } y * f(x)\geq 1\\
1-y*f(x), & \text{else}
\end{cases}
$$
So consider, if y and f(x) are signed values $(+1,-1)$:
<ul>
<li>the loss is 0, if $y*f(x)$ are positive, respective both values have the same sign.</li>
<li>loss is $1-y*f(x)$ if $y*f(x)$ is negative</li>
</ul>
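For example, if $y = +1$ and $f(x) = 0.3$, then $y*f(x) = 0.3 < 1$ and the loss is $1 - 0.3 = 0.7$; if $y = +1$ and $f(x) = 2$, then $y*f(x) \geq 1$ and the loss is $0$.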
Objective Function
As we defined the loss function, we can now define the objective function for the perceptron:
$$l_i(w) = \big(-y_i \langle x_i,w \rangle\big)_+$$
We can write this with the dot product expanded as a sum over the $n$ feature components:
$$l_i(w) = \Big(-y_i \sum_{j=1}^{n} x_{ij}\, w_j\Big)_+$$
So the sample $x_i$ is misclassified if $y_i \langle x_i,w \rangle \leq 0$. The general goal is to find the global minimum of this function, i.e. a parameter $w$ for which the error is zero.
Derive the Objective Function
To do this we need the gradient of the objective function. The gradient of a function $f$ is the vector of its partial derivatives, so the update direction can be obtained from the partial derivative of the objective function.
$$ \nabla l_i(w) = -y_i x_i $$
This means, if we have a misclassified sample $x_i$, respectively $ y_i \langle x_i,w \rangle \leq 0 $, update the weight vector
$w$ by moving it in the direction of the misclassified sample.
$$w = w + y_i x_i$$
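For example, starting from $w = (0, 0, 0)$ with the sample $x_i = (4, 1, -1)$ and label $y_i = -1$ (which satisfies $y_i \langle x_i,w \rangle \leq 0$), the update gives $w = (0,0,0) + (-1)\cdot(4, 1, -1) = (-4, -1, 1)$, pushing the decision boundary away from that sample.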
With this update rule in mind, we can start writing our perceptron algorithm in python.
Our Data Set
First we need to define a labeled data set.
End of explanation
X = np.array([
[-2,4,-1],
[4,1,-1],
[1, 6, -1],
[2, 4, -1],
[6, 2, -1],
])
y = np.array([-1,-1,1,1,1])
Explanation: Next we fold a bias term -1 into the data set. This is needed for the SGD to work. Details see The Perceptron algorithm
End of explanation
for d, sample in enumerate(X):
# Plot the negative samples
if d < 2:
plt.scatter(sample[0], sample[1], s=120, marker='_', linewidths=2)
# Plot the positive samples
else:
plt.scatter(sample[0], sample[1], s=120, marker='+', linewidths=2)
# Plot a possible hyperplane that separates the two classes.
plt.plot([-2,6],[6,0.5])
Explanation: This small toy data set contains two samples labeled with $-1$ and three samples labeled with $+1$. This means we have a binary classification problem, as the data set contains two sample classes. Let's plot the dataset to see that it is linearly separable:
End of explanation
def perceptron_sgd(X, Y):
#line 1:
w = np.zeros(len(X[0]))
#line 2:
eta = 1
#line 3
epochs = 10
#### Please finish the algorithm here, following the line-by-line description
#line 4
#line 5
#line 6
#line 7
#############
return w
Explanation: Your task: Let's start implementing Stochastic Gradient Descent
Finally we can code our SGD algorithm using our update rule. To keep it simple, we will loop linearly over the sample set. For larger data sets it makes sense to randomly pick a sample during each iteration of the for-loop.
Code Description Line by Line
line <b>1</b>: Initialize the weight vector for the perceptron with zeros<br>
line <b>2</b>: Set the learning rate to 1<br>
line <b>3</b>: Set the number of epochs<br>
line <b>4</b>: Iterate n times over the whole data set.
line <b>5</b>: Iterate over each sample in the data set<br>
line <b>6</b>: Misclassification condition $y_i \langle x_i,w \rangle \leq 0$
line <b>7</b>: Update rule for the weights $w = w + y_i * x_i$ including the learning rate
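For reference, one possible completion of the loop is sketched below (an assumption about the intended solution, built directly from the update rule and the line-by-line description above):
def perceptron_sgd_solution(X, Y):
    w = np.zeros(len(X[0]))      # line 1: weight vector initialized with zeros
    eta = 1                      # line 2: learning rate
    epochs = 10                  # line 3: number of passes over the data set
    for t in range(epochs):                      # line 4: iterate over the whole data set
        for i, x in enumerate(X):                # line 5: iterate over each sample
            if (np.dot(X[i], w) * Y[i]) <= 0:    # line 6: misclassification condition
                w = w + eta * X[i] * Y[i]        # line 7: update rule with learning rate
    return w
# Example usage with the data defined above: w = perceptron_sgd_solution(X, y)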
End of explanation |
10,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Cytoscape.js in Jupyter Notebook
by Keiichiro Ono
Introduction
If you use Jupyter Notebook with cyREST, you can script your workflow. And in some cases, you may want to embed the result in the notebook. There are two ways to do it
Step1: Loading Sample Network
In this example, we use Cytoscape.js JSON files generated by Cytoscape. Input data for this widget should be Cytoscape.js JSON format. You can create those in Cytoscape 3.2.1 OR you can build them programmatically with Python.
Step2: Loading Visual Styles (Optional)
Although this widget includes some preset styles, we recommend using your own custom Visual Style. If you only use the Cytoscape.js widget, it is a bit hard to create complex styles manually, but you can create them interactively with Cytoscape 3.
Step3: Render the result
There are several options for visualization | Python Code:
# Package to render networks in Cytoscape.js
from py2cytoscape import cytoscapejs as cyjs
# And standard JSON utility
import json
Explanation: Using Cytoscape.js in Jupyter Notebook
by Keiichiro Ono
Introduction
If you use Jupyter Notebook with cyREST, you can script your workflow. And in some cases, you may want to embed the result in the notebook. There are two ways to do it:
Embed static PNG image from Cytoscape
Convert the result into Cytoscape.js JSON objects, and use it with Cytoscape.js widget
The second option is more appropriate for relatively large networks because users can zoom in/out to see the detail. In this section, you will learn how to use the interactive network visualization widget in py2cytoscape.
End of explanation
# Load network JSON file into this notebook session
yeast_network = json.load(open('sample_data/yeast.json')) # Simple yeast PPI network
kegg_pathway = json.load(open('sample_data/kegg_cancer.cyjs')) # KEGG Pathway
Explanation: Loading Sample Network
In this example, we use Cytoscape.js JSON files generated by Cytoscape. Input data for this widget should be Cytoscape.js JSON format. You can create those in Cytoscape 3.2.1 OR you can build them programmatically with Python.
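If you prefer to build a network programmatically instead of loading a file, a minimal hand-written example is sketched below (an assumption about the expected Cytoscape.js structure: an elements dict holding nodes and edges lists, each entry carrying a data dict):
toy_network = {
    "data": {"name": "toy network"},
    "elements": {
        "nodes": [
            {"data": {"id": "a", "name": "Node A"}},
            {"data": {"id": "b", "name": "Node B"}}
        ],
        "edges": [
            {"data": {"id": "ab", "source": "a", "target": "b"}}
        ]
    }
}
# Such a dict renders the same way as the loaded files, e.g. cyjs.render(toy_network)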
End of explanation
# And Visual Style file in Cytoscape.js format.
vs_collection = json.load(open('sample_data/kegg_style.json'))
# Create map from Title to Style
def add_to_map(key, value, target):
target[key] = value
styles = {}
# A plain loop is used here because map() is lazy in Python 3 and lambda tuple-unpacking is no longer valid
for vs in vs_collection:
    add_to_map(vs["title"], vs["style"], styles)
# Display available style names
print(json.dumps(list(styles.keys()), indent=4))
Explanation: Loading Visual Styles (Optional)
Although this widget includes some preset styles, we recommend using your own custom Visual Style. If you only use the Cytoscape.js widget, it is a bit hard to create complex styles manually, but you can create them interactively with Cytoscape 3.
End of explanation
cyjs.render(yeast_network)
# With custom Style, background color, and layout
cyjs.render(yeast_network, style=styles["default black"], background="black", layout_algorithm="circle")
# With CSS-style Background - with gradient
cyjs.render(yeast_network, style="default2", background="radial-gradient(#FFFFFF 15%, #EEEEEE 105%)", layout_algorithm="breadthfirst")
# And you can reproduce more complex styles created with Cytoscape 3!
cyjs.render(kegg_pathway, style=styles["KEGG Style"])
Explanation: Render the result
There are several options for visualization:
style - Name of the preset visual style OR Style as JSON object. Default is default
layout_algorithm - name of Cytoscape.js layout algorithm. Default is preset
background - Background color. Also accepts CSS!
Here is the simplest example: just pass Cytoscape.js object
End of explanation |
10,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Release of hammer-cli gem
Requirements
push access to https
Step1: Update the following notebook settings
Step2: Ensure the repo is up to date
Step3: Run tests locally
Step4: Update release related stuff
Step5: Manual step
Step6: Commit changes
Step7: Update translations
Step8: Tag new version
Step9: Prepare stable branch for major release
Step10: Build the gem
Step11: Bump the develop version for major release
Step12: PUSH the changes upstream If everything is correct
Step13: Now the new release is in upstream repo
Some manual steps follow to improve the UX
New release on GitHub
Copy the following changelog lines into the description in the form at the link below
The release title is the new version. | Python Code:
%cd ..
Explanation: Release of hammer-cli gem
Requirements
push access to https://github.com/theforeman/hammer-cli
push access to rubygems.org for hammer-cli
sudo yum install transifex-client python-slugify asciidoc
ensure that neither git push nor gem push requires interactive auth. If you can't use an API key or SSH key to authenticate, skip these steps and run them from the shell manually
to push translations you need an account on Transifex
Release process
Follow the steps with <Shift>+<Enter> or <Ctrl>+<Enter>,<Down>
If anything fails, fix it and re-run the step if applicable
Release settings
End of explanation
NEW_VERSION = '2.0.0'
LAST_VERSION = '0.19.0'
DEVELOP_VERSION = '2.1.0-develop'
NEXT_FUTURE_VERSION = '2.1.0'
MAJOR_RELEASE = True
STABLE_BRANCH = '2.0-stable'
GIT_REMOTE_UPSTREAM = 'origin'
WORK_BRANCH = 'master' if MAJOR_RELEASE else STABLE_BRANCH
Explanation: Update the following notebook settings
End of explanation
! git checkout {WORK_BRANCH}
! git fetch {GIT_REMOTE_UPSTREAM}
! git rebase {GIT_REMOTE_UPSTREAM}/{WORK_BRANCH}
Explanation: Ensure the repo is up to date
End of explanation
! bundle update
! bundle exec rake test
Explanation: Run tests locally
End of explanation
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{NEW_VERSION}"/' lib/hammer_cli/version.rb
# Parse git changelog
from IPython.display import Markdown as md
from subprocess import check_output
from shlex import split
import re
def format_log_entry(entry):
issues = re.findall(r'[^(]#([0-9]+)', entry)
entry = re.sub(r'([fF]ixes|[rR]efs)[^-]*-\s*(.*)', r'\2', entry)
entry = '* ' + entry.capitalize()
entry = re.sub(r'\(#([0-9]+)\)', r'([PR #\1](https://github.com/theforeman/hammer-cli/pull/\1))', entry)
for i in issues:
referenced_issues.append(i)
entry = entry + ', [#%s](http://projects.theforeman.org/issues/%s)' % (i, i)
return entry
def skip(entry):
if re.match(r'Merge pull', entry) or \
re.match(r'^i18n', entry) or \
re.match(r'^Bump to version', entry):
return True
else:
return False
referenced_issues = []
git_log_cmd = 'git log --pretty=format:"%%s" %s..HEAD' % LAST_VERSION
log = check_output(split(git_log_cmd)).decode('utf8').split('\n')
change_log = [format_log_entry(e) for e in log if not skip(e)]
md('\n'.join(change_log))
# Write release notes
from datetime import datetime
import fileinput
import sys
fh = fileinput.input('doc/release_notes.md', inplace=True)
for line in fh:
print(line.rstrip())
if re.match(r'========', line):
print('### %s (%s)' % (NEW_VERSION, datetime.today().strftime('%Y-%m-%d')))
for entry in change_log:
print(entry)
print('')
fh.close()
Explanation: Update release related stuff
End of explanation
! git add -u
! git status
! git diff --cached
Explanation: Manual step: Update deps in the gemspec if necessary
Check what is going to be committed
End of explanation
! git commit -m "Bump to {NEW_VERSION}"
Explanation: Commit changes
End of explanation
if MAJOR_RELEASE:
! make -C locale/ tx-update
Explanation: Update translations
End of explanation
! git tag {NEW_VERSION}
Explanation: Tag new version
End of explanation
if MAJOR_RELEASE:
! git checkout -b {STABLE_BRANCH}
! git push {GIT_REMOTE_UPSTREAM} {STABLE_BRANCH}
! git checkout {WORK_BRANCH}
Explanation: Prepare stable branch for major release
End of explanation
! rake build
! gem push pkg/hammer_cli-{NEW_VERSION}.gem
Explanation: Build the gem
End of explanation
if MAJOR_RELEASE:
! sed -i 's/Gem::Version.new .*/Gem::Version.new "{DEVELOP_VERSION}"/' lib/hammer_cli/version.rb
if MAJOR_RELEASE:
! git add -u
! git status
if MAJOR_RELEASE:
! git diff --cached
if MAJOR_RELEASE:
! git commit -m "Bump to {DEVELOP_VERSION}"
Explanation: Bump the develop version for major release
End of explanation
! git push {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
! git push --tags {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}
Explanation: PUSH the changes upstream If everything is correct
End of explanation
print('\n')
print('\n'.join(change_log))
print('\n\nhttps://github.com/theforeman/hammer-cli/releases/new?tag=%s' % NEW_VERSION)
from IPython.display import Markdown as md
md('### Create new hammer-cli release in Redmine \n' + \
'<a href="https://projects.theforeman.org/projects/hammer-cli/versions/new" target="_blank">https://projects.theforeman.org/projects/hammer-cli/versions/new</a>\n\n' + \
'Set name to hammer-cli-%s' % (NEXT_FUTURE_VERSION if MAJOR_RELEASE else NEW_VERSION))
if not MAJOR_RELEASE:
print('Set fixed in versions to %s in following issues:' % NEW_VERSION)
for i in referenced_issues:
print('- https://projects.theforeman.org/issues/%s' % i)
Explanation: Now the new release is in upstream repo
Some manual steps follow to improve the UX
New release on GitHub
Copy the following changelog lines into the description in the form at the link below
The release title is the new version.
End of explanation |
10,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Playback Using the Notebook Audio Widget
This interface is used often when developing algorithms that involve processing signal samples that result in audible sounds. You will see this in the tutorial. Processing is done before hand as an analysis task, then the samples are written to a .wav file for playback using the PC audio system.
Step1: Below I import the .wav file so I can work with the signal samples
Step2: Here I visualize the C-major chord using the spectrogram to see the chord as being composed of the fundamental or root plus the third and fifth, along with their overtones.
Step4: Using Pyaudio | Python Code:
Audio('c_major.wav')
Explanation: Playback Using the Notebook Audio Widget
This interface is used often when developing algorithms that involve processing signal samples that result in audible sounds. You will see this in the tutorial. Processing is done before hand as an analysis task, then the samples are written to a .wav file for playback using the PC audio system.
End of explanation
fs,x = sigsys.from_wav('c_major.wav')
Explanation: Below I import the .wav file so I can work with the signal samples:
End of explanation
specgram(x,NFFT=2**13,Fs=fs);
ylim([0,1000])
title(r'Visualize the 3 Pitches of a C-Major Chord')
xlabel(r'Time (s)')
ylabel(r'Frequency (Hz)');
Explanation: Here I visualize the C-major chord using the spectrogram to see the chord as being composed of the fundamental or root plus the third and fifth, along with their overtones.
End of explanation
import pyaudio
import wave
import time
import sys
# PyAudio Example: Play a wave file (callback version)
wf = wave.open('Music_Test.wav', 'rb')
#wf = wave.open('c_major.wav', 'rb')
print('Sample width in bits: %d' % (8*wf.getsampwidth(),))
print('Number of channels: %d' % wf.getnchannels())
print('Sampling rate: %1.1f sps' % wf.getframerate())
p = pyaudio.PyAudio()
def callback(in_data, frame_count, time_info, status):
data = wf.readframes(frame_count)
#In general do some processing before returning data
#Here the data is in signed integer format
#In Python it is more comfortable to work with float (float64)
return (data, pyaudio.paContinue)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output=True,
stream_callback=callback)
stream.start_stream()
while stream.is_active():
time.sleep(0.1)
stream.stop_stream()
stream.close()
wf.close()
p.terminate()
Explanation: Using Pyaudio: Callback with a wav File Source
With PyAudio you set up a real-time interface between the audio source, a processing algorithm in Python, and a playback device. In the test case below the wave file is opened and then played back frame-by-frame using a callback function. In this case the signal samples read from the file, or perhaps a buffer, are passed directly to the audio interface. In general, processing algorithms may be implemented that operate on each frame. We will explore this in the tutorial.
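As a sketch of such per-frame processing (assuming 16-bit PCM samples and reusing the wf and pyaudio objects from the cell above before they are closed; NumPy is used only for the sample conversion), a callback that applies a simple gain before playback might look like:
import numpy as np
def callback_with_gain(in_data, frame_count, time_info, status):
    data = wf.readframes(frame_count)
    x = np.frombuffer(data, dtype=np.int16).astype(np.float64)   # int16 -> float64 samples
    y = 0.5*x                                                    # example processing: ~6 dB attenuation
    out = np.clip(y, -32768, 32767).astype(np.int16).tobytes()   # back to an int16 byte string
    return (out, pyaudio.paContinue)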
End of explanation |
10,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
One of the comments by our manuscript reviewers was on our claim of the 2009 H1N1 and 2013 H7N9 viruses. In order to substantiate our claim of recapitulating their lineages, I will draw their subtypic lineage traces.
Step4: 2009 H1N1 lineage trace
We will first begin with a lineage trace for the 2009 pH1n1 strains. We will go one degree up, and figure out what subtypes are represented there.
Step5: 2013 H7N9 lineage trace | Python Code:
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import json
from collections import defaultdict
from datetime import datetime, date
from random import randint
from networkx.readwrite.json_graph import node_link_data
%matplotlib inline
G = nx.read_gpickle('20150902_all_ird Final Graph.pkl')
G.nodes(data=True)[0]
Explanation: Introduction
One of the comments by our manuscript reviewers was on our claim of the 2009 H1N1 and 2013 H7N9 viruses. In order to substantiate our claim of recapitulating their lineages, I will draw their subtypic lineage traces.
End of explanation
pH1N1s = [n for n, d in G.nodes(data=True) \
if d['reassortant'] \
and d['subtype'] == 'H1N1' \
and d['collection_date'].year >= 2009 \
and d['host_species'] in ['Human', 'Swine'] \
and len(G.predecessors(n)) > 0]
len(pH1N1s)
pH1N1s[0:5]
def get_predecessors(nodes, num_degrees):
"""Gets the predecessors of the nodes, up to num_degrees specified."""
assert isinstance(num_degrees, int), "num_degrees must be an integer."
ancestors = defaultdict(list) # a dictionary of number of degrees up and a list of nodes.
degree = 0
while degree <= num_degrees:
degree += 1
if degree == 1:
for n in nodes:
ancestors[degree].extend(G.predecessors(n))
else:
for n in ancestors[degree - 1]:
ancestors[degree].extend(G.predecessors(n))
return ancestors
ancestors = get_predecessors(pH1N1s, 3)
ancestors_subtypes = defaultdict(set)
for deg, parents in ancestors.items():
for parent in parents:
ancestors_subtypes[deg].add(G.node[parent]['subtype'])
ancestors_subtypes
def collate_nodes_of_interest(nodes, ancestors_dict):
"""Given a starting list of nodes and a dictionary of its ancestors and their degrees of separation
from the starting list of nodes, return a subgraph comprising those nodes."""
nodes_of_interest = []
nodes_of_interest.extend(nodes)
for k in ancestors_dict.keys():
nodes_of_interest.extend(ancestors[k])
G_sub = G.subgraph(nodes_of_interest)
return G_sub
G_sub = collate_nodes_of_interest(pH1N1s, ancestors,)
def serialize_and_write_to_disk(graph, handle):
"""Correctly serializes the datetime objects in the graph's nodes.
Then, writes the graph to disk."""
# Serialize timestamp for JSON compatibility
date_handler = lambda obj: (
obj.isoformat()
if isinstance(obj, datetime)
or isinstance(obj, date)
else None
)
for n, d in graph.nodes(data=True):
graph.node[n]['collection_date'] = date_handler(graph.node[n]['collection_date'])
# Serialize the data to disk as a JSON file
data = node_link_data(graph)
s = json.dumps(data)
with open(handle, 'w+') as f:
f.write(s)
serialize_and_write_to_disk(G_sub, 'supp_data/viz/H1N1_graph.json')
Explanation: 2009 H1N1 lineage trace
We will first begin with a lineage trace for the 2009 pH1N1 strains. We will go one degree up and figure out what subtypes are represented there.
End of explanation
h7n9s = [n for n, d in G.nodes(data=True) \
if d['subtype'] == 'H7N9' \
and d['host_species'] == 'Human' \
and d['collection_date'].year == 2013]
ancestors = get_predecessors(h7n9s, 3)
G_sub = collate_nodes_of_interest(h7n9s, ancestors,)
serialize_and_write_to_disk(G_sub, 'supp_data/viz/H7N9_graph.json')
# Visualize the data
# First, start the HTTP server
! python -m http.server 8002
# Next, load "localhost:80000/supp_data/viz/h1n1.html"
Explanation: 2013 H7N9 lineage trace
End of explanation |
10,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The python-awips package provides access to the entire AWIPS Maps Database for use in Python GIS applications. Map objects are returned as <a href="http
Step1: Request County Boundaries for a WFO
Use request.setParameters() to define fields to be returned by the request.
Step2: Create a merged CWA with cascaded_union
Step3: WFO boundary spatial filter for interstates
Using the previously-defined envelope=merged_counties.buffer(2) in newDataRequest() to request geometries which fall inside the buffered boundary.
Step4: Nearby cities
Request the city table and filter by population and progressive disclosure level
Step5: Lakes
Step6: Major Rivers
Step7: Topography
Spatial envelopes are required for topo requests, which can become slow to download and render for large (CONUS) maps. | Python Code:
from __future__ import print_function
from awips.dataaccess import DataAccessLayer
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import numpy as np
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.feature import ShapelyFeature,NaturalEarthFeature
from shapely.geometry import Polygon
from shapely.ops import cascaded_union
# Standard map plot
def make_map(bbox, projection=ccrs.PlateCarree()):
fig, ax = plt.subplots(figsize=(12,12),
subplot_kw=dict(projection=projection))
ax.set_extent(bbox)
ax.coastlines(resolution='50m')
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
# Server, Data Request Type, and Database Table
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest('maps')
request.addIdentifier('table', 'mapdata.county')
Explanation: The python-awips package provides access to the entire AWIPS Maps Database for use in Python GIS applications. Map objects are returned as <a href="http://toblerity.org/shapely/manual.html">Shapely</a> geometries (Polygon, Point, MultiLineString, etc.) and can be easily plotted by Matplotlib, Cartopy, MetPy, and other packages.
Each map database table has a geometry field called the_geom, which can be used to spatially select map resources for any column of type geometry.
Notes
This notebook requires: python-awips, numpy, matplotlib, cartopy, shapely
Use datatype maps and addIdentifier('table', <postgres maps schema>) to define the map table:
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest('maps')
request.addIdentifier('table', 'mapdata.county')
Use request.setLocationNames() and request.addIdentifier() to spatially filter a map resource. In the example below, WFO ID BOU (Boulder, Colorado) is used to query counties within the BOU county watch area (CWA):
request.addIdentifier('geomField', 'the_geom')
request.addIdentifier('inLocation', 'true')
request.addIdentifier('locationField', 'cwa')
request.setLocationNames('BOU')
request.addIdentifier('cwa', 'BOU')
See the <a href="http://unidata.github.io/awips2/python/maps-database/#mapdatacwa">Maps Database Reference Page</a> for available database tables, column names, and types.
Note the geometry definition of the_geom for each data type, which can be Point, MultiPolygon, or MultiLineString.
Setup
End of explanation
# Define a WFO ID for location
# tie this ID to the mapdata.county column "cwa" for filtering
request.setLocationNames('BOU')
request.addIdentifier('cwa', 'BOU')
# enable location filtering (inLocation)
# locationField is tied to the above cwa definition (BOU)
request.addIdentifier('geomField', 'the_geom')
request.addIdentifier('inLocation', 'true')
request.addIdentifier('locationField', 'cwa')
# This is essentially the same as "select count(*) from mapdata.cwa where cwa='BOU';" (=1)
# Get response and create dict of county geometries
response = DataAccessLayer.getGeometryData(request, [])
counties = np.array([])
for ob in response:
counties = np.append(counties,ob.getGeometry())
print("Using " + str(len(counties)) + " county MultiPolygons")
%matplotlib inline
# All WFO counties merged to a single Polygon
merged_counties = cascaded_union(counties)
envelope = merged_counties.buffer(2)
boundaries=[merged_counties]
# Get bounds of this merged Polygon to use as buffered map extent
bounds = merged_counties.bounds
bbox=[bounds[0]-1,bounds[2]+1,bounds[1]-1.5,bounds[3]+1.5]
fig, ax = make_map(bbox=bbox)
# Plot political/state boundaries handled by Cartopy
political_boundaries = NaturalEarthFeature(category='cultural',
name='admin_0_boundary_lines_land',
scale='50m', facecolor='none')
states = NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(political_boundaries, linestyle='-', edgecolor='black')
ax.add_feature(states, linestyle='-', edgecolor='black',linewidth=2)
# Plot CWA counties
for i, geom in enumerate(counties):
cbounds = Polygon(geom)
intersection = cbounds.intersection
geoms = (intersection(geom)
for geom in counties
if cbounds.intersects(geom))
shape_feature = ShapelyFeature(geoms,ccrs.PlateCarree(),
facecolor='none', linestyle="-",edgecolor='#86989B')
ax.add_feature(shape_feature)
Explanation: Request County Boundaries for a WFO
Use request.setParameters() to define fields to be returned by the request.
End of explanation
# Plot CWA envelope
for i, geom in enumerate(boundaries):
gbounds = Polygon(geom)
intersection = gbounds.intersection
geoms = (intersection(geom)
for geom in boundaries
if gbounds.intersects(geom))
shape_feature = ShapelyFeature(geoms,ccrs.PlateCarree(),
facecolor='none', linestyle="-",linewidth=3.,edgecolor='#cc5000')
ax.add_feature(shape_feature)
fig
Explanation: Create a merged CWA with cascaded_union
End of explanation
request = DataAccessLayer.newDataRequest('maps', envelope=envelope)
request.addIdentifier('table', 'mapdata.interstate')
request.addIdentifier('geomField', 'the_geom')
request.setParameters('name')
interstates = DataAccessLayer.getGeometryData(request, [])
print("Using " + str(len(interstates)) + " interstate MultiLineStrings")
# Plot interstates
for ob in interstates:
shape_feature = ShapelyFeature(ob.getGeometry(),ccrs.PlateCarree(),
facecolor='none', linestyle="-",edgecolor='orange')
ax.add_feature(shape_feature)
fig
Explanation: WFO boundary spatial filter for interstates
Using the previously-defined envelope=merged_counties.buffer(2) in newDataRequest() to request geometries which fall inside the buffered boundary.
End of explanation
request = DataAccessLayer.newDataRequest('maps', envelope=envelope)
request.addIdentifier('table', 'mapdata.city')
request.addIdentifier('geomField', 'the_geom')
request.setParameters('name','population','prog_disc')
cities = DataAccessLayer.getGeometryData(request, [])
print("Queried " + str(len(cities)) + " total cities")
citylist = []
cityname = []
# For BOU, progressive disclosure values above 50 and pop above 5000 looks good
for ob in cities:
if ob.getString(b"population"):
if ob.getString(b"prog_disc") > 50:
if int(ob.getString(b"population").decode('UTF-8')) > 5000:
citylist.append(ob.getGeometry())
cityname.append(ob.getString(b"name").decode('UTF-8'))
print("Plotting " + str(len(cityname)) + " cities")
# Plot city markers
ax.scatter([point.x for point in citylist],
[point.y for point in citylist],
transform=ccrs.Geodetic(),marker="+",facecolor='black')
# Plot city names
for i, txt in enumerate(cityname):
ax.annotate(txt, (citylist[i].x,citylist[i].y),
xytext=(3,3), textcoords="offset points")
fig
Explanation: Nearby cities
Request the city table and filter by population and progressive disclosure level:
Warning: the prog_disc field is not entirely understood and values appear to change significantly depending on WFO site.
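Since the field is poorly documented, one way to choose a threshold for a different WFO is to inspect the range of prog_disc values actually returned (a sketch reusing the cities response above, assuming every record carries a prog_disc value):
disc_vals = [int(ob.getString(b"prog_disc").decode('UTF-8')) for ob in cities]
print("prog_disc range: " + str(min(disc_vals)) + " - " + str(max(disc_vals)))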
End of explanation
request = DataAccessLayer.newDataRequest('maps', envelope=envelope)
request.addIdentifier('table', 'mapdata.lake')
request.addIdentifier('geomField', 'the_geom')
request.setParameters('name')
# Get lake geometries
response = DataAccessLayer.getGeometryData(request, [])
lakes = np.array([])
for ob in response:
lakes = np.append(lakes,ob.getGeometry())
print("Using " + str(len(lakes)) + " lake MultiPolygons")
# Plot lakes
for i, geom in enumerate(lakes):
cbounds = Polygon(geom)
intersection = cbounds.intersection
geoms = (intersection(geom)
for geom in lakes
if cbounds.intersects(geom))
shape_feature = ShapelyFeature(geoms,ccrs.PlateCarree(),
facecolor='blue', linestyle="-",edgecolor='#20B2AA')
ax.add_feature(shape_feature)
fig
Explanation: Lakes
End of explanation
request = DataAccessLayer.newDataRequest('maps', envelope=envelope)
request.addIdentifier('table', 'mapdata.majorrivers')
request.addIdentifier('geomField', 'the_geom')
request.setParameters('pname')
rivers = DataAccessLayer.getGeometryData(request, [])
print("Using " + str(len(rivers)) + " river MultiLineStrings")
# Plot rivers
for ob in rivers:
shape_feature = ShapelyFeature(ob.getGeometry(),ccrs.PlateCarree(),
facecolor='none', linestyle=":",edgecolor='#20B2AA')
ax.add_feature(shape_feature)
fig
Explanation: Major Rivers
End of explanation
import numpy.ma as ma
request = DataAccessLayer.newDataRequest()
request.setDatatype("topo")
request.addIdentifier("group", "/")
request.addIdentifier("dataset", "full")
request.setEnvelope(envelope)
gridData = DataAccessLayer.getGridData(request)
print(gridData)
print("Number of grid records: " + str(len(gridData)))
print("Sample grid data shape:\n" + str(gridData[0].getRawData().shape) + "\n")
print("Sample grid data:\n" + str(gridData[0].getRawData()) + "\n")
grid=gridData[0]
topo=ma.masked_invalid(grid.getRawData())
lons, lats = grid.getLatLonCoords()
print(topo.min()) # minimum elevation in our domain (meters)
print(topo.max()) # maximum elevation in our domain (meters)
# Plot topography
cs = ax.contourf(lons, lats, topo, 80, cmap=plt.get_cmap('terrain'),alpha=0.1)
cbar = fig.colorbar(cs, extend='both', shrink=0.5, orientation='horizontal')
cbar.set_label("topography height in meters")
fig
Explanation: Topography
Spatial envelopes are required for topo requests, which can become slow to download and render for large (CONUS) maps.
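If rendering is still slow for a large domain, one option (a sketch; the stride of 4 is arbitrary) is to contour a decimated copy of the grid:
stride = 4   # plot every 4th grid point in each direction
cs = ax.contourf(lons[::stride, ::stride], lats[::stride, ::stride], topo[::stride, ::stride],
                 80, cmap=plt.get_cmap('terrain'), alpha=0.1)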
End of explanation |
10,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-Layer Perceptron Inpainting with MNIST
In the MLP demo, we saw how to use the multi-layer VAMP (ML-VAMP) method for denoising with a prior based on a multi-layer perceptron. We illustrated the method on synthetic data generated from a random MLP model. Here we will repeat the experiment with the MNIST data. Specifically, we consider the problem of estimating an MNIST digit image $x$ from linear measurements $y$ of the form,
$$
y = Ax,
$$
where $A$ is a sub-sampling operation. The sub-sampling operation outputs a subset of the pixels in $x$ corresponding to some non-occuluded area. This problem of reconstructing an image $x$ with a portion of the image removed is called inpainting. Inpainting requires a prior on the image. In this demo, we will use an MLP generative model for that prior.
Importing the Package
We first import the vampyre and other packages as in the sparse linear inverse demo.
Step1: We will also need other packages including tensorflow.
Step2: Loading the MLP model of MNIST
There are several widely-used methods for learning generative MLP models for complex data. In this demo, we will simply load the model from the saved parameter file. The model was trained using a so-called variational auto-encoder method by Kingma and Welling. You can recreate the model yourself by running the program vae_train.py.
Step3: Create a Sub-Sampling Transform
In this demo, the measurements $y$ will be a sub-set of the pixels in the image $x$. Thus, the estimation problem is to recover $x$ from an occluded image. We set the vectors Ierase and Ikeep as list of pixels to erase and keep.
Step4: We obtain a set of test images, and erase the pixels
Step6: We will use the following function to plot the images
Step7: We plot a few examples of the original image and the occuluded image. The occlusion is shown as the gra
Step9: Represent the MLP model for ML-VAMP
We next model the MLP as a multi-layer network for ML-VAMP. One slight complication for the MNIST model is that in the VAE model, the final output stage is modeled as a logistic function which is difficult to capture in ML-VAMP (it does not have a simple analytic denoiser). So, we replace it with a probit output with the probit variance set to match the logistic variance.
Step10: We next create the network to represent the MLP model for ML-VAMP. This is identical to MLP demo where we create one estimator and message handler for each stage.
Step11: Reconstruct the MNIST data
To solve for the unknown pixels, we first create a solver.
Step12: We can print the summary of the model. Notice that the final layer, which corresponds to the measurements, has 504 pixels which are the observed pixels.
Step13: Now, we run the solver. This should take just a few seconds.
Step14: Extract the final reconstruction. We can do this by extracting the estimate for the second last stage zhat[-2] and then passing it through the final linear stage.
Step15: Plot the Reconstruction | Python Code:
# Add the vampyre path to the system path
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
sys.path.append(vp_path)
import vampyre as vp
# Load the other packages
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Multi-Layer Perceptron Inpainting with MNIST
In the MLP demo, we saw how to use the multi-layer VAMP (ML-VAMP) method for denoising with a prior based on a multi-layer perceptron. We illustrated the method on synthetic data generated from a random MLP model. Here we will repeat the experiment with the MNIST data. Specifically, we consider the problem of estimating an MNIST digit image $x$ from linear measurements $y$ of the form,
$$
y = Ax,
$$
where $A$ is a sub-sampling operation. The sub-sampling operation outputs a subset of the pixels in $x$ corresponding to some non-occluded area. This problem of reconstructing an image $x$ with a portion of the image removed is called inpainting. Inpainting requires a prior on the image. In this demo, we will use an MLP generative model for that prior.
Importing the Package
We first import the vampyre and other packages as in the sparse linear inverse demo.
End of explanation
import tensorflow as tf
import numpy as np
import scipy
import matplotlib.pyplot as plt
import pickle
from tensorflow.examples.tutorials.mnist import input_data
Explanation: We will also need other packages including tensorflow.
End of explanation
# Load the VAE model
if not os.path.exists('param.p'):
raise Exception("The parameter file, param.p, is not available. "+
"Run the program vae_train.py to build the vae model and save the"+
" parameters")
[Wdec,bdec,Wenc,benc] = pickle.load(open("param.p","rb"))
print("Model successfully loaded.")
Explanation: Loading the MLP model of MNIST
There are several widely-used methods for learning generative MLP models for complex data. In this demo, we will simply load the model from the saved parameter file. The model was trained using a so-called variational auto-encoder method by Kingma and Welling. You can recreate the model yourself by running the program vae_train.py.
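A quick sanity check on what was loaded (assuming the pickled parameters are NumPy arrays; the exact sizes depend on how vae_train.py was configured):
for i, (W, b) in enumerate(zip(Wdec, bdec)):
    print("Decoder layer %d: W shape %s, b shape %s" % (i, str(W.shape), str(b.shape)))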
End of explanation
npix = 784
nrow = 28
row0 = 10 # First row to erase
row1 = 20 # Last row to erase
Ierase = range(nrow*row0,nrow*row1)
Ikeep = np.setdiff1d(range(npix), Ierase)
Explanation: Create a Sub-Sampling Transform
In this demo, the measurements $y$ will be a sub-set of the pixels in the image $x$. Thus, the estimation problem is to recover $x$ from an occluded image. We set the vectors Ierase and Ikeep as list of pixels to erase and keep.
End of explanation
# Load MNIST
if not 'mnist' in locals():
mnist = input_data.read_data_sets('MNIST')
# Batch size to test on
batch_size = 10
# Get the test images and load them into the final layer
xtrue, labels = mnist.test.next_batch(batch_size)
xtrue = xtrue.T
# Erase the pixels
y = xtrue[Ikeep,:]
# Create the erased image
xerase = np.ones((npix,batch_size))*0.5
xerase[Ikeep,:] = y
Explanation: We obtain a set of test images, and erase the pixels
End of explanation
def plt_digit(x):
"""Plots a digit in the MNIST dataset.
:param:`x` is the digit to plot, represented as a 784-dim vector.
"""
nrow = 28
ncol = 28
xsq = x.reshape((nrow,ncol))
plt.imshow(np.maximum(0,xsq), cmap='Greys_r')
plt.xticks([])
plt.yticks([])
Explanation: We will use the following function to plot the images
End of explanation
nplot = 5 # number of samples to plot
nrow_plot = 2 # number of images per row
for icol in range(nplot):
plt.figure(figsize=(5,5))
plt.subplot(nplot,nrow_plot,icol*nrow_plot+1)
plt_digit(xtrue[:,icol])
if (icol == 0):
plt.title('Original')
plt.subplot(nplot,nrow_plot,icol*nrow_plot+2)
plt_digit(xerase[:,icol])
if (icol == 0):
plt.title('Erased')
Explanation: We plot a few examples of the original image and the occluded image. The occlusion is shown as the gray band.
End of explanation
def logistic_var():
"""Finds a variance to match probit and logistic regression.
Finds a variance :math:`\\tau_w` such that,
:math:`p=P(W < z) \\approx \\frac{1}{1+e^{-z}},`
where :math:`W \\sim {\\mathcal N}(0,\\tau_w)`.
"""
z = np.linspace(-5,5,1000) # z points to test
p1 = 1/(1+np.exp(-z)) # target probability
var_test = np.linspace(2,3,1000)
err = []
for v in var_test:
p2 = 0.5*(1+scipy.special.erf(z/np.sqrt(v*2)))
err.append(np.mean((p1-p2)**2))
i = np.argmin(err)
wvar = var_test[i]
return wvar
# Since the logistic loss is not easily modeled in ML-VAMP, we replace
# the logistic loss with an approximate probit loss. To this end, we find
# a variance wvar such that the probit and logistic loss match.
wvar = logistic_var()
z = np.linspace(-5,5,1000)
p1 = 1/(1+np.exp(-z))
p2 = 0.5*(1+scipy.special.erf(z/np.sqrt(2*wvar)))
plt.plot(z, np.log10(1-p1), linewidth=2)
plt.plot(z, np.log10(1-p2), linewidth=2)
plt.grid()
plt.xlabel('z')
plt.ylabel('Loss')
plt.legend(['Logistic loss','Probit loss'])
Explanation: Represent the MLP model for ML-VAMP
We next model the MLP as a multi-layer network for ML-VAMP. One slight complication for the MNIST model is that in the VAE model, the final output stage is modeled as a logistic function which is difficult to capture in ML-VAMP (it does not have a simple analytic denoiser). So, we replace it with a probit output with the probit variance set to match the logistic variance.
End of explanation
# Construct the first layer which is a Gaussian prior
batch_size = 10
n0 = Wdec[0].shape[0]
est0 = vp.estim.GaussEst(0,1,shape=(n0,batch_size),name='Gauss input')
est_list = [est0]
# To improve the robustness, we add damping and place a lower bound on the variance
damp = 0.75
damp_var = 0.5
alpha_max = 1-1e-3
rvar_min = 0.01
# Loop over layers in the decoder model
nlayers = len(Wdec)
msg_hdl_list = []
ind = 0
for i in range(nlayers):
# Get matrices for the layer
Wi = Wdec[i].T
bi = bdec[i]
# On the final layer, perform the erasing and add noise
if (i < nlayers-1):
wvari = 0
else:
Wi = Wi[Ikeep,:]
bi = bi[Ikeep]
wvari = wvar
n1,n0 = Wi.shape
zshape0 = (n0,batch_size)
zshape1 = (n1,batch_size)
name = 'Dense_%d' % ind
Wiop = vp.trans.MatrixLT(Wi,zshape0)
esti = vp.estim.LinEstTwo(Wiop,bi[:,None],wvari,name=name)
est_list.append(esti)
# Add the nonlinear layer
if (i < nlayers-1):
name = 'ReLU_%d' % ind
# For all but the last layer, this is a ReLU
esti = vp.estim.ReLUEst(zshape1,map_est=False,name=name)
else:
# For the final layer it is a hard threshold
esti = vp.estim.BinaryQuantEst(y,zshape1,name='Quantize')
est_list.append(esti)
# Add the message handlers
msg_hdl = vp.estim.MsgHdlSimp(shape=zshape0,damp=damp,damp_var=damp_var,\
alpha_max=alpha_max,rvar_min=rvar_min)
msg_hdl_list.append(msg_hdl)
msg_hdl = vp.estim.MsgHdlSimp(shape=zshape1,damp=damp,damp_var=damp_var,\
alpha_max=alpha_max,rvar_min=rvar_min)
msg_hdl_list.append(msg_hdl)
ind += 1
# For further robustness, we limit the variance ratio in layer 1 (the ReLU layer)
msg_hdl_list[1].alpha_max = 0.95
Explanation: We next create the network to represent the MLP model for ML-VAMP. This is identical to MLP demo where we create one estimator and message handler for each stage.
End of explanation
# Create the MLVamp solver
nit = 50
solver = vp.solver.MLVamp(est_list,msg_hdl_list,comp_cost=True,\
hist_list=['zhat','zhatvar'],nit=nit)
Explanation: Reconstruct the MNIST data
To solve for the unknown pixels, we first create a solver.
End of explanation
solver.summary()
Explanation: We can print the summary of the model. Notice that the final layer, which corresponds to the measurements, has 504 pixels which are the observed pixels.
End of explanation
solver.solve()
Explanation: Now, we run the solver. This should take just a few seconds.
End of explanation
zhat = solver.zhat
Wi = Wdec[nlayers-1].T
bi = bdec[nlayers-1]
zfinal = Wi.dot(zhat[-2]) + bi[:,None]
xhat = 1/(1+np.exp(-zfinal))
Explanation: Extract the final reconstruction. We can do this by extracting the estimate for the second last stage zhat[-2] and then passing it through the final linear stage.
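As a quick quantitative check (a sketch reusing Ierase, xhat and xtrue from above), the reconstruction error restricted to the erased pixels can be computed as:
mse_erased = np.mean((xhat[Ierase,:] - xtrue[Ierase,:])**2)
print("MSE on erased pixels: %f" % mse_erased)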
End of explanation
ncol = 10
nrow = 3
plt.figure(figsize=(10,5))
for icol in range(ncol):
plt.subplot(ncol,nrow,icol*nrow+1)
plt_digit(xtrue[:,icol])
if (icol == 0):
plt.title('Original')
plt.subplot(ncol,nrow,icol*nrow+2)
plt_digit(xerase[:,icol])
if (icol == 0):
plt.title('Erased')
plt.subplot(ncol,nrow,icol*nrow+3)
plt_digit(xhat[:,icol])
if (icol == 0):
plt.title('ML-VAMP')
Explanation: Plot the Reconstruction
End of explanation |
10,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery Pipeline
Google Cloud Datalab, with the pipeline subcommand, enables productionizing (i.e. scheduling and orchestrating) notebooks that accomplish ETL with BigQuery and GCS. It uses Apache Airflow (https
Step1: Building a data transformation pipeline
The pipeline subcommand deploys and orchestrates an ETL pipeline. It supports specifying either an existing BQ table or a GCS path (with accompanying schema) as the data input, executing a transformation with BQ SQL and producing an output of the results (again, either a BQ table or a GCS path). This pipeline can be executed on a schedule. Additionally, parameters can be specified to templatize or customize the pipeline.
Step2: When the above cell is run, a pipeline is deployed and the results of the query are written into the BQ results table (i.e. $results_table). It could take 5-10 min between when the cell is executed for the result_table to show up. Below, we'll see additional examples for alternate ways of specifying the source, the source-types supported, and for customizing the pipeline.
Parameterization
The parameters section provides the ability to customize the inputs and outputs of the pipeline. These parameters are merged with the SQL query parameters into a list, and are specified in the cell body (along the same lines as the %bq execute command, for example).
In addition to parameters that the users can define, the following mapping keys have been made available for formatting strings and are designed to capture common scenarios around parameterizing the pipeline with the execution timestamp.
'_ds'
Step3: SQL-based data transformation pipeline for GCS data
pipeline also supports specifying GCS paths as both the input (accompanied by a schema) and output, thus completely bypassing the specification of any BQ tables. Garbage collection of all intermediate BQ tables is handled for the user.
Step4: Load data from GCS into BigQuery
pipeline can also be used to parameterize and schedule the loading of data from GCS to BQ, i.e the equivalent of the %bq load command.
Step5: Extract data from BigQuery into GCS
Similar to load, pipeline can also be used to perform the equivalent of the %bq extract command. To illustrate, we extract the data in the table that was the result of the 'load' pipeline, and write it to a GCS file.
Now, it's possible that if you "Ran All Cells" in this notebook, this pipeline gets deployed at the same time as the previous load-pipeline, in which case the source table isn't yet ready. Hence we set retries to 3, with a delay of 90 seconds and hope that the table eventually does get created and this pipeline is successful.
Step6: Output of successful pipeline runs
Step7: Cleanup
Step8: Stop Billing | Python Code:
import datetime
import google.datalab.bigquery as bq
import google.datalab.contrib.bigquery.commands
import google.datalab.contrib.pipeline.airflow
import google.datalab.contrib.pipeline.composer
import google.datalab.kernel
import google.datalab.storage as storage
from google.datalab import Context
project = Context.default().project_id
# Composer variables (change this as per your preference)
environment = 'rajivpb-composer-next'
location = 'us-central1'
# Airflow setup variables
vm_name = 'datalab-airflow'
gcs_dag_bucket_name = project + '-' + vm_name
gcs_dag_file_path = 'dags'
# Set up the GCS bucket and BQ dataset
bucket_name = project + '-bq_pipeline'
bucket = storage.Bucket(bucket_name)
bucket.create()
print(bucket.exists())
dataset_name = 'bq_pipeline'
dataset = bq.Dataset(dataset_name)
dataset.create()
print(dataset.exists())
# Start and end timestamps for our pipelines.
start = datetime.datetime.now()
formatted_start = start.strftime('%Y%m%dT%H%M%S')
end = start + datetime.timedelta(minutes=5)
%bq pipeline -h
Explanation: BigQuery Pipeline
Google Cloud Datalab, with the pipeline subcommand, enables productionizing (i.e. scheduling and orchestrating) notebooks that accomplish ETL with BigQuery and GCS. It uses Apache Airflow (https://airflow.apache.org/start.html) as the underlying technology for orchestrating and scheduling.
Disclaimer: This is still in the experimental stage.
Setup
Google Cloud Composer
Set up a Google Cloud Composer environment using the instructions here: https://cloud.google.com/composer/docs/quickstart, and specify 'datalab' as a python dependency using the instructions here: https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies. The examples in the cells below assume that a Composer environment is available.
Airflow
Alternatively, you could also set up your own VM with Airflow as a long-running process. Run the "Airflow Setup" notebook (under samples/contrib/pipeline/); it will set up a GCE VM with the Airflow scheduler and the dashboard webserver.
The pipeline subcommand in the cells below (and for the pipelines to be deployed successfully) needs either the Composer setup or the Airflow setup.
End of explanation
github_archive = 'githubarchive.month.201802'
%%bq query --name my_pull_request_events
SELECT id, created_at, repo.name FROM input
WHERE actor.login = 'rajivpb' AND type = 'PullRequestEvent'
# We designate the following 'output' for our pipeline.
results_table = project + '.' + dataset_name + '.' + 'pr_events_' + formatted_start
# Pipeline name is made unique by suffixing a timestamp
pipeline_name = 'github_once_' + formatted_start
%%bq pipeline --name $pipeline_name -e $environment -l $location
input:
table: $github_archive
transformation:
query: my_pull_request_events
output:
table: $results_table
mode: overwrite
schedule:
start: $start
end: $end
interval: '@once'
catchup: True
Explanation: Building a data transformation pipeline
The pipeline subcommand deploys and orchestrates an ETL pipeline. It supports specifying either an existing BQ table or a GCS path (with accompanying schema) as the data input, executing a transformation with BQ SQL and producing an output of the results (again, either a BQ table or a GCS path). This pipeline can be executed on a schedule. Additionally, parameters can be specified to templatize or customize the pipeline.
End of explanation
# The source/input is formatted with the built-in mapping keys _ts_year and
# _ts_month and these are evaluated (or "bound") at the time of pipeline
# execution. This could be at some point in the future, or at some point in the
# "past" in cases where a backfill job is being executed.
github_archive_current_month = 'githubarchive.month.%(_ts_year)s%(_ts_month)s'
# The destination/output is formatted with additional user-defined parameters
# 'project', 'dataset', and 'user'. These are evaluated/bound at the time of
# execution of the %bq pipeline cell.
results_table = '%(project)s.%(dataset_name)s.%(user)s_pr_events_%(_ts_nodash)s'
pipeline_name = 'github_parameterized_' + formatted_start
%%bq query --name my_pull_request_events
SELECT id, created_at, repo.name FROM input
WHERE actor.login = @user AND type = 'PullRequestEvent'
%%bq pipeline --name $pipeline_name -e $environment -l $location
input:
table: $github_archive_current_month
transformation:
query: my_pull_request_events
output:
table: $results_table
mode: overwrite
parameters:
- name: user
type: STRING
value: 'rajivpb'
- name: project
type: STRING
value: $project
- name: dataset_name
type: STRING
value: $dataset_name
schedule:
start: $start
end: $end
interval: '@once'
catchup: True
Explanation: When the above cell is run, a pipeline is deployed and the results of the query are written into the BQ results table (i.e. $results_table). It could take 5-10 minutes after the cell is executed for the results table to show up. Below, we'll see additional examples for alternate ways of specifying the source, the source-types supported, and for customizing the pipeline.
Parameterization
The parameters section provides the ability to customize the inputs and outputs of the pipeline. These parameters are merged with the SQL query parameters into a list, and are specified in the cell body (along the same lines as the %bq execute command, for example).
In addition to parameters that the users can define, the following mapping keys have been made available for formatting strings and are designed to capture common scenarios around parameterizing the pipeline with the execution timestamp.
'_ds': the date formatted as YYYY-MM-DD
'_ts': the full ISO-formatted timestamp YYYY-MM-DDTHH:MM:SS.mmmmmm
'_ds_nodash': the date formatted as YYYYMMDD (i.e. YYYY-MM-DD with 'no dashes')
'_ts_nodash': the timestamp formatted as YYYYMMDDTHHMMSSmmmmmm (i.e full ISO-formatted timestamp without dashes or colons)
'_ts_year': 4-digit year
'_ts_month': '1'-'12'
'_ts_day': '1'-'31'
'_ts_hour': '0'-'23'
'_ts_minute': '0'-'59'
'_ts_second': '0'-'59'
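For illustration, the substitution is ordinary Python %-style string formatting; with purely hypothetical values (none of these are real project or user names), the parameterized table name above would resolve as:
print('%(project)s.%(dataset_name)s.%(user)s_pr_events_%(_ts_nodash)s' % {
    'project': 'my-project', 'dataset_name': 'bq_pipeline', 'user': 'some_user',
    '_ts_nodash': '20180215T100000123456'})
# -> my-project.bq_pipeline.some_user_pr_events_20180215T100000123456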
End of explanation
gcs_input_path = 'gs://cloud-datalab-samples/cars.csv'
gcs_output_path = 'gs://%(bucket_name)s/all_makes_%(_ts_nodash)s.csv'
pipeline_name = 'gcs_to_gcs_transform_' + formatted_start
%%bq query --name all_makes
SELECT Make FROM input
%%bq pipeline --name $pipeline_name -e $environment -l $location
input:
path: $gcs_input_path
schema:
- name: Year
type: INTEGER
- name: Make
type: STRING
- name: Model
type: STRING
- name: Description
type: STRING
- name: Price
type: FLOAT
csv:
skip: 1
transformation:
query: all_makes
output:
path: $gcs_output_path
parameters:
- name: bucket_name
type: STRING
value: $bucket_name
schedule:
start: $start
end: $end
interval: '@once'
catchup: True
Explanation: SQL-based data transformation pipeline for GCS data
pipeline also supports specifying GCS paths as both the input (accompanied by a schema) and output, thus completely bypassing the specification of any BQ tables. Garbage collection of all intermediate BQ tables is handled for the user.
End of explanation
bq_load_results_table = '%(project)s.%(dataset_name)s.cars_load'
pipeline_name = 'load_gcs_to_bq_' + formatted_start
%%bq pipeline --name $pipeline_name -e $environment -l $location
load:
path: $gcs_input_path
schema:
- name: Year
type: INTEGER
- name: Make
type: STRING
- name: Model
type: STRING
- name: Description
type: STRING
- name: Price
type: FLOAT
csv:
skip: 1
table: $bq_load_results_table
mode: overwrite
parameters:
- name: project
type: STRING
value: $project
- name: dataset_name
type: STRING
value: $dataset_name
schedule:
start: $start
end: $end
interval: '@once'
catchup: True
Explanation: Load data from GCS into BigQuery
pipeline can also be used to parameterize and schedule the loading of data from GCS to BQ, i.e. the equivalent of the %bq load command.
End of explanation
gcs_extract_path = 'gs://%(bucket_name)s/cars_extract_%(_ts_nodash)s.csv'
pipeline_name = 'extract_bq_to_gcs_' + formatted_start
%%bq pipeline --name $pipeline_name -e $environment -l $location
extract:
table: $bq_load_results_table
path: $gcs_extract_path
format: csv
csv:
delimiter: '#'
parameters:
- name: bucket_name
type: STRING
value: $bucket_name
- name: project
type: STRING
value: $project
- name: dataset_name
type: STRING
value: $dataset_name
schedule:
start: $start
interval: '@once'
catchup: True
retries: 3
retry_delay_seconds: 90
Explanation: Extract data from BigQuery into GCS
Similar to load, pipeline can also be used to perform the equivalent of the %bq extract command. To illustrate, we extract the data in the table that was the result of the 'load' pipeline, and write it to a GCS file.
Now, it's possible that if you "Ran All Cells" in this notebook, this pipeline gets deployed at the same time as the previous load-pipeline, in which case the source table isn't yet ready. Hence we set retries to 3, with a delay of 90 seconds and hope that the table eventually does get created and this pipeline is successful.
End of explanation
# You will see two files named all_makes_<timestamp> and cars_extract_<timestamp>
# under the bucket:
!gsutil ls gs://$bucket_name
# You will see three tables named cars_load, pr_events_<timestamp> and
# <user>_pr_events_<timestamp> under the BigQuery dataset:
!bq ls $dataset_name
Explanation: Output of successful pipeline runs
End of explanation
# Delete the contents of the GCS bucket, the GCS bucket itself, and the BQ
# dataset. Uncomment the lines below and execute.
#!gsutil rm -r gs://$bucket_name
#!bq rm -r -f $dataset_name
Explanation: Cleanup
End of explanation
# If you chose the Airflow VM (in the Setup), this will delete the VM. Uncomment the
# line below and execute.
#!gcloud compute instances stop $vm_name --zone us-central1-b --quiet
# This just verifies that cleanup actually worked. Run this after running the
# 'Cleanup' cell
#Should show two error messages like "BucketNotFoundException: 404 gs://..."
!gsutil ls gs://$bucket_name
!gsutil ls gs://$gcs_dag_bucket_name/dags
#Should show an error message like "BigQuery error in ls operation: Not found ..."
!bq ls $dataset_name
Explanation: Stop Billing
End of explanation |
10,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of U.S. Incomes by Occupation and Gender
Notebook by Harish Kesava Rao
Use of this dataset should cite the Bureau of Labor Statistics as per their copyright information
Step1: We assign more meaningful and self explanatory column headers. The names list can be assigned to the names parameter when reading in the csv file.
Step2: To manipulate and perform operations on numerical columns, we will need to handle null/empty/non-numerical values in the numerical columns. In our dataset, this means everything except the Occupations column.
Step3: The original header that was part of the csv can be dropped
Step4: The following looks more readable
Step5: Now, we will slice the dataset to understand more about the data
Step6: The sorting did not happen by the All_workers column. Let's see why that is the case.
Step7: The columns are objects. We will need to convert them to primitive types, such as integer, string etc.
Step8: The conversion happened for All_workers, Male_Workers (in 1000s), Female_Workers (in 1000s), but not for the other columns. We will see why.
Step9: The columns for which the type conversion did not happend contain 'Na' values in them. These look like NULL or NA values, but actually, they are strings with the value 'Na'. We can work around these values by replacing them with 0.
Step10: This looks better, without the Na string values. Now, we will convert all the columns, except Occupations to int.
Occupations is a string column and we can leave it as an object type.
Step11: We will now slice the data and analyze the count of workers in each occupation.
Step12: The first row shows the total number of workers. We will drop the row since it is not useful for our analysis and will skew the overall pattern.
Step13: We have a plot showing top employee count for 5 occupations. We will append gender wise numbers on the same plot. For this purpose, we use pandas' melt function. https
Step14: In the above plot, we see that there are some professions involving only male employees - such as sales, production, transportation, construction etc. Similarly, there are occupations that involve only female employees - such as healthcare professionals, education etc. This is just a sample of 20 occupations. We can analyze more, as seen below.
Step15: We can observe a similar trend in the above plot as well, where transportation, construction, production, maintenance and groundskeeping involve male employees and education and healthcare involves female employees. We can also see that business, office, sale etc. have both female and male employees.
Now, let us sort the Occupations by weekly median income, irrespective of gender of employees | Python Code:
import pandas as pd
from pandas import DataFrame, Series
Explanation: Analysis of U.S. Incomes by Occupation and Gender
Notebook by Harish Kesava Rao
Use of this dataset should cite the Bureau of Labor Statistics as per their copyright information: The Bureau of Labor Statistics (BLS) is a Federal government agency and everything that we publish, both in hard copy and electronically, is in the public domain, except for previously copyrighted photographs and illustrations. You are free to use our public domain material without specific permission, although we do ask that you cite the Bureau of Labor Statistics as the source.
What kind of questions are we aiming to answer through the data analysis?
<ol>
<li>Is there an inequality of income in the labor force?<br></li>
<li>Is it more pronounced in certain sectors than others?<br></li>
<li>Can the dataset help us answer the questions?<br>
a. Is the dataset in a useable format?<br>
b. How much of pre-analysis preparation is required before we start analyzing the dataset?<br></li>
<li>What other inferences can we derive out of the dataset?</li>
</ol>
Analyzing the dataset
Median weekly earnings of full-time wage and salary workers by detailed occupation and sex.<br>
<b>Occupation:</b> Job title as given from BLS. Industry summaries are given in ALL CAPS.<br>
<b>All_workers:</b> Number of workers male and female, in thousands.<br>
<b>All_weekly:</b> Median weekly income including male and female workers, in USD.<br>
<b>M_workers:</b> Number of male workers, in thousands.<br>
<b>M_weekly:</b> Median weekly income for male workers, in USD.<br>
<b>F_workers:</b> Number of female workers, in thousands.<br>
<b>F_weekly:</b> Median weekly income for female workers, in USD.<br>
End of explanation
names = ['Occupations','All_workers', 'Weekly_Income_Overall',
'Male_Workers (in 1000s)', 'Median_weekly_income (Male)',
'Female_Workers (in 1000s)', 'Median_weekly_income (Female)']
#mac
income = pd.read_csv('/Users/Harish/Documents/HK_Work/Python/Python-for-Data-Analysis/Kaggle-US-Incomes/inc_occ_gender.csv'
,names=names, header=None)
#win
#income = pd.read_csv(r'C:\Users\\Documents\Personal\Python-for-Data-Analysis\Kaggle-US-Incomes\inc_occ_gender.csv'
# ,names=names, header=None)
Explanation: We assign more meaningful and self explanatory column headers. The names list can be assigned to the names parameter when reading in the csv file.
End of explanation
income.head()
Explanation: To manipulate and perform operations on numerical columns, we will need to handle null/empty/non-numerical values in the numerical columns. In our dataset, this means everything except the Occupations column.
End of explanation
income.drop(0, inplace=True)
income['income_index'] = income.index
Explanation: The original header that was part of the csv can be dropped
End of explanation
income.head()
Explanation: The following looks more readable
End of explanation
#selecting just the occupations and worker count into a new dataframe
job_type_by_worker_count = income[['Occupations','All_workers']]
job_type_by_worker_count.sort_values(by='All_workers', ascending =False)
Explanation: Now, we will slice the dataset to understand more about the data
End of explanation
income.dtypes
Explanation: The sorting did not happen by the All_workers column. Let's see why that is the case.
End of explanation
job_type_by_worker_count2 = income.apply(pd.to_numeric, errors = 'ignore')
job_type_by_worker_count2.dtypes
Explanation: The columns are objects. We will need to convert them to primitive types, such as integer, string etc.
End of explanation
job_type_by_worker_count2.head(n=15)
Explanation: The conversion happened for All_workers, Male_Workers (in 1000s), Female_Workers (in 1000s), but not for the other columns. We will see why.
End of explanation
income.replace('Na',0,inplace=True)
income.head(n=15)
Explanation: The columns for which the type conversion did not happen contain 'Na' values in them. These look like NULL or NA values, but actually, they are strings with the value 'Na'. We can work around these values by replacing them with 0.
End of explanation
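# Aside, an alternative sketch (not used in the rest of this notebook): instead of
# replacing the 'Na' strings first, pd.to_numeric with errors='coerce' turns any
# unparseable string into NaN, which can then be filled with 0 in a single step.
# 'numeric_cols' and 'income_alt' are throwaway names for this illustration only.
numeric_cols = income.columns.drop('Occupations')
income_alt = income.copy()
income_alt[numeric_cols] = income_alt[numeric_cols].apply(pd.to_numeric, errors='coerce').fillna(0)
income_alt.dtypes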
income_clean = income.apply(pd.to_numeric, errors = 'ignore')
income_clean.dtypes
Explanation: This looks better, without the Na string values. Now, we will convert all the columns, except Occupations to int.
Occupations is a string column and we can leave it as an object type.
End of explanation
top10_worker_count = income_clean.sort_values(by='All_workers', axis=0,
ascending=False)[['Occupations','All_workers',
'Male_Workers (in 1000s)',
'Female_Workers (in 1000s)']]
top10_worker_count.drop(1, axis=0, inplace=True)
Explanation: We will now slice the data and analyze the count of workers in each occupation.
End of explanation
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
top_worker_count_barplot = sb.factorplot(x='Occupations',
y='All_workers', data=top10_worker_count.head(n=5), hue='All_workers',
size=7, aspect=2.3,kind='bar')
top_worker_count_barplot.set(xlabel = "Occupations", ylabel = "Workers - All, Male, Female",
title = "Top worker counts and the occupations")
sb.set(font_scale=2.3)
Explanation: The first row shows the total number of workers. We will drop the row since it is not useful for our analysis and will skew the overall pattern.
End of explanation
top10_worker_count.head()
top10_worker_count = pd.melt(top10_worker_count, id_vars=["Occupations"],
value_vars=['All_workers','Male_Workers (in 1000s)','Female_Workers (in 1000s)'])
top10_mf_count = income_clean.sort_values(by='All_workers', axis=0,
ascending=False)[['Occupations',
'Male_Workers (in 1000s)',
'Female_Workers (in 1000s)']]
top10_mf_count.drop(1, axis=0, inplace=True)
top10_male_female = pd.melt(top10_mf_count, id_vars=["Occupations"],
value_vars=['Male_Workers (in 1000s)','Female_Workers (in 1000s)'])
top10_worker_count.replace('NaN',0,inplace=True)
top10_male_female.replace('NaN',0,inplace=True)
sorted_worker_count_n = top10_worker_count.sort_values(by='value', axis=0,
ascending=False)[['Occupations','variable','value']]
sorted_male_female_count_n = top10_male_female.sort_values(by='value', axis=0,
ascending=False)[['Occupations','variable','value']]
sorted_worker_count_n.head(n=5)
sorted_male_female_count_n.head(n=5)
sorted_worker_count_n_barplot = sb.factorplot(x='Occupations',
y='value', data=sorted_worker_count_n.head(n=10), hue='variable',
palette="bright",
size=10, aspect=5,kind='bar')
sorted_worker_count_n_barplot.set(xlabel = "Occupations", ylabel = "Workers - All, Male, Female",
title = "Top worker counts and the occupations")
sb.set(font_scale=4.0)
Explanation: We have a plot showing top employee count for 5 occupations. We will append gender wise numbers on the same plot. For this purpose, we use pandas' melt function. https://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html
End of explanation
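# A tiny toy illustration of what melt does (made-up numbers, unrelated to the
# income data): the 'Male' and 'Female' columns are unpivoted into variable/value rows.
toy = pd.DataFrame({'Occupations': ['A', 'B'], 'Male': [10, 20], 'Female': [30, 40]})
pd.melt(toy, id_vars=['Occupations'], value_vars=['Male', 'Female'])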
sorted_male_female_count_n_barplot = sb.factorplot(x='Occupations',
y='value', data=sorted_male_female_count_n.head(n=10), hue='variable',
palette="bright",
size=12, aspect=5,kind='bar')
sorted_male_female_count_n_barplot.set(xlabel = "Occupations", ylabel = "Workers - All, Male, Female",
title = "Top male, female employee counts and the occupations")
sb.set(font_scale=4)
Explanation: In the above plot, we see that there are some professions involving only male employees - such as sales, production, transportation, construction etc. Similarly, there are occupations that involve only female employees - such as healthcare professionals, education etc. This is just a sample of 20 occupations. We can analyze more, as seen below.
End of explanation
top10_income_count = income_clean.sort_values(by='Weekly_Income_Overall', axis=0,
ascending=False)[['Occupations',
'Weekly_Income_Overall']]
low10_income_count = income_clean.sort_values(by='Weekly_Income_Overall', axis=0,
ascending=True)[['Occupations',
'Weekly_Income_Overall']]
top10_income_count_barplot = sb.factorplot(x='Occupations',
y='Weekly_Income_Overall', data=top10_income_count.head(n=5), hue='Occupations',
palette="bright",
size=12, aspect=5, kind='bar')
top10_income_count_barplot.set(xlabel = "Occupations", ylabel = "Weekly Income",
title = "Top Weekly Incomes and the respective occupations")
sb.set(font_scale=2)
#size=12, aspect=5,
low10_income_count = low10_income_count[(low10_income_count != 0).all(1)]
# lowest10_income_count_barplot = sb.factorplot(x='Weekly_Income_Overall',
# y='Occupations', data=low10_income_count.head(n=10), hue='Occupations',palette="bright",
# size=12, aspect=5, kind='bar')
# lowest10_income_count_barplot.set(xlabel='Occupations', ylabel = 'Weekly Income',
# title = 'Lowest Weekly Incomes and the respective occupations')
Explanation: We can observe a similar trend in the above plot as well, where transportation, construction, production, maintenance and groundskeeping involve male employees and education and healthcare involves female employees. We can also see that business, office, sale etc. have both female and male employees.
Now, let us sort the Occupations by weekly median income, irrespective of gender of employees
End of explanation |
10,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory Data Analysis
Author
Step1: The first three lines of code import the libraries we are using and rename them to shorter names.
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We will use it for basic graphics
Numpy is the fundamental package for scientific computing with Python. It contains among other things
Step2: Now let us use the describe function to see the 3 most basic summary statistics
Step3: It appears that the datasets are almost identical by looking only at the mean and the standard deviation. Instead, let us make a scatter plot for each of the data sets.
Since the data is stored in a data frame (similar to an excel sheet), we can see the column names on top and we can access the columns using the following syntax
anscombe_i.x
anscombe_i.y
or
anscombe_i['x']
anscombe_i['y']
Step4: Shockingly, we can clearly see that the datasets are quite different! The first data set has pure irreducible error, the second data set is not linear, the third dataset has an outlier, and in the fourth dataset all of the x values are the same except for an outlier. If you do not believe me, I uploaded an excel worksheet with the full datasets and summary statistics here
Now let us learn how to make a box plot. Before writing this tutorial I didn't know how to make a box plot in matplotlib (I usually use seaborn which we will learn soon). I did a quick google search for "box plot matplotlib) and found an example here which outlines a couple of styling options.
Step5: Try reading the documentation for the box plot above and make your own visualizations.
Next we are going to learn how to use Seaborn which is a very powerful visualization library. Matplotlib is a great library and has many examples of different plots, but seaborn is built on top of matplot lib and offers better plots for statistical analysis. If you do not have seaborn installed, you can follow the instructions here
Step6: Seaborn does linear regression automatically (which we will learn soon). We can also see that the linear regression is the same for each dataset even though they are quite different.
The big takeaway here is that summary statistics can be deceptive! Always make visualizations of your data before making any models.
Iris Dataset
Next we are going to visualize the Iris dataset. Let us first read the .csv and print the first elements of the dataframe. We also get the basic summary statistics.
Step7: As we can see, it is difficult to interpret the results. We can see that sepal length, sepal width, petal length and petal width are all numeric features, and the iris variable is the specific type of iris (or categorical variable). To better understand the data, we can split the data based on each type of iris, make a histogram for each numeric feature, scatter plot between features and make many visualizations. I will demonstrate the process for generating a histogram for sepal length of Iris-setosa and a scatter plot for sepal length vs width for Iris-setosa
Step8: This would help us to better undestand the data and is necessary for good analysis, but to do this for all the features and iris types (classes) would take a significant amount of time. Seaborn has a function called the pairplot which will do all of that for us! | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
anscombe_i = pd.read_csv('../datasets/anscombe_i.csv')
anscombe_ii = pd.read_csv('../datasets/anscombe_ii.csv')
anscombe_iii = pd.read_csv('../datasets/anscombe_iii.csv')
anscombe_iv = pd.read_csv('../datasets/anscombe_iv.csv')
Explanation: Exploratory Data Analysis
Author: Andrew Andrade ([email protected])
This is complimentory tutorial for datascienceguide.github.io outlining the basics of exploratory data analysis
In this tutorial, we will learn to open a comma seperated value (CSV) data file and make find summary statistics and basic visualizations on the variables in the Ansombe dataset (to see the importance of visualization). Next we will investigate Fisher's Iris data set using more powerful visualizations.
These tutorials assumes a basic understanding of python so for those new to python, understanding basic syntax will be very helpful. I recommend writing python code in Jupyter notebook as it allows you to rapidly prototype and annotate your code.
Python is a very easy language to get started with and there are many guides:
Full list:
http://docs.python-guide.org/en/latest/intro/learning/
My favourite resources:
https://docs.python.org/2/tutorial/introduction.html
https://docs.python.org/2/tutorial/
http://learnpythonthehardway.org/book/
https://www.udacity.com/wiki/cs101/%3A-python-reference
http://rosettacode.org/wiki/Category:Python
Once you are familiar with python, the first part of this guide is useful in learning some of the libraries we will be using:
http://cs231n.github.io/python-numpy-tutorial
In addition, the following post helps teach the basics for data analysis in python:
http://www.analyticsvidhya.com/blog/2014/07/baby-steps-libraries-data-structure/
http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/
Downloading csvs
We should store this in a known location on our local computer or server. The simplist way is to download and save it in the same folder you launch Jupyter notebook from, but I prefer to save my datasets in a datasets folder 1 directory up from my tutorial code (../datasets/).
You should dowload the following CSVs:
http://datascienceguide.github.io/datasets/anscombe_i.csv
http://datascienceguide.github.io/datasets/anscombe_ii.csv
http://datascienceguide.github.io/datasets/anscombe_iii.csv
http://datascienceguide.github.io/datasets/anscombe_iv.csv
http://datascienceguide.github.io/datasets/iris.csv
If using a server, you can download the file by using the following command:
bash
wget http://datascienceguide.github.io/datasets/iris.csv
Now we can run the following code to open the csv.
End of explanation
print anscombe_i[0:5]
Explanation: The first three lines of code import the libraries we are using and rename them to shorter names.
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We will use it for basic graphics
Numpy is the fundamental package for scientific computing with Python. It contains among other things:
a powerful N-dimensional array object
sophisticated (broadcasting) functions
tools for integrating C/C++ and Fortran code
useful linear algebra, Fourier transform, and random number capabilities
Pandas is open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
It extends the numpy array to allow for columns of different variable types.
Since we are using Jupyter notebook we use the line %matplotlib inline to tell python to put the figures inline with the notebook (instead of a popup)
pd.read_csv opens a .csv file and stores it into a dataframe object which we call anscombe_i, anscombe_ii, etc.
Next, let us see the structure of the data by printing the first 5 rows (using [:5]) data set:
End of explanation
print "Data Set I"
print anscombe_i.describe()[:3]
print "Data Set II"
print anscombe_ii.describe()[:3]
print "Data Set III"
print anscombe_iii.describe()[:3]
print "Data Set IV"
print anscombe_iv.describe()[:3]
Explanation: Now let us use the describe function to see the 3 most basic summary statistics
End of explanation
plt.figure(1)
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.title("anscombe_i")
plt.xlabel("x")
plt.ylabel("y")
plt.figure(2)
plt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')
plt.title("anscombe_ii")
plt.xlabel("x")
plt.ylabel("y")
plt.figure(3)
plt.scatter(anscombe_iii.x, anscombe_iii.y, color='black')
plt.title("anscombe_iii")
plt.xlabel("x")
plt.ylabel("y")
plt.figure(4)
plt.scatter(anscombe_iv.x, anscombe_iv.y, color='black')
plt.title("anscombe_iv")
plt.xlabel("x")
plt.ylabel("y")
Explanation: It appears that the datasets are almost identical by looking only at the mean and the standard deviation. Instead, let us make a scatter plot for each of the data sets.
Since the data is stored in a data frame (similar to an excel sheet), we can see the column names on top and we can access the columns using the following syntax
anscombe_i.x
anscombe_i.y
or
anscombe_i['x']
anscombe_i['y']
End of explanation
# basic box plot
plt.figure(1)
plt.boxplot(anscombe_i.y)
plt.title("anscombe_i y box plot")
Explanation: Shockingly, we can clearly see that the datasets are quite different! The first data set has pure irreducible error, the second data set is not linear, the third dataset has an outlier, and in the fourth dataset all of the x values are the same except for an outlier. If you do not believe me, I uploaded an excel worksheet with the full datasets and summary statistics here
Now let us learn how to make a box plot. Before writing this tutorial I didn't know how to make a box plot in matplotlib (I usually use seaborn which we will learn soon). I did a quick google search for "box plot matplotlib" and found an example here which outlines a couple of styling options.
End of explanation
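# One more illustrative check, using the four dataframes loaded above: the x-y
# correlation is also nearly identical (about 0.816) for every Anscombe dataset,
# which reinforces that summary statistics alone can be misleading.
{name: df['x'].corr(df['y']) for name, df in
 [('I', anscombe_i), ('II', anscombe_ii), ('III', anscombe_iii), ('IV', anscombe_iv)]}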
import seaborn as sns
sns.set(style="ticks")
# Load the example dataset for Anscombe's quartet
df = sns.load_dataset("anscombe")
# Show the results of a linear regression within each dataset
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 1})
Explanation: Try reading the documentation for the box plot above and make your own visualizations.
Next we are going to learn how to use Seaborn which is a very powerful visualization library. Matplotlib is a great library and has many examples of different plots, but seaborn is built on top of matplot lib and offers better plots for statistical analysis. If you do not have seaborn installed, you can follow the instructions here: http://stanford.edu/~mwaskom/software/seaborn/installing.html#installing . Seaborn also has many examples and also has a tutorial.
To show the power of the library we are going to plot the anscombe datasets in 1 plot following this example: http://stanford.edu/~mwaskom/software/seaborn/examples/anscombes_quartet.html . Do not worry too much about what the code does (it loads the same dataset and changes settings to make the visualization clearer), we will get more experience with seaborn soon.
End of explanation
iris = pd.read_csv('../datasets/iris.csv')
print iris[0:5]
print iris.describe()
Explanation: Seaborn does linear regression automatically (which we will learn soon). We can also see that the linear regression is the same for each dataset even though they are quite different.
The big takeaway here is that summary statistics can be deceptive! Always make visualizations of your data before making any models.
Iris Dataset
Next we are going to visualize the Iris dataset. Let us first read the .csv and print the first elements of the dataframe. We also get the basic summary statistics.
End of explanation
#select all Iris-setosa
iris_setosa = iris[iris.iris == "Iris-setosa"]
plt.figure(1)
#make histogram of sepal length
plt.hist(iris_setosa["sepal length"])
plt.xlabel("sepal length")
plt.figure(2)
plt.scatter(iris_setosa["sepal width"], iris_setosa["sepal length"] )
plt.xlabel("sepal width")
plt.ylabel("sepal lenth")
Explanation: As we can see, it is difficult to interpret the results. We can see that sepal length, sepal width, petal length and petal width are all numeric features, and the iris variable is the specific type of iris (or categorical variable). To better understand the data, we can split the data based on each type of iris, make a histogram for each numeric feature, scatter plot between features and make many visualizations. I will demonstrate the process for generating a histogram for sepal length of Iris-setosa and a scatter plot for sepal length vs width for Iris-setosa
End of explanation
sns.pairplot(iris, hue="iris")
Explanation: This would help us to better understand the data and is necessary for good analysis, but to do this for all the features and iris types (classes) would take a significant amount of time. Seaborn has a function called the pairplot which will do all of that for us!
End of explanation |
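# A compact numeric complement to the pairplot (illustrative aside, using the same
# iris dataframe loaded above): the mean of each feature for each iris class.
iris.groupby('iris').mean()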
10,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
New Term Topics Methods and Document Coloring
Step1: We're setting up our corpus now. We want to show off the new get_term_topics and get_document_topics functionalities, and a good way to do so is to play around with words which might have different meanings in different context.
The word bank is a good candidate here, where it can mean either the financial institution or a river bank.
In the toy corpus presented, there are 11 documents, 5 river related and 6 finance related.
Step2: We set up the LDA model in the corpus. We set the number of topics to be 2, and expect to see one which is to do with river banks, and one to do with financial banks.
Step3: And like we expected, the LDA model has given us near perfect results. Bank is the most influential word in both the topics, as we can see. The other words help define what kind of bank we are talking about. Let's now see where our new methods fit in.
get_term_topics
The function get_term_topics returns the odds of that particular word belonging to a particular topic.
A few examples
Step4: Makes sense, the value for it belonging to topic_0 is a lot more.
Step5: This also works out well, the word finance is more likely to be in topic_1 to do with financial banks.
Step6: And this is particularly interesting. Since the word bank is likely to be in both the topics, the values returned are also very similar.
get_document_topics and Document Word-Topic Coloring
get_document_topics is an already existing gensim functionality which uses the inference function to get the sufficient statistics and figure out the topic distribution of the document.
The addition to this is the ability for us to now know the topic distribution for each word in the document.
Let us test this with two different documents which have the word bank in it, one in the finance context and one in the river context.
The get_document_topics method returns (along with the standard document topic proprtion) the word_type followed by a list sorted with the most likely topic ids, when per_word_topics is set as true.
Step7: Now what does that output mean? It means that like word_type 1, our word_type 3, which is the word bank, is more likely to be in topic_0 than topic_1.
You must have noticed that while we unpacked into doc_topics and word_topics, there is another variable - phi_values. Like the name suggests, phi_values contains the phi values for each topic for that particular word, scaled by feature length. Phi is essentially the probability of that word in that document belonging to a particular topic. The next few lines should illustrate this.
Step8: This means that word_type 0 has the following phi_values for each of the topics.
What is intresting to note is word_type 3 - because it has 2 occurences (i.e, the word bank appears twice in the bow), we can see that the scaling by feature length is very evident. The sum of the phi_values is 2, and not 1.
Now that we know exactly what get_document_topics does, let us now do the same with our second document, bow_finance.
Step9: And lo and behold, because the word bank is now used in the financial context, it immedietly swaps to being more likely associated with topic_1.
We've seen quite clearly that based on the context, the most likely topic associated with a word can change.
This differs from our previous method, get_term_topics, where it is a 'static' topic distribution.
It must also be noted that because the gensim implementation of LDA uses Variational Bayes sampling, a word_type in a document is only given one topic distribution. For example, the sentence 'the bank by the river bank' is likely to be assigned to topic_0, and each of the bank word instances have the same distribution.
get_document_topics for entire corpus
You can get doc_topics, word_topics and phi_values for all the documents in the corpus in the following manner
Step10: In case you want to store doc_topics, word_topics and phi_values for all the documents in the corpus in a variable and later access details of a particular document using its index, it can be done in the following manner
Step11: Now, I can access details of a particular document, say Document #3, as follows
Step12: We can print details for all the documents (as shown above), in the following manner
Step13: Coloring topic-terms
These methods can come in handy when we want to color the words in a corpus or a document. If we wish to color the words in a corpus (i.e, color all the words in the dictionary of the corpus), then get_term_topics would be a better choice. If not, get_document_topics would do the trick.
We'll now attempt to color these words and plot it using matplotlib.
This is just one way to go about plotting words - there are more and better ways.
WordCloud is such a python package which also does this.
For our simple illustration, let's keep topic_1 as red, and topic_0 as blue.
Step14: Let us revisit our old examples to show some examples of document coloring
Step15: What is fun to note here is that while bank was colored blue in our first example, it is now red because of the financial context - something which the numbers proved to us before.
Step16: We see that the document word coloring is done just the way we expected. | Python Code:
from gensim.corpora import Dictionary
from gensim.models import ldamodel
import numpy
%matplotlib inline
Explanation: New Term Topics Methods and Document Coloring
End of explanation
texts = [['bank','river','shore','water'],
['river','water','flow','fast','tree'],
['bank','water','fall','flow'],
['bank','bank','water','rain','river'],
['river','water','mud','tree'],
['money','transaction','bank','finance'],
['bank','borrow','money'],
['bank','finance'],
['finance','money','sell','bank'],
['borrow','sell'],
['bank','loan','sell']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
Explanation: We're setting up our corpus now. We want to show off the new get_term_topics and get_document_topics functionalities, and a good way to do so is to play around with words which might have different meanings in different context.
The word bank is a good candidate here, where it can mean either the financial institution or a river bank.
In the toy corpus presented, there are 11 documents, 5 river related and 6 finance related.
End of explanation
numpy.random.seed(1) # setting random seed to get the same results each time.
model = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2)
model.show_topics()
Explanation: We set up the LDA model in the corpus. We set the number of topics to be 2, and expect to see one which is to do with river banks, and one to do with financial banks.
End of explanation
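# Illustrative aside: show_topic returns the top word/probability pairs for a single
# topic, which can be easier to read than the formatted strings returned by show_topics().
for topic_id in range(2):
    print(model.show_topic(topic_id))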
model.get_term_topics('water')
Explanation: And like we expected, the LDA model has given us near perfect results. Bank is the most influential word in both the topics, as we can see. The other words help define what kind of bank we are talking about. Let's now see where our new methods fit in.
get_term_topics
The function get_term_topics returns the odds of that particular word belonging to a particular topic.
A few examples:
End of explanation
model.get_term_topics('finance')
Explanation: Makes sense, the value for it belonging to topic_0 is a lot more.
End of explanation
model.get_term_topics('bank')
Explanation: This also works out well, the word finance is more likely to be in topic_1 to do with financial banks.
End of explanation
bow_water = ['bank','water','bank']
bow_finance = ['bank','finance','bank']
bow = model.id2word.doc2bow(bow_water) # convert to bag of words format first
doc_topics, word_topics, phi_values = model.get_document_topics(bow, per_word_topics=True)
word_topics
Explanation: And this is particularly interesting. Since the word bank is likely to be in both the topics, the values returned are also very similar.
get_document_topics and Document Word-Topic Coloring
get_document_topics is an already existing gensim functionality which uses the inference function to get the sufficient statistics and figure out the topic distribution of the document.
The addition to this is the ability for us to now know the topic distribution for each word in the document.
Let us test this with two different documents which have the word bank in it, one in the finance context and one in the river context.
The get_document_topics method returns (along with the standard document topic proportion) the word_type followed by a list sorted with the most likely topic ids, when per_word_topics is set as true.
End of explanation
phi_values
Explanation: Now what does that output mean? It means that like word_type 1, our word_type 3, which is the word bank, is more likely to be in topic_0 than topic_1.
You must have noticed that while we unpacked into doc_topics and word_topics, there is another variable - phi_values. Like the name suggests, phi_values contains the phi values for each topic for that particular word, scaled by feature length. Phi is essentially the probability of that word in that document belonging to a particular topic. The next few lines should illustrate this.
End of explanation
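# Since the discussion refers to words by their integer ids (e.g. word_type 3 for
# 'bank'), it can help to look at the dictionary's token-to-id mapping directly.
print(dictionary.token2id)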
bow = model.id2word.doc2bow(bow_finance) # convert to bag of words format first
doc_topics, word_topics, phi_values = model.get_document_topics(bow, per_word_topics=True)
word_topics
Explanation: This means that word_type 0 has the following phi_values for each of the topics.
What is interesting to note is word_type 3 - because it has 2 occurrences (i.e., the word bank appears twice in the bow), we can see that the scaling by feature length is very evident. The sum of the phi_values is 2, and not 1.
Now that we know exactly what get_document_topics does, let us now do the same with our second document, bow_finance.
End of explanation
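# Quick illustrative sanity check, assuming phi_values is a list of
# (word_id, [(topic_id, phi), ...]) pairs as described above: the phi values for
# word_type 3 ('bank') should sum to roughly 2.0, since 'bank' occurs twice.
print(sum(phi for _, phi in dict(phi_values)[3]))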
all_topics = model.get_document_topics(corpus, per_word_topics=True)
for doc_topics, word_topics, phi_values in all_topics:
print('New Document \n')
    print('Document topics:', doc_topics)
    print('Word topics:', word_topics)
    print('Phi values:', phi_values)
print(" ")
print('-------------- \n')
Explanation: And lo and behold, because the word bank is now used in the financial context, it immediately swaps to being more likely associated with topic_1.
We've seen quite clearly that based on the context, the most likely topic associated with a word can change.
This differs from our previous method, get_term_topics, where it is a 'static' topic distribution.
It must also be noted that because the gensim implementation of LDA uses Variational Bayes sampling, a word_type in a document is only given one topic distribution. For example, the sentence 'the bank by the river bank' is likely to be assigned to topic_0, and each of the bank word instances have the same distribution.
get_document_topics for entire corpus
You can get doc_topics, word_topics and phi_values for all the documents in the corpus in the following manner :
End of explanation
topics = model.get_document_topics(corpus, per_word_topics=True)
all_topics = [(doc_topics, word_topics, word_phis) for doc_topics, word_topics, word_phis in topics]
Explanation: In case you want to store doc_topics, word_topics and phi_values for all the documents in the corpus in a variable and later access details of a particular document using its index, it can be done in the following manner:
End of explanation
doc_topic, word_topics, phi_values = all_topics[2]
print('Document topic:', doc_topic, "\n")
print('Word topic:', word_topics, "\n")
print('Phi value:', phi_values)
Explanation: Now, I can access details of a particular document, say Document #3, as follows:
End of explanation
for doc in all_topics:
print('New Document \n')
    print('Document topic:', doc[0])
    print('Word topic:', doc[1])
    print('Phi value:', doc[2])
print(" ")
print('-------------- \n')
Explanation: We can print details for all the documents (as shown above), in the following manner:
End of explanation
# this is a sample method to color words. Like mentioned before, there are many ways to do this.
def color_words(model, doc):
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# make into bag of words
doc = model.id2word.doc2bow(doc)
# get word_topics
doc_topics, word_topics, phi_values = model.get_document_topics(doc, per_word_topics=True)
# color-topic matching
topic_colors = { 1:'red', 0:'blue'}
# set up fig to plot
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
# a sort of hack to make sure the words are well spaced out.
word_pos = 1/len(doc)
# use matplotlib to plot words
for word, topics in word_topics:
ax.text(word_pos, 0.8, model.id2word[word],
horizontalalignment='center',
verticalalignment='center',
fontsize=20, color=topic_colors[topics[0]], # choose just the most likely topic
transform=ax.transAxes)
word_pos += 0.2 # to move the word for the next iter
ax.set_axis_off()
plt.show()
Explanation: Coloring topic-terms
These methods can come in handy when we want to color the words in a corpus or a document. If we wish to color the words in a corpus (i.e, color all the words in the dictionary of the corpus), then get_term_topics would be a better choice. If not, get_document_topics would do the trick.
We'll now attempt to color these words and plot it using matplotlib.
This is just one way to go about plotting words - there are more and better ways.
WordCloud is such a python package which also does this.
For our simple illustration, let's keep topic_1 as red, and topic_0 as blue.
End of explanation
# our river bank document
bow_water = ['bank','water','bank']
color_words(model, bow_water)
bow_finance = ['bank','finance','bank']
color_words(model, bow_finance)
Explanation: Let us revisit our old examples to show some examples of document coloring
End of explanation
# sample doc with a somewhat even distribution of words among the likely topics
doc = ['bank', 'water', 'bank', 'finance', 'money','sell','river','fast','tree']
color_words(model, doc)
Explanation: What is fun to note here is that while bank was colored blue in our first example, it is now red because of the financial context - something which the numbers proved to us before.
End of explanation
def color_words_dict(model, dictionary):
import matplotlib.pyplot as plt
import matplotlib.patches as patches
word_topics = []
for word_id in dictionary:
word = str(dictionary[word_id])
# get_term_topics returns static topics, as mentioned before
probs = model.get_term_topics(word)
# we are creating word_topics which is similar to the one created by get_document_topics
try:
if probs[0][1] >= probs[1][1]:
word_topics.append((word_id, [0, 1]))
else:
word_topics.append((word_id, [1, 0]))
# this in the case only one topic is returned
except IndexError:
word_topics.append((word_id, [probs[0][0]]))
# color-topic matching
topic_colors = { 1:'red', 0:'blue'}
# set up fig to plot
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
# a sort of hack to make sure the words are well spaced out.
word_pos = 1/len(doc)
# use matplotlib to plot words
for word, topics in word_topics:
ax.text(word_pos, 0.8, model.id2word[word],
horizontalalignment='center',
verticalalignment='center',
fontsize=20, color=topic_colors[topics[0]], # choose just the most likely topic
transform=ax.transAxes)
word_pos += 0.2 # to move the word for the next iter
ax.set_axis_off()
plt.show()
color_words_dict(model, dictionary)
Explanation: We see that the document word coloring is done just the way we expected. :)
Word-coloring a dictionary
We can do the same for the entire vocabulary, statically. The only difference would be in using get_term_topics, and iterating over the dictionary.
We will use a modified version of the coloring code when passing an entire dictionary.
End of explanation |
10,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using SGD on MNIST
Background
... about machine learning (a reminder from lesson 1)
The good news is that modern machine learning can be distilled down to a couple of key techniques that are of very wide applicability. Recent studies have shown that the vast majority of datasets can be best modeled with just two methods
Step1: Let's download, unzip, and format the data.
Step2: Normalize
Many machine learning algorithms behave better when the data is normalized, that is when the mean is 0 and the standard deviation is 1. We will subtract off the mean and standard deviation from our training set in order to normalize the data
Step3: Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set.
Step4: Look at the data
In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. To make it easier to work with, let's reshape it into 2d images from the flattened 1d format.
Helper methods
Step5: Plots
Step6: It's the digit 3! And that's stored in the y value
Step7: We can look at part of an image
Step8: Neural Networks
We will take a deep look at logistic regression and how we can program it ourselves. We are going to treat it as a specific example of a shallow neural net.
What is a neural network?
A neural network is an infinitely flexible function, consisting of layers. A layer is a linear function such as matrix multiplication followed by a non-linear function (the activation).
One of the tricky parts of neural networks is just keeping track of all the vocabulary!
Functions, parameters, and training
A function takes inputs and returns outputs. For instance, $f(x) = 3x + 5$ is an example of a function. If we input $2$, the output is $3\times 2 + 5 = 11$, or if we input $-1$, the output is $3\times -1 + 5 = 2$
Functions have parameters. The above function $f$ is $ax + b$, with parameters a and b set to $a=3$ and $b=5$.
Machine learning is often about learning the best values for those parameters. For instance, suppose we have the data points on the chart below. What values should we choose for $a$ and $b$?
<img src="images/sgd2.gif" alt="" style="width
Step9: We will begin with the highest level abstraction
Step10: Each input is a vector of size 28*28 pixels and our output is of size 10 (since there are 10 digits
Step11: Loss functions and metrics
In machine learning, the loss function or cost function represents the price paid for inaccuracy of predictions.
The loss associated with one example in binary classification is given by $-(y\log(p) + (1-y)\log(1-p))$, where $y$ is the true label and $p$ is the predicted probability.
Step12: Note that in our toy example above our accuracy is 100% and our loss is 0.16. Compare that to a loss of 0.03 that we are getting while predicting cats and dogs. Exercise
Step13: GPUs are great at handling lots of data at once (otherwise don't get performance benefit). We break the data up into batches, and that specifies how many samples from our dataset we want to send to the GPU at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that.
An epoch is completed once each data sample has been used once in the training loop.
Now that we have the parameters for our model, we can make predictions on our validation set.
Step14: Question
Step15: Let's check how accurate this approach is on our validation set. You may want to compare this against other implementations of logistic regression, such as the one in sklearn. In our testing, this simple pytorch version is faster and more accurate for this problem!
Step16: Let's see how some of our predictions look!
Step17: Defining Logistic Regression Ourselves
Above, we used pytorch's nn.Linear to create a linear layer. This is defined by a matrix multiplication and then an addition (these are also called affine transformations). Let's try defining this ourselves.
Just as Numpy has np.matmul for matrix multiplication (in Python 3, this is equivalent to the @ operator), PyTorch has torch.matmul.
Our PyTorch class needs two things: a constructor (which defines the parameters of the model) and a forward method (which describes how to use those parameters to compute predictions from an input).
Step18: We create our neural net and the optimizer. (We will use the same loss and metrics from above).
Step19: Let's look at our predictions on the first eight images
Step20: Aside about Broadcasting and Matrix Multiplication
Now let's dig in to what we were doing with torch.matmul
Step21: Broadcasting
The term broadcasting describes how arrays with different shapes are treated during arithmetic operations. The term broadcasting was first used by Numpy, although is now used in other libraries such as Tensorflow and Matlab; the rules can vary by library.
From the Numpy Documentation
Step22: How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a.
Remember above when we normalized our dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar)? We were using broadcasting!
Other examples of broadcasting with a scalar
Step23: Broadcasting a vector to a matrix
We can also broadcast a vector to a matrix
Step24: Although numpy does this automatically, you can also use the broadcast_to method
Step25: The numpy expand_dims method lets us convert the 1-dimensional array c into a 2-dimensional array (although one of those dimensions has value 1).
Step26: Broadcasting Rules
Step27: When operating on two arrays, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
Arrays do not need to have the same number of dimensions. For example, if you have a 256*256*3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible
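As a quick illustrative check of the 256*256*3 example (a sketch, not part of the lesson's code):
import numpy as np
image = np.ones((256, 256, 3))     # trailing axes: 256 x 256 x 3
scale = np.array([0.5, 1.0, 2.0])  # trailing axis:            3
print((image * scale).shape)       # (256, 256, 3): the 1-d array broadcasts over the last axis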
Step28: We get the same answer using torch.matmul
Step29: The following is NOT matrix multiplication. What is it?
Step30: From a machine learning perspective, matrix multiplication is a way of creating features by saying how much we want to weight each input column. Different features are different weighted averages of the input columns.
The website matrixmultiplication.xyz provides a nice visualization of matrix multiplcation
Step31: Writing Our Own Training Loop
As a reminder, this is what we did above to write our own logistic regression class (as a pytorch neural net)
Step32: Above, we are using the fastai method fit to train our model. Now we will try writing the training loop ourselves.
Review question
Step33: md is the ImageClassifierData object we created above. We want an iterable version of our training data (question
Step34: First, we will do a forward pass, which means computing the predicted y by passing x to the model.
Step35: We can check the loss
Step36: We may also be interested in the accuracy. We don't expect our first predictions to be very good, because the weights of our network were initialized to random values. Our goal is to see the loss decrease (and the accuracy increase) as we train the network
Step37: Now we will use the optimizer to calculate which direction to step in. That is, how should we update our weights to try to decrease the loss?
Pytorch has an automatic differentiation package (autograd) that takes derivatives for us, so we don't have to calculate the derivative ourselves! We just call .backward() on our loss to calculate the direction of steepest descent (the direction to lower the loss the most).
Step38: Now, let's make another set of predictions and check if our loss is lower
Step39: Note that we are using stochastic gradient descent, so the loss is not guaranteed to be strictly better each time. The stochasticity comes from the fact that we are using mini-batches; we are just using 64 images to calculate our prediction and update the weights, not the whole dataset.
Step40: If we run several iterations in a loop, we should see the loss decrease and the accuracy increase with time.
Step41: Put it all together in a training loop
Step42: Stochastic Gradient Descent
Nearly all of deep learning is powered by one very important algorithm | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.torch_imports import *
from fastai.io import *
path = 'data/mnist/'
Explanation: Using SGD on MNIST
Background
... about machine learning (a reminder from lesson 1)
The good news is that modern machine learning can be distilled down to a couple of key techniques that are of very wide applicability. Recent studies have shown that the vast majority of datasets can be best modeled with just two methods:
Ensembles of decision trees (i.e. Random Forests and Gradient Boosting Machines), mainly for structured data (such as you might find in a database table at most companies). We looked at random forests in depth as we analyzed the Blue Book for Bulldozers dataset.
Multi-layered neural networks learnt with SGD (i.e. shallow and/or deep learning), mainly for unstructured data (such as audio, vision, and natural language)
In this lesson, we will start on the 2nd approach (a neural network with SGD) by analyzing the MNIST dataset. You may be surprised to learn that logistic regression is actually an example of a simple neural net!
About The Data
In this lesson, we will be working with MNIST, a classic data set of hand-written digits. Solutions to this problem are used by banks to automatically recognize the amounts on checks, and by the postal service to automatically recognize zip codes on mail.
<img src="images/mnist.png" alt="" style="width: 60%"/>
A matrix can represent an image, by creating a grid where each entry corresponds to a different pixel.
<img src="images/digit.gif" alt="digit" style="width: 55%"/>
(Source: Adam Geitgey
)
Imports and data
We will be using the fastai library, which is still in pre-alpha. If you are accessing this course notebook, you probably already have it downloaded, as it is in the same Github repo as the course materials.
We use symbolic links (often called symlinks) to make it possible to import these from your current directory. For instance, I ran:
ln -s ../../fastai
in the terminal, within the directory I'm working in, home/fastai/courses/ml1.
End of explanation
import os
os.makedirs(path, exist_ok=True)
URL='http://deeplearning.net/data/mnist/'
FILENAME='mnist.pkl.gz'
def load_mnist(filename):
return pickle.load(gzip.open(filename, 'rb'), encoding='latin-1')
get_data(URL+FILENAME, path+FILENAME)
((x, y), (x_valid, y_valid), _) = load_mnist(path+FILENAME)
type(x), x.shape, type(y), y.shape
Explanation: Let's download, unzip, and format the data.
End of explanation
mean = x.mean()
std = x.std()
x=(x-mean)/std
mean, std, x.mean(), x.std()
Explanation: Normalize
Many machine learning algorithms behave better when the data is normalized, that is when the mean is 0 and the standard deviation is 1. We will subtract off the mean and standard deviation from our training set in order to normalize the data:
End of explanation
x_valid = (x_valid-mean)/std
x_valid.mean(), x_valid.std()
Explanation: Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set.
End of explanation
def show(img, title=None):
plt.imshow(img, cmap="gray")
if title is not None: plt.title(title)
def plots(ims, figsize=(12,6), rows=2, titles=None):
f = plt.figure(figsize=figsize)
cols = len(ims)//rows
for i in range(len(ims)):
sp = f.add_subplot(rows, cols, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i], cmap='gray')
Explanation: Look at the data
In any sort of data science work, it's important to look at your data, to make sure you understand the format, how it's stored, what type of values it holds, etc. To make it easier to work with, let's reshape it into 2d images from the flattened 1d format.
Helper methods
End of explanation
x_valid.shape
x_imgs = np.reshape(x_valid, (-1,28,28)); x_imgs.shape
show(x_imgs[0], y_valid[0])
y_valid.shape
Explanation: Plots
End of explanation
y_valid[0]
Explanation: It's the digit 3! And that's stored in the y value:
End of explanation
x_imgs[0,10:15,10:15]
show(x_imgs[0,10:15,10:15])
plots(x_imgs[:8], titles=y_valid[:8])
Explanation: We can look at part of an image:
End of explanation
from fastai.metrics import *
from fastai.model import *
from fastai.dataset import *
import torch.nn as nn
Explanation: Neural Networks
We will take a deep look at logistic regression and how we can program it ourselves. We are going to treat it as a specific example of a shallow neural net.
What is a neural network?
A neural network is an infinitely flexible function, consisting of layers. A layer is a linear function such as matrix multiplication followed by a non-linear function (the activation).
One of the tricky parts of neural networks is just keeping track of all the vocabulary!
Functions, parameters, and training
A function takes inputs and returns outputs. For instance, $f(x) = 3x + 5$ is an example of a function. If we input $2$, the output is $3\times 2 + 5 = 11$, or if we input $-1$, the output is $3\times -1 + 5 = 2$
Functions have parameters. The above function $f$ is $ax + b$, with parameters a and b set to $a=3$ and $b=5$.
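To make this concrete, here is the same function written directly in Python (a throwaway sketch, not part of the lesson's library code):
def f(x, a=3, b=5): return a*x + b
f(2), f(-1)  # (11, 2), matching the worked examples above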
Machine learning is often about learning the best values for those parameters. For instance, suppose we have the data points on the chart below. What values should we choose for $a$ and $b$?
<img src="images/sgd2.gif" alt="" style="width: 70%"/>
In the above gif (from fast.ai's deep learning course, intro to SGD notebook), an algorithm called stochastic gradient descent is being used to learn the best parameters to fit the line to the data (note: in the gif, the algorithm is stopping before the absolute best parameters are found). This process is called training or fitting.
Most datasets will not be well-represented by a line. We could use a more complicated function, such as $g(x) = ax^2 + bx + c + \sin d$. Now we have 4 parameters to learn: $a$, $b$, $c$, and $d$. This function is more flexible than $f(x) = ax + b$ and will be able to accurately model more datasets.
Neural networks take this to an extreme, and are infinitely flexible. They often have thousands, or even hundreds of thousands of parameters. However the core idea is the same as above. The neural network is a function, and we will learn the best parameters for modeling our data.
PyTorch
We will be using the open source deep learning library, fastai, which provides high level abstractions and best practices on top of PyTorch. This is the highest level, simplest way to get started with deep learning. Please note that fastai requires Python 3 to function. It is currently in pre-alpha, so items may move around and more documentation will be added in the future.
The fastai deep learning library uses PyTorch, a Python framework for dynamic neural networks with GPU acceleration, which was released by Facebook's AI team.
PyTorch has two overlapping, yet distinct, purposes. As described in the PyTorch documentation:
<img src="images/what_is_pytorch.png" alt="pytorch" style="width: 80%"/>
The neural network functionality of PyTorch is built on top of the Numpy-like functionality for fast matrix computations on a GPU. Although the neural network purpose receives way more attention, both are very useful. We'll implement a neural net from scratch today using PyTorch.
Further learning: If you are curious to learn what dynamic neural networks are, you may want to watch this talk by Soumith Chintala, Facebook AI researcher and core PyTorch contributor.
If you want to learn more PyTorch, you can try this introductory tutorial or this tutorial to learn by examples.
About GPUs
Graphical processing units (GPUs) allow for matrix computations to be done with much greater speed, as long as you have a library such as PyTorch that takes advantage of them. Advances in GPU technology in the last 10-20 years have been a key part of why neural networks are proving so much more powerful now than they did a few decades ago.
You may own a computer that has a GPU which can be used. For the many people that either don't have a GPU or have one that can't be easily accessed by Python, there are a few different options:
Don't use a GPU: For the sake of this tutorial, you don't have to use a GPU, although some computations will be slower.
Use crestle, through your browser: Crestle is a service that gives you an already set up cloud service with all the popular scientific and deep learning frameworks already pre-installed and configured to run on a GPU in the cloud. It is easily accessed through your browser. New users get 10 hours and 1 GB of storage for free. After this, GPU usage is 34 cents per hour. I recommend this option to those who are new to AWS or new to using the console.
Set up an AWS instance through your console: You can create an AWS instance with a GPU by following the steps in this fast.ai setup lesson. AWS charges 90 cents per hour for this.
Neural Net for Logistic Regression in PyTorch
End of explanation
net = nn.Sequential(
nn.Linear(28*28, 100),
nn.ReLU(),
nn.Linear(100, 100),
nn.ReLU(),
nn.Linear(100, 10),
nn.LogSoftmax()
).cuda()
Explanation: We will begin with the highest level abstraction: using a neural net defined by PyTorch's Sequential class.
End of explanation
md = ImageClassifierData.from_arrays(path, (x,y), (x_valid, y_valid))
loss=nn.NLLLoss()
metrics=[accuracy]
# opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9)
opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
Explanation: Each input is a vector of size 28*28 pixels and our output is of size 10 (since there are 10 digits: 0, 1, ..., 9).
We use the output of the final layer to generate our predictions. Often for classification problems (like MNIST digit classification), the final layer has the same number of outputs as there are classes. In that case, this is 10: one for each digit from 0 to 9. These can be converted to comparative probabilities. For instance, it may be determined that a particular hand-written image is 80% likely to be a 4, 18% likely to be a 9, and 2% likely to be a 3.
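As a rough illustration of that conversion (made-up scores, using numpy as np like elsewhere in this notebook), the softmax function defined formally later in this notebook turns the 10 raw outputs into values that sum to 1:
scores = np.array([0.2, 1.1, -0.5, 3.0, 0.0, -1.2, 0.3, 0.1, 0.9, 2.1])  # hypothetical raw outputs for the 10 digits
probs = np.exp(scores) / np.exp(scores).sum()  # now readable as comparative probabilities
probs.sum()  # 1.0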
End of explanation
def binary_loss(y, p):
return np.mean(-(y * np.log(p) + (1-y)*np.log(1-p)))
acts = np.array([1, 0, 0, 1])
preds = np.array([0.9, 0.1, 0.2, 0.8])
binary_loss(acts, preds)
Explanation: Loss functions and metrics
In machine learning, the loss function (or cost function) represents the price paid for inaccurate predictions.
The loss associated with one example in binary classification is given by:
-(y * log(p) + (1-y) * log (1-p))
where y is the true label of x and p is the probability predicted by our model that the label is 1.
End of explanation
fit(net, md, epochs=5, crit=loss, opt=opt, metrics=metrics)
set_lrs(opt, 1e-2)
fit(net, md, epochs=3, crit=loss, opt=opt, metrics=metrics)
fit(net, md, epochs=5, crit=loss, opt=opt, metrics=metrics)
set_lrs(opt, 1e-2)
fit(net, md, epochs=3, crit=loss, opt=opt, metrics=metrics)
t = [o.numel() for o in net.parameters()]
t, sum(t)
Explanation: Note that in our toy example above our accuracy is 100% and our loss is 0.16. Compare that to a loss of 0.03 that we are getting while predicting cats and dogs. Exercise: play with preds to get a lower loss for this example.
Example: Here is an example on how to compute the loss for one example of binary classification problem. Suppose for an image x with label 1 and your model gives it a prediction of 0.9. For this case the loss should be small because our model is predicting a label $1$ with high probability.
loss = -log(0.9) = 0.10
Now suppose x has label 0 but our model is predicting 0.9. In this case our loss should be much larger.
loss = -log(1-0.9) = 2.30
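A quick numeric check of these two cases (illustrative only, using numpy which is already imported):
-np.log(0.9)      # about 0.105: confident and correct, so a small loss
-np.log(1 - 0.9)  # about 2.303: confident and wrong, so a large loss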
Exercise: look at the other cases and convince yourself that this makes sense.
Exercise: how would you rewrite binary_loss using if instead of * and +?
Why not just maximize accuracy? The binary classification loss is an easier function to optimize.
For multi-class classification, we use negative log likelihood (also known as categorical cross entropy) which is exactly the same thing, but summed up over all classes.
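As a sketch of that multi-class version (the arrays here are hypothetical, not part of the lesson code): with one-hot targets and predicted probabilities, the loss is the average of -log(probability assigned to the true class):
preds = np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]])  # hypothetical predicted probabilities
targs = np.array([[0, 1, 0], [1, 0, 0]])              # one-hot true labels
-(targs * np.log(preds)).sum(axis=1).mean()           # categorical cross entropy, about 0.29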
Fitting the model
Fitting is the process by which the neural net learns the best parameters for the dataset.
End of explanation
preds = predict(net, md.val_dl)
preds.shape
Explanation: GPUs are great at handling lots of data at once (otherwise we don't get the performance benefit). We break the data up into batches, and that specifies how many samples from our dataset we want to send to the GPU at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that.
An epoch is completed once each data sample has been used once in the training loop.
Now that we have the parameters for our model, we can make predictions on our validation set.
End of explanation
preds.argmax(axis=1)[:5]
preds = preds.argmax(1)
Explanation: Question: Why does our output have length 10 (for each image)?
End of explanation
np.mean(preds == y_valid)
Explanation: Let's check how accurate this approach is on our validation set. You may want to compare this against other implementations of logistic regression, such as the one in sklearn. In our testing, this simple pytorch version is faster and more accurate for this problem!
End of explanation
plots(x_imgs[:8], titles=preds[:8])
Explanation: Let's see how some of our predictions look!
End of explanation
def get_weights(*dims): return nn.Parameter(torch.randn(dims)/dims[0])
def softmax(x): return torch.exp(x)/(torch.exp(x).sum(dim=1)[:,None])
class LogReg(nn.Module):
def __init__(self):
super().__init__()
self.l1_w = get_weights(28*28, 10) # Layer 1 weights
self.l1_b = get_weights(10) # Layer 1 bias
def forward(self, x):
x = x.view(x.size(0), -1)
x = (x @ self.l1_w) + self.l1_b # Linear Layer
x = torch.log(softmax(x)) # Non-linear (LogSoftmax) Layer
return x
Explanation: Defining Logistic Regression Ourselves
Above, we used pytorch's nn.Linear to create a linear layer. This is defined by a matrix multiplication and then an addition (these are also called affine transformations). Let's try defining this ourselves.
Just as Numpy has np.matmul for matrix multiplication (in Python 3, this is equivalent to the @ operator), PyTorch has torch.matmul.
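A quick sanity check of that equivalence (illustrative only; we import torch explicitly here so the snippet is self-contained):
import torch
A = np.random.randn(2, 3); B = np.random.randn(3, 4)
np.allclose(A @ B, torch.matmul(torch.from_numpy(A), torch.from_numpy(B)).numpy())  # True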
Our PyTorch class needs two things: a constructor (which says what the parameters are) and a forward method (which says how to calculate a prediction using those parameters). The method forward describes how the neural net converts inputs to outputs.
In PyTorch, the optimizer knows to try to optimize any attribute of type Parameter.
End of explanation
net2 = LogReg().cuda()
opt=optim.Adam(net2.parameters())
fit(net2, md, epochs=1, crit=loss, opt=opt, metrics=metrics)
dl = iter(md.trn_dl)
xmb,ymb = next(dl)
vxmb = Variable(xmb.cuda())
vxmb
preds = net2(vxmb).exp(); preds[:3]
preds = preds.data.max(1)[1]; preds
Explanation: We create our neural net and the optimizer. (We will use the same loss and metrics from above).
End of explanation
preds = predict(net2, md.val_dl).argmax(1)
plots(x_imgs[:8], titles=preds[:8])
np.mean(preds == y_valid)
Explanation: Let's look at our predictions on the first eight images:
End of explanation
a = np.array([10, 6, -4])
b = np.array([2, 8, 7])
a,b
a + b
(a < b).mean()
Explanation: Aside about Broadcasting and Matrix Multiplication
Now let's dig in to what we were doing with torch.matmul: matrix multiplication. First, let's start with a simpler building block: broadcasting.
Element-wise operations
Broadcasting and element-wise operations are supported in the same way by both numpy and pytorch.
Operators (+,-,*,/,>,<,==) are usually element-wise.
Examples of element-wise operations:
End of explanation
a
a > 0
Explanation: Broadcasting
The term broadcasting describes how arrays with different shapes are treated during arithmetic operations. The term broadcasting was first used by Numpy, although it is now used in other libraries such as Tensorflow and Matlab; the rules can vary by library.
From the Numpy Documentation:
The term broadcasting describes how numpy treats arrays with
different shapes during arithmetic operations. Subject to certain
constraints, the smaller array is “broadcast” across the larger
array so that they have compatible shapes. Broadcasting provides a
means of vectorizing array operations so that looping occurs in C
instead of Python. It does this without making needless copies of
data and usually leads to efficient algorithm implementations.
In addition to the efficiency of broadcasting, it allows developers to write less code, which typically leads to fewer errors.
This section was adapted from Chapter 4 of the fast.ai Computational Linear Algebra course.
Broadcasting with a scalar
End of explanation
a + 1
m = np.array([[1, 2, 3], [4,5,6], [7,8,9]]); m
2*m
Explanation: How are we able to do a > 0? 0 is being broadcast to have the same dimensions as a.
Remember above when we normalized our dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar)? We were using broadcasting!
Other examples of broadcasting with a scalar:
End of explanation
c = np.array([10,20,30]); c
m + c
c + m
Explanation: Broadcasting a vector to a matrix
We can also broadcast a vector to a matrix:
End of explanation
c.shape
np.broadcast_to(c[:,None], m.shape)
np.broadcast_to(np.expand_dims(c,0), (3,3))
c.shape
np.expand_dims(c,0).shape
Explanation: Although numpy does this automatically, you can also use the broadcast_to method:
End of explanation
np.expand_dims(c,0).shape
m + np.expand_dims(c,0)
np.expand_dims(c,1)
c[:, None].shape
m + np.expand_dims(c,1)
np.broadcast_to(np.expand_dims(c,1), (3,3))
Explanation: The numpy expand_dims method lets us convert the 1-dimensional array c into a 2-dimensional array (although one of those dimensions has value 1).
End of explanation
c[None]
c[:,None]
c[None] > c[:,None]
xg,yg = np.ogrid[0:5, 0:5]; xg,yg
xg+yg
Explanation: Broadcasting Rules
End of explanation
m, c
m @ c # np.matmul(m, c)
Explanation: When operating on two arrays, Numpy/PyTorch compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
Arrays do not need to have the same number of dimensions. For example, if you have a 256*256*3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible:
Image (3d array): 256 x 256 x 3
Scale (1d array): 3
Result (3d array): 256 x 256 x 3
The numpy documentation includes several examples of what dimensions can and can not be broadcast together.
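A minimal check of that image example (with made-up array values):
image = np.random.rand(256, 256, 3)  # hypothetical RGB image
scale = np.array([0.5, 1.0, 2.0])    # one factor per colour channel
(image * scale).shape                # (256, 256, 3)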
Matrix Multiplication
We are going to use broadcasting to define matrix multiplication.
End of explanation
T(m) @ T(c)
Explanation: We get the same answer using torch.matmul:
End of explanation
m,c
m * c
(m * c).sum(axis=1)
c
np.broadcast_to(c, (3,3))
Explanation: The following is NOT matrix multiplication. What is it?
End of explanation
n = np.array([[10,40],[20,0],[30,-5]]); n
m
m @ n
(m * n[:,0]).sum(axis=1)
(m * n[:,1]).sum(axis=1)
Explanation: From a machine learning perspective, matrix multiplication is a way of creating features by saying how much we want to weight each input column. Different features are different weighted averages of the input columns.
The website matrixmultiplication.xyz provides a nice visualization of matrix multiplication
End of explanation
# Our code from above
class LogReg(nn.Module):
def __init__(self):
super().__init__()
self.l1_w = get_weights(28*28, 10) # Layer 1 weights
self.l1_b = get_weights(10) # Layer 1 bias
def forward(self, x):
x = x.view(x.size(0), -1)
x = x @ self.l1_w + self.l1_b
return torch.log(softmax(x))
net2 = LogReg().cuda()
opt=optim.Adam(net2.parameters())
fit(net2, md, epochs=1, crit=loss, opt=opt, metrics=metrics)
Explanation: Writing Our Own Training Loop
As a reminder, this is what we did above to write our own logistic regression class (as a pytorch neural net):
End of explanation
net2 = LogReg().cuda()
loss=nn.NLLLoss()
learning_rate = 1e-3
optimizer=optim.Adam(net2.parameters(), lr=learning_rate)
Explanation: Above, we are using the fastai method fit to train our model. Now we will try writing the training loop ourselves.
Review question: What does it mean to train a model?
We will use the LogReg class we created, as well as the same loss function, learning rate, and optimizer as before:
End of explanation
dl = iter(md.trn_dl) # Data loader
Explanation: md is the ImageClassifierData object we created above. We want an iterable version of our training data (question: what does it mean for something to be iterable?):
End of explanation
xt, yt = next(dl)
y_pred = net2(Variable(xt).cuda())
Explanation: First, we will do a forward pass, which means computing the predicted y by passing x to the model.
End of explanation
l = loss(y_pred, Variable(yt).cuda())
print(l)
Explanation: We can check the loss:
End of explanation
np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
Explanation: We may also be interested in the accuracy. We don't expect our first predictions to be very good, because the weights of our network were initialized to random values. Our goal is to see the loss decrease (and the accuracy increase) as we train the network:
End of explanation
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights
# of the model)
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
# Calling the step function on an Optimizer makes an update to its parameters
optimizer.step()
Explanation: Now we will use the optimizer to calculate which direction to step in. That is, how should we update our weights to try to decrease the loss?
Pytorch has an automatic differentiation package (autograd) that takes derivatives for us, so we don't have to calculate the derivative ourselves! We just call .backward() on our loss to calculate the direction of steepest descent (the direction to lower the loss the most).
End of explanation
xt, yt = next(dl)
y_pred = net2(Variable(xt).cuda())
l = loss(y_pred, Variable(yt).cuda())
print(l)
Explanation: Now, let's make another set of predictions and check if our loss is lower:
End of explanation
np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
Explanation: Note that we are using stochastic gradient descent, so the loss is not guaranteed to be strictly better each time. The stochasticity comes from the fact that we are using mini-batches; we are just using 64 images to calculate our prediction and update the weights, not the whole dataset.
End of explanation
for t in range(100):
xt, yt = next(dl)
y_pred = net2(Variable(xt).cuda())
l = loss(y_pred, Variable(yt).cuda())
if t % 10 == 0:
accuracy = np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
print("loss: ", l.data[0], "\t accuracy: ", accuracy)
optimizer.zero_grad()
l.backward()
optimizer.step()
Explanation: If we run several iterations in a loop, we should see the loss decrease and the accuracy increase with time.
End of explanation
def score(x, y):
y_pred = to_np(net2(V(x)))
return np.sum(y_pred.argmax(axis=1) == to_np(y))/len(y_pred)
net2 = LogReg().cuda()
loss=nn.NLLLoss()
learning_rate = 1e-2
optimizer=optim.SGD(net2.parameters(), lr=learning_rate)
for epoch in range(1):
losses=[]
dl = iter(md.trn_dl)
for t in range(len(dl)):
# Forward pass: compute predicted y and loss by passing x to the model.
xt, yt = next(dl)
y_pred = net2(V(xt))
l = loss(y_pred, V(yt))
losses.append(l)
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights of the model)
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
# Calling the step function on an Optimizer makes an update to its parameters
optimizer.step()
val_dl = iter(md.val_dl)
val_scores = [score(*next(val_dl)) for i in range(len(val_dl))]
print(np.mean(val_scores))
Explanation: Put it all together in a training loop
End of explanation
net2 = LogReg().cuda()
loss_fn=nn.NLLLoss()
lr = 1e-2
w,b = net2.l1_w,net2.l1_b
for epoch in range(1):
losses=[]
dl = iter(md.trn_dl)
for t in range(len(dl)):
xt, yt = next(dl)
y_pred = net2(V(xt))
l = loss_fn(y_pred, Variable(yt).cuda())
losses.append(l)
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
w.data -= w.grad.data * lr
b.data -= b.grad.data * lr
w.grad.data.zero_()
b.grad.data.zero_()
val_dl = iter(md.val_dl)
val_scores = [score(*next(val_dl)) for i in range(len(val_dl))]
print(np.mean(val_scores))
Explanation: Stochastic Gradient Descent
Nearly all of deep learning is powered by one very important algorithm: stochastic gradient descent (SGD). SGD can be seen as an approximation of gradient descent (GD). In GD you have to run through all the samples in your training set to do a single iteration. In SGD you use only a subset of training samples to do the update for a parameter in a particular iteration. The subset used in each iteration is called a batch or minibatch.
Now, instead of using the optimizer, we will do the optimization ourselves!
End of explanation |
10,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data
Step1: In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
Step2: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note
Step3: Now, let's view the statistics of the raw dataframe.
Step4: Examine the class label imbalance
Let's look at the dataset imbalance
Step5: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
Step6: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
Step7: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note
Step8: Caution
Step9: Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent
Step10: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note
Step11: Test run the model
Step12: Optional
Step13: The correct bias to set can be derived from
Step14: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near
Step15: With this initialization the initial loss should be approximately
Step16: This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
Step17: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses
Step18: The above figure makes it clear
Step19: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
Step20: Note
Step21: Evaluate your model on the test dataset and display the results for the metrics you created above.
Step22: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
Step23: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
Step24: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note
Step25: Check training history
Step26: Evaluate metrics
Step27: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
Plot the ROC
Step28: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
Step29: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples
Step30: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
Step31: Each dataset provides (feature, label) pairs
Step32: Merge the two together using experimental.sample_from_datasets
Step33: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once
Step34: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note
Step35: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal
Step36: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
Step37: Re-check training history
Step38: Evaluate metrics
Step39: Plot the ROC | Python Code:
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
# Use matplotlib for visualizing the model
import matplotlib as mpl
import matplotlib.pyplot as plt
# Here we'll import Pandas and Numpy data processing libraries
import numpy as np
import pandas as pd
# Use seaborn for data visualization
import seaborn as sns
# Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning.
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
print("TensorFlow version: ",tf.version.VERSION)
Explanation: Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data:
Class weighting and
Oversampling
Introduction
This lab demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data.
PENDING LINK UPDATE: Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Start by importing the necessary libraries for this lab.
End of explanation
# Customize our Matplotlib visualization figure size and colors
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
Explanation: In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
End of explanation
file = tf.keras.utils
# pandas module read_csv() function reads the CSV file into a DataFrame object.
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
# `head()` function is used to get the first n rows of dataframe
raw_df.head()
Explanation: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and the page of the DefeatFraud project
End of explanation
# describe() is used to view some basic statistical details
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
Explanation: Now, let's view the statistics of the raw dataframe.
End of explanation
# Numpy bincount() method is used to obtain the frequency of each element provided inside a numpy array
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
Explanation: Examine the class label imbalance
Let's look at the dataset imbalance:
End of explanation
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
Explanation: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
End of explanation
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
# train_test_split() method split arrays or matrices into random train and test subsets
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
Explanation: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
End of explanation
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
# `np.clip()` clip (limit) the values in an array.
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
Explanation: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The StandardScaler is only fit using the train_features to be sure the model is not peeking at the validation or test sets.
End of explanation
# pandas DataFrame is two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns)
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
# Seaborn’s jointplot displays a relationship between 2 variables (bivariate) as well as the marginal (univariate) distribution of each variable on the plot's margins
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
# The suptitle() function in pyplot module of the matplotlib library is used to add a title to the figure.
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
Explanation: Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
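One way to do that is with a Keras preprocessing layer (a minimal sketch only; the exact module path depends on your TensorFlow version, and in some 2.x releases the layer lives under tf.keras.layers.experimental.preprocessing):
normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
normalizer.adapt(train_features)  # learns mean/variance from the training data
# Placing `normalizer` as the first layer of the model keeps the preprocessing statistics bundled with the exported model.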
Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
Do these distributions make sense?
Yes. You've normalized the input and these are mostly concentrated in the +/- 2 range.
Can you see the difference between the distributions?
Yes, the positive examples contain a much higher rate of extreme values.
End of explanation
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics = METRICS, output_bias=None):
if output_bias is not None:
# `tf.keras.initializers.Constant()` generates tensors with constant values.
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
# Creating a Sequential model
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
# Compile the model
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
Explanation: Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
End of explanation
EPOCHS = 100
BATCH_SIZE = 2048
# Stop training when a monitored metric has stopped improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
# Display a model summary
model = make_model()
model.summary()
Explanation: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* True vs. False and Positive vs. Negative
* Accuracy
* Precision and Recall
* ROC-AUC
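As a tiny worked example of the precision and recall formulas above (the counts here are made up, not from this model):
tp, fp, fn = 80, 10, 20      # hypothetical confusion-matrix counts
precision = tp / (tp + fp)   # 0.889
recall = tp / (tp + fn)      # 0.8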
Baseline model
Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
End of explanation
# use the model to do prediction with model.predict()
model.predict(train_features[:10])
Explanation: Test run the model:
End of explanation
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Explanation: Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence.
With the default bias initialization the loss should be about math.log(2) = 0.69314
End of explanation
# np.log() is a mathematical function that is used to calculate the natural logarithm.
initial_bias = np.log([pos/neg])
initial_bias
Explanation: The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
End of explanation
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
Explanation: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: pos/total = 0.0018
End of explanation
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Explanation: With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
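That value can be checked directly from the class counts computed earlier (illustrative check only):
p_0 = pos / total
-p_0 * np.log(p_0) - (1 - p_0) * np.log(1 - p_0)  # roughly 0.013 for this dataset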
End of explanation
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
Explanation: This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
End of explanation
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
# Fit data to model
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train '+label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val '+label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
Explanation: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
End of explanation
model = make_model()
model.load_weights(initial_weights)
# Fit data to model
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
Explanation: The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
Train the model
End of explanation
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
# subplots() which acts as a utility wrapper and helps in creating common layouts of subplots
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend()
plot_metrics(baseline_history)
Explanation: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
End of explanation
# TODO 1
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
Explanation: Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
Evaluate metrics
You can use a confusion matrix to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
End of explanation
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
Explanation: Evaluate your model on the test dataset and display the results for the metrics you created above.
End of explanation
def plot_roc(name, labels, predictions, **kwargs):
# Plot Receiver operating characteristic (ROC) curve.
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
Explanation: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
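For example, one way to pick an operating threshold from the same ROC data (a sketch only; the 90% recall target is an arbitrary illustration, not a recommendation):
fp_rate, tp_rate, thresholds = sklearn.metrics.roc_curve(test_labels, test_predictions_baseline)
thresholds[np.argmax(tp_rate >= 0.90)]  # first threshold that reaches 90% recall on the test set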
End of explanation
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
Explanation: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
End of explanation
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
Explanation: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like optimizers.SGD, may fail. The optimizer used here, optimizers.Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
End of explanation
plot_metrics(weighted_history)
Explanation: Check training history
End of explanation
# TODO 1
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
Explanation: Evaluate metrics
End of explanation
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
# Function legend() which is used to Place a legend on the axes
plt.legend(loc='lower right')
Explanation: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
Plot the ROC
End of explanation
# TODO 1
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
Explanation: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
End of explanation
# np.arange() returns evenly spaced values within a given interval.
ids = np.arange(len(pos_features))
# np.random.choice() draws random samples from a one-dimensional array and returns them as a numpy array.
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
# numpy.concatenate() concatenates a sequence of arrays along an existing axis.
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
# numpy.random.shuffle() modifies a sequence in-place by shuffling its contents.
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
Explanation: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
End of explanation
BUFFER_SIZE = 100000
def make_ds(features, labels):
# tf.data.Dataset.from_tensor_slices() builds a Dataset whose elements are slices of the given arrays.
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
Explanation: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
End of explanation
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
Explanation: Each dataset provides (feature, label) pairs:
End of explanation
# Samples elements at random from the datasets in `datasets`.
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
Explanation: Merge the two together using experimental.sample_from_datasets:
End of explanation
# `np.ceil()` function returns the ceil value of the input array elements
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
Explanation: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
Explanation: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
End of explanation
plot_metrics(resampled_history )
Explanation: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
Explanation: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
End of explanation
plot_metrics(resampled_history)
Explanation: Re-check training history
End of explanation
# TODO 1
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
Explanation: Evaluate metrics
End of explanation
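plot_cm is a plotting helper defined earlier in the full tutorial; it is not reproduced in this excerpt. A hedged sketch of what such a confusion-matrix helper typically looks like (the exact styling is an assumption):
```python
import seaborn as sns
from sklearn.metrics import confusion_matrix

def plot_cm(labels, predictions, p=0.5):
    # Threshold the predicted probabilities at p and draw the confusion matrix.
    cm = confusion_matrix(labels, predictions > p)
    plt.figure(figsize=(5, 5))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.title('Confusion matrix @{:.2f}'.format(p))
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')
```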
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
Explanation: Plot the ROC
End of explanation |
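Likewise, plot_roc comes from an earlier cell of the tutorial. A plausible sketch, assuming it simply wraps sklearn's roc_curve:
```python
from sklearn.metrics import roc_curve

def plot_roc(name, labels, predictions, **kwargs):
    # False/true positive rates over all thresholds, plotted as percentages.
    fp, tp, _ = roc_curve(labels, predictions)
    plt.plot(100 * fp, 100 * tp, label=name, linewidth=2, **kwargs)
    plt.xlabel('False positives [%]')
    plt.ylabel('True positives [%]')
    plt.grid(True)
    plt.gca().set_aspect('equal')
```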
10,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Considering Outliers and Novelty Detection
Step1: Finding more things that can go wrong with your data
Understanding the difference between anomalies and novel data
Examining a Fast and Simple Univariate Method
Step2: Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346
Step3: Leveraging on the Gaussian distribution
Step4: Making assumptions and checking out
Step5: Developing A Multivariate Approach
Using principal component analysis
Step6: Using cluster analysis
Step7: Automating outliers detection with SVM | Python Code:
import numpy as np
from scipy.stats.stats import pearsonr
np.random.seed(101)
normal = np.random.normal(loc=0.0, scale= 1.0, size=1000)
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(normal), np.median(normal), np.var(normal))
outlying = normal.copy()
outlying[0] = 50.0
print 'Mean: %0.3f Median: %0.3f Variance: %0.3f' % (np.mean(outlying), np.median(outlying), np.var(outlying))
print 'Pearson\'s correlation coefficient: %0.3f p-value: %0.3f' % pearsonr(normal,outlying)
Explanation: Considering Outliers and Novelty Detection
End of explanation
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
Explanation: Finding more things that can go wrong with your data
Understanding the difference between anomalies and novel data
Examining a Fast and Simple Univariate Method
End of explanation
X,y = diabetes.data, diabetes.target
import pandas as pd
pd.options.display.float_format = '{:.2f}'.format
df = pd.DataFrame(X)
print df.describe()
%matplotlib inline
import matplotlib.pyplot as plt
import pylab as pl
box_plots = df.boxplot(return_type='dict')
Explanation: Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346
End of explanation
from sklearn.preprocessing import StandardScaler
Xs = StandardScaler().fit_transform(X)
o_idx = np.where(np.abs(Xs)>3)
# .any(1) keeps each row only once, even when several of its features are outlying
print df[(np.abs(Xs)>3).any(1)]
Explanation: Leveraging on the Gaussian distribution
End of explanation
from scipy.stats.mstats import winsorize
Xs_w = winsorize(Xs, limits=(0.05, 0.95))
Xs_c = Xs.copy()
Xs_c[o_idx] = np.sign(Xs_c[o_idx]) * 3
Explanation: Making assumptions and checking out
End of explanation
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from pandas.tools.plotting import scatter_matrix
dim_reduction = PCA()
Xc = dim_reduction.fit_transform(scale(X))
print 'variance explained by the first 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:2]*100))
print 'variance explained by the last 2 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[-2:]*100))
df = pd.DataFrame(Xc, columns=['comp_'+str(j) for j in range(10)])
first_two = df.plot(kind='scatter', x='comp_0', y='comp_1', c='DarkGray', s=50)
last_two = df.plot(kind='scatter', x='comp_8', y='comp_9', c='DarkGray', s=50)
print 'variance explained by the first 3 components: %0.1f%%' % (sum(dim_reduction.explained_variance_ratio_[:3]*100))
scatter_first = scatter_matrix(pd.DataFrame(Xc[:,:3], columns=['comp1','comp2','comp3']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
scatter_last = scatter_matrix(pd.DataFrame(Xc[:,-2:], columns=['comp9','comp10']),
alpha=0.3, figsize=(15, 15), diagonal='kde', marker='o', grid=True)
outlying = (Xc[:,-1] < -0.3) | (Xc[:,-2] < -1.0)
print df[outlying]
Explanation: Developing A Multivariate Approach
Using principal component analysis
End of explanation
from sklearn.cluster import DBSCAN
DB = DBSCAN(eps=2.5, min_samples=25)
DB.fit(Xc)
from collections import Counter
print Counter(DB.labels_)
print df[DB.labels_==-1]
Explanation: Using cluster analysis
End of explanation
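The eps=2.5 and min_samples=25 values above are modelling choices. One common sanity check, shown here only as a sketch that is not part of the original text, is to look at each observation's distance to its 25th nearest neighbour and pick eps near the knee of that curve:
```python
from sklearn.neighbors import NearestNeighbors

# Distance from every observation to its 25th nearest neighbour (self included).
nn = NearestNeighbors(n_neighbors=25).fit(Xc)
distances, _ = nn.kneighbors(Xc)
kth_distance = np.sort(distances[:, -1])
plt.plot(kth_distance)
plt.xlabel('points sorted by distance')
plt.ylabel('distance to 25th neighbour')
plt.show()
```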
from sklearn import svm
outliers_fraction = 0.01 # assumed share of outlying observations
nu_estimate = 0.95 * outliers_fraction + 0.05
auto_detection = svm.OneClassSVM(kernel="rbf", gamma=0.01, degree=3, nu=nu_estimate)
auto_detection.fit(Xc)
evaluation = auto_detection.predict(Xc)
print df[evaluation==-1]
inliers = Xc[evaluation==+1,:]
outliers = Xc[evaluation==-1,:]
from matplotlib import pyplot as plt
import pylab as pl
inlying = plt.plot(inliers[:,0],inliers[:,1], 'o', markersize=2, color='g', alpha=1.0, label='inliers')
outlying = plt.plot(outliers[:,0],outliers[:,1], 'o', markersize=5, color='k', alpha=1.0, label='outliers')
plt.scatter(outliers[:,0],
outliers[:,1],
s=100, edgecolors="k", facecolors="none")
plt.xlabel('Component 1 ('+str(round(dim_reduction.explained_variance_ratio_[0],3))+')')
plt.ylabel('Component 2'+'('+str(round(dim_reduction.explained_variance_ratio_[1],3))+')')
plt.xlim([-7,7])
plt.ylim([-6,6])
plt.legend((inlying[0],outlying[0]),('inliers','outliers'),numpoints=1,loc='best')
plt.title("")
plt.show()
Explanation: Automating outliers detection with SVM
End of explanation |
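Beyond the hard -1/+1 labels, OneClassSVM also exposes a continuous score. A short sketch, not in the original text, for ranking the strongest outlier candidates:
```python
# Signed distance to the separating surface: the most negative scores are the
# observations the one-class SVM considers most anomalous.
scores = auto_detection.decision_function(Xc).ravel()
ranked = np.argsort(scores)
print(df.iloc[ranked[:5]])
```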
10,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connect to Source and test connection
Using 4 instances
Step1: Enumerate the parameter combinations
Step2: Specify the model changes
(Much like we do when setting up a PEST job)
Step3: Specify the result 'y' that we want to retrieve
Step4: Trigger the run...
Will run the 100 simulations across the four instances of Source (25 runs each)
Step5: The results | Python Code:
## Veneer started elsewhere (probably from a command line using veneer.manager.start)
ports = list(range(15004,15008))
ports
bv = BulkVeneer(ports)
v = bv.veneers[1]
network = v.network()
network.as_dataframe().plot()
network.outlet_nodes()
outlet_node = network.outlet_nodes()[0]['properties']['name'] + '$'
Explanation: Connect to Source and test connection
Using 4 instances
End of explanation
import numpy as np
import pandas as pd
N_RUNS=100
params = {
'x1':np.random.uniform(1.0,1500.0,size=N_RUNS),
'x2':np.random.uniform(1.0,5.0,size=N_RUNS),
'x3':np.random.uniform(1.0,200.0,size=N_RUNS),
'x4':np.random.uniform(0.5,3.0,size=N_RUNS)
}
params = pd.DataFrame(params)
params
Explanation: Enumerate the parameter combinations
End of explanation
runner = BatchRunner(bv.veneers)
v.model.catchment.runoff.set_param_values?
for p in ['x1','x2','x3','x4']:
runner.parameters.model.catchment.runoff.set_param_values(p,'$%s$'%p,fus=['Grazing'])
Explanation: Specify the model changes
(Much like we do when setting up a PEST job)
End of explanation
runner.retrieve('y').retrieve_multiple_time_series(criteria={'NetworkElement':outlet_node,'RecordingVariable':'Downstream Flow Volume'}).sum()[0]
%xmode Verbose
print(runner._retrieval.script())
Explanation: Specify the result 'y' that we want to retrieve
End of explanation
jobs,results = runner.run(params)
#jobs
Explanation: Trigger the run...
Will run the 100 simulations across the four instances of Source (25 runs each)
End of explanation
pd.DataFrame(results)
Explanation: The results
End of explanation |
10,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
t-SNE Tutorial
Step1: Clustered points in 2D
How t-SNE behaves on two widely separated clusters of points in 2D
Exercise 1
Write a function that shows the points of the dataset before and after the t-SNE transformation for a given cluster size and perplexity; use sklearn.manifold.TSNE initialized with PCA and leave the remaining parameters at their defaults.
t-SNE documentation
Helper functions
Step2: Using the function you wrote, show how t-SNE behaves for several different cluster sizes (in the range 10 - 400) and different perplexity values
Exercise 2
Show how the difference (e.g. as the mean squared error of the normalized data) between the original and the reduced pairwise distances between points depends on perplexity. On the plot, distinguish the distance differences in the near and in the far neighbourhood.
Check what the plot looks like for small cluster sizes (e.g. 10 - 50) and for large ones (e.g. 200 - 300). Show what happens to the points in different parts of the plot.
The perplexity range recommended by the author is 5 - 50. Check what happens to this difference outside that range.
Suggested t-SNE parameters
Step3: Exercise 3
Perform the dimensionality reduction with t-SNE
Display a graphical representation of the reduced dataset using the plot_2D function
By changing the parameters, try to reach a situation in which the blue points end up closer together in the centre.
Use a change of the learning_rate parameter for this.
The handwritten digits dataset
Loading the dataset
Step4: Exercise 4
Perform the reduction with t-SNE
Display a visualization of the reduced dataset using plot_digits(reduced dataset, original digits dataset)
The face images dataset
Loading the dataset | Python Code:
import numpy as np
import scipy
import sklearn
import matplotlib.pyplot as plt
from functions import *
from sklearn import manifold, datasets
from sklearn.manifold import TSNE
Explanation: t-SNE Tutorial
End of explanation
def plot_2D_cluster(cluster_size, perplexity):
pass
Explanation: Clustered points in 2D
How t-SNE behaves on two widely separated clusters of points in 2D
Exercise 1
Write a function that shows the points of the dataset before and after the t-SNE transformation for a given cluster size and perplexity; use sklearn.manifold.TSNE initialized with PCA and leave the remaining parameters at their defaults.
t-SNE documentation
Helper functions:
point_cluster(cluster_size) - generates two random clusters of 2D points of size cluster_size together with their colours
plot_2D(data, colors) - plots the points with coordinates data and the colours from colors
End of explanation
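For orientation only, one possible way to fill in the plot_2D_cluster stub from Exercise 1, treating the course's point_cluster and plot_2D helpers as behaving exactly as described above:
```python
def plot_2D_cluster(cluster_size, perplexity):
    # Generate two random 2D clusters and show them before the reduction.
    data, colors = point_cluster(cluster_size)
    plot_2D(data, colors)
    # Reduce with t-SNE (PCA initialization, remaining parameters left at defaults).
    reduced = TSNE(n_components=2, init='pca', perplexity=perplexity).fit_transform(data)
    plot_2D(reduced, colors)
```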
data, colors = point_cluster_multi(100)
plot_2D(data, colors)
Explanation: Using the function you wrote, show how t-SNE behaves for several different cluster sizes (in the range 10 - 400) and different perplexity values
Exercise 2
Show how the difference (e.g. as the mean squared error of the normalized data) between the original and the reduced pairwise distances between points depends on perplexity. On the plot, distinguish the distance differences in the near and in the far neighbourhood.
Check what the plot looks like for small cluster sizes (e.g. 10 - 50) and for large ones (e.g. 200 - 300). Show what happens to the points in different parts of the plot.
The perplexity range recommended by the author is 5 - 50. Check what happens to this difference outside that range.
Suggested t-SNE parameters:
- PCA initialization
- a learning_rate somewhat lower than the default, e.g. 500
Helpful functions:
pairwise_distances(points) - returns the matrix of Euclidean distances between the points, in which cell i,j is the distance between point i and point j
np.where(condition[, x, y]) - documentation
Points in many dimensions
A simple example showing how the smaller cluster of points is blown up as a result of the t-SNE reduction
Loading the dataset
End of explanation
digits = datasets.load_digits(n_class=6)
X = digits.data
Explanation: Exercise 3
Perform the dimensionality reduction with t-SNE
Display a graphical representation of the reduced dataset using the plot_2D function
By changing the parameters, try to reach a situation in which the blue points end up closer together in the centre.
Use a change of the learning_rate parameter for this.
The handwritten digits dataset
Loading the dataset
End of explanation
faces = datasets.fetch_olivetti_faces()
draw_faces(faces)
Explanation: Exercise 4
Perform the reduction with t-SNE
Display a visualization of the reduced dataset using plot_digits(reduced dataset, original digits dataset)
The face images dataset
Loading the dataset
End of explanation |
10,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Lane Lines on the Road
In this project, I will use OpenCV and Python to identify lane lines on the road. First, I develop the pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Note
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions that might be useful for this project are
Step10: Test Images
Make sure the pipeline works well on these images before trying the videos.
Step11: Build a Lane Finding Pipeline
Step12: Test on Videos
I can test my solution on two provided videos
Step13: Let's try the one with the solid white lane on the right first ...
Step15: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step17: Now for the one with the solid yellow lane on the left. This one's more tricky! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
import math
Explanation: Finding Lane Lines on the Road
In this project, I will use OpenCV and Python to identify lane lines on the road. First, I develop the pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/whiteCarLaneSwitch.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, γ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Helper Functions
End of explanation
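The draw_lines docstring above suggests averaging and extrapolating the detected segments to map out the full lane lines. The notebook keeps the simple version, but a hedged sketch of that extension, separating segments by slope sign and fitting one line per side, could look like this:
```python
def draw_extrapolated_lines(img, lines, y_top=320, color=[255, 0, 0], thickness=10):
    # Collect endpoints by slope sign: negative slope -> left lane, positive -> right lane.
    left_pts, right_pts = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments
            slope = (y2 - y1) / float(x2 - x1)
            if abs(slope) < 0.4:
                continue  # ignore nearly horizontal noise
            bucket = left_pts if slope < 0 else right_pts
            bucket.extend([(x1, y1), (x2, y2)])
    y_bottom = img.shape[0]
    for pts in (left_pts, right_pts):
        if len(pts) < 2:
            continue
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        # Fit x = m*y + b so the line can be evaluated at the chosen y values.
        m, b = np.polyfit(ys, xs, 1)
        cv2.line(img, (int(m * y_bottom + b), y_bottom), (int(m * y_top + b), y_top),
                 color, thickness)
```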
import os
os.listdir("test_images/")
#reading in an image
image = mpimg.imread('test_images/solidYellowCurve2.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Test Images
Make sure the pipeline works well on these images before trying the videos.
End of explanation
gray = grayscale(image)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 9
blur_gray = gaussian_blur(gray, kernel_size)
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 100
edges = canny(blur_gray, low_threshold, high_threshold)
# Next we'll create a masked edges image using cv2.fillPoly()
# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 50 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 1 #minimum number of pixels making up a line
max_line_gap = 250 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
# Iterate over the output "lines" and draw lines on a blank image
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),10)
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
# Draw the lines on the edge image
lines_edges = cv2.addWeighted(color_edges, 0.8, line_image, 1, 0)
plt.imshow(lines_edges)
plt.imshow(gray,cmap='gray')
cv2.imwrite('test_images_output/gray.jpg', gray)
plt.imshow(blur_gray,cmap='gray')
plt.imshow(edges,cmap='gray')
cv2.imwrite('test_images_output/edges.jpg', edges)
vertices
plt.imshow(masked_edges,cmap='gray')
cv2.imwrite('test_images_output/edges.jpg', masked_edges)
plt.imshow(lines)
lines
plt.imshow(color_edges)
cv2.imwrite('test_images_output/edges.jpg', color_edges)
plt.imshow(lines_edges)
# NOTE: The output you return should be a color image (3 channel) for processing video below
def process_image(image):
# grayscale conversion before processing causes more harm than good
# because sometimes the lane and road have same amount of luminance
# grayscaleImage = grayscale(image)
# Blur to avoid edges from noise
blurredImage = gaussian_blur(image, 9)
# Detect edges using canny
# high to low threshold factor of 3
# it is necessary to keep a linient threshold at the lower end
# to continue to detect faded lane markings
edgesImage = canny(blurredImage, 30, 90)
# mark out the trapezium region of interest
# dont' be too agressive as the car may drift laterally
# while driving, hence ample space is still left on both sides.
height = image.shape[0]
width = image.shape[1]
vertices = np.array([[(0, height), (450, 320), (490, 320), (width, height)]], dtype=np.int32)
# mask the canny output with trapezium region of interest
regionInterestImage = region_of_interest(edgesImage, vertices)
# parameters tuned using this method:
# threshold 30 by modifying it and seeing where slightly curved
# lane markings are barely detected
# min line length 20 by modifying and seeing where broken short
# lane markings are barely detected
# max line gap as 100 to allow plenty of room for the algo to
# connect spaced out lane markings
lineMarkedImage = hough_lines(regionInterestImage, 1, np.pi/180, 50, 10, 200)
# Test detected edges by uncommenting this
# return cv2.cvtColor(regionInterestImage, cv2.COLOR_GRAY2RGB)
# draw output on top of original
return weighted_img(lineMarkedImage, image)
out_image = process_image(image)
plt.imshow(out_image)
cv2.imwrite('test_images_output/solidYellowCurve2.jpg', out_image) # to save the image
Explanation: Build a Lane Finding Pipeline
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
#reading in an image
image = mpimg.imread('test_images/solidYellowCurve.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# # NOTE: The output you return should be a color image (3 channel) for processing video below
# def process_image(image):
# # grayscale conversion before processing causes more harm than good
# # because sometimes the lane and road have same amount of luminance
# # grayscaleImage = grayscale(image)
# # Blur to avoid edges from noise
# blurredImage = gaussian_blur(image, 5)
# # Detect edges using canny
# # high to low threshold factor of 3
# # it is necessary to keep a linient threshold at the lower end
# # to continue to detect faded lane markings
# edgesImage = canny(blurredImage, 50, 100)
# # mark out the trapezium region of interest
# # dont' be too agressive as the car may drift laterally
# # while driving, hence ample space is still left on both sides.
# height = image.shape[0]
# width = image.shape[1]
# vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)
# # mask the canny output with trapezium region of interest
# regionInterestImage = region_of_interest(edgesImage, vertices)
# # parameters tuned using this method:
# # threshold 30 by modifying it and seeing where slightly curved
# # lane markings are barely detected
# # min line length 20 by modifying and seeing where broken short
# # lane markings are barely detected
# # max line gap as 100 to allow plenty of room for the algo to
# # connect spaced out lane markings
# lineMarkedImage = hough_lines(regionInterestImage, 1, np.pi/180, 50, 100, 160)
# # Test detected edges by uncommenting this
# # return cv2.cvtColor(regionInterestImage, cv2.COLOR_GRAY2RGB)
# # draw output on top of original
# return weighted_img(lineMarkedImage, image)
# NOTE: The output you return should be a color image (3 channel) for processing video below
def process_image(image):
# grayscale conversion before processing causes more harm than good
# because sometimes the lane and road have same amount of luminance
# grayscaleImage = grayscale(image)
# Blur to avoid edges from noise
blurredImage = gaussian_blur(image, 9)
# Detect edges using canny
# high to low threshold factor of 3
# it is necessary to keep a linient threshold at the lower end
# to continue to detect faded lane markings
edgesImage = canny(blurredImage, 30, 90)
# mark out the trapezium region of interest
# dont' be too agressive as the car may drift laterally
# while driving, hence ample space is still left on both sides.
height = image.shape[0]
width = image.shape[1]
vertices = np.array([[(0, height), (450, 320), (490, 320), (width, height)]], dtype=np.int32)
# mask the canny output with trapezium region of interest
regionInterestImage = region_of_interest(edgesImage, vertices)
# parameters tuned using this method:
# threshold 30 by modifying it and seeing where slightly curved
# lane markings are barely detected
# min line length 20 by modifying and seeing where broken short
# lane markings are barely detected
# max line gap as 100 to allow plenty of room for the algo to
# connect spaced out lane markings
lineMarkedImage = hough_lines(regionInterestImage, 1, np.pi/180, 50, 10, 200)
# Test detected edges by uncommenting this
# return cv2.cvtColor(regionInterestImage, cv2.COLOR_GRAY2RGB)
# draw output on top of original
return weighted_img(lineMarkedImage, image)
Explanation: Test on Videos
I can test my solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
End of explanation
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## Uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## Uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation |
10,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multivariate Regression
Let's grab a small dataset of Blue Book car values
Step1: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.
Note how we use pandas.Categorical to convert textual category data (model name) into an ordinal number that we can work with.
This is actually a questionable thing to do in the real world - doing a regression on categorical data only works well if there is some inherent order to the categories! | Python Code:
import pandas as pd
df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')
df.head()
Explanation: Multivariate Regression
Let's grab a small dataset of Blue Book car values:
End of explanation
import statsmodels.api as sm
df['Model_ord'] = pd.Categorical(df.Model).codes
X = df[['Mileage', 'Model_ord', 'Doors']]
y = df[['Price']]
X1 = sm.add_constant(X)
est = sm.OLS(y, X1).fit()
est.summary()
y.groupby(df.Doors).mean()
Explanation: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.
Note how we use pandas.Categorical to convert textual category data (model name) into an ordinal number that we can work with.
This is actually a questionable thing to do in the real world - doing a regression on categorical data only works well if there is some inherent order to the categories!
End of explanation |
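Since the text flags the ordinal encoding of the model name as questionable, a common alternative, shown here as a sketch rather than part of the original lecture, is one-hot encoding with pd.get_dummies:
```python
# One-hot encode the model name instead of forcing an arbitrary order onto it.
# drop_first=True avoids the dummy-variable trap once a constant is added.
model_dummies = pd.get_dummies(df['Model'], prefix='Model', drop_first=True)
X2 = pd.concat([df[['Mileage', 'Doors']], model_dummies], axis=1).astype(float)
est2 = sm.OLS(y, sm.add_constant(X2)).fit()
est2.summary()
```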
10,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Modular neural nets
In the previous HW, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures.
In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this
Step2: Affine layer
Step3: Affine layer
Step4: ReLU layer
Step5: ReLU layer
Step6: Loss layers
Step7: Convolution layer
Step8: Convolution layer
Step9: Max pooling layer
Step10: Max pooling layer
Step11: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step12: Sandwich layers
There are a couple of common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Let's grad-check them to make sure that they work correctly | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Modular neural nets
In the previous HW, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures.
In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this:
```python
def two_layer_net(X, W1, b1, W2, b2, reg):
# Forward pass; compute scores
s1, fc1_cache = affine_forward(X, W1, b1)
a1, relu_cache = relu_forward(s1)
scores, fc2_cache = affine_forward(a1, W2, b2)
# Loss functions return data loss and gradients on scores
data_loss, dscores = svm_loss(scores, y)
# Compute backward pass
da1, dW2, db2 = affine_backward(dscores, fc2_cache)
ds1 = relu_backward(da1, relu_cache)
dX, dW1, db1 = affine_backward(ds1, fc1_cache)
# A real network would add regularization here
# Return loss and gradients
return loss, dW1, db1, dW2, db2
```
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done we will test your implementation by running the following:
End of explanation
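The assignment expects you to write affine_forward yourself in cs231n/layers.py. Purely for orientation, a straightforward version consistent with the test above might be:
```python
def affine_forward(x, w, b):
    # Flatten each example to a row vector, then compute x.dot(w) + b.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)
    return out, cache
```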
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be less than 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Affine layer: backward
Now implement the affine_backward function in the same file. You can test your implementation using numeric gradient checking.
End of explanation
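Again for orientation only, one possible affine_backward that matches the caching convention above:
```python
def affine_backward(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)      # restore the original input shape
    dw = x.reshape(N, -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db
```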
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: ReLU layer: forward
Implement the relu_forward function and test your implementation by running the following:
End of explanation
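A minimal relu_forward sketch consistent with this test:
```python
def relu_forward(x):
    out = np.maximum(0, x)   # elementwise max(0, x)
    cache = x
    return out, cache
```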
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
Explanation: ReLU layer: backward
Implement the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
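And the matching backward pass, again only as a sketch:
```python
def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)   # gradient flows only where the input was positive
    return dx
```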
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. It's still a good idea to test them to make sure they work correctly.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution layer: forward naive
We are now ready to implement the forward pass for a convolutional layer. Implement the function conv_forward_naive in the file cs231n/layers.py.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
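One deliberately slow reference implementation, looping over every output position, could look like the following sketch (the assignment asks for your own version):
```python
def conv_forward_naive(x, w, b, conv_param):
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i * stride:i * stride + HH, j * stride:j * stride + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache
```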
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution layer: backward naive
Next you need to implement the function conv_backward_naive in the file cs231n/layers.py. As usual, we will check your implementation with numeric gradient checking.
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling layer: forward naive
The last layer we need for a basic convolutional neural network is the max pooling layer. First implement the forward pass in the function max_pool_forward_naive in the file cs231n/layers.py.
End of explanation
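A naive forward pass sketch for max pooling, again only for orientation:
```python
def max_pool_forward_naive(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i * stride:i * stride + ph, j * stride:j * stride + pw]
            out[:, :, i, j] = window.max(axis=(2, 3))
    cache = (x, pool_param)
    return out, cache
```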
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling layer: backward naive
Implement the backward pass for a max pooling layer in the function max_pool_backward_naive in the file cs231n/layers.py. As always we check the correctness of the backward pass using numerical gradient checking.
End of explanation
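And a matching backward-pass sketch that routes each upstream gradient to the location of the maximum in its pooling window:
```python
def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i * stride:i * stride + ph, j * stride:j * stride + pw]
                    mask = (window == window.max())
                    dx[n, c, i * stride:i * stride + ph, j * stride:j * stride + pw] += mask * dout[n, c, i, j]
    return dx
```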
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16) # N, C, H, W = X.shape
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Sandwich layers
There are a couple of common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Let's grad-check them to make sure that they work correctly:
End of explanation |
10,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF Lattice Aggregate Function Models
Step2: Importing required packages
Step3: Downloading the Puzzles dataset
Step4: Extract and convert features and labels
Step5: Setting the default values used for training in this guide
Step6: Feature Configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. For aggregation models, these features will automatically be considered and properly handled as ragged.
Compute Quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
Step7: Defining Our Feature Configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
Step8: Aggregate Function Model
To construct a TFL premade model, first construct a model configuration from tfl.configs. An aggregate function model is constructed using the tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension. This is then followed by an optional output piecewise-linear calibration.
Step9: The output of each Aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first Aggregation layer
Step10: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
Step11: After training our model, we can evaluate it on our test set. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice pydot
Explanation: TF Lattice Aggregate Function Models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/aggregate_function_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/aggregate_function_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TFL Premade Aggregate Function Models are quick and easy ways to build TFL tf.keras.model instances for learning complex aggregation functions. This guide outlines the steps needed to construct a TFL Premade Aggregate Function Model and train/test it.
Setup
Installing TF Lattice package:
End of explanation
import tensorflow as tf
import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Importing required packages:
End of explanation
train_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()
test_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
Explanation: Downloading the Puzzles dataset:
End of explanation
# Features:
# - star_rating rating out of 5 stars (1-5)
# - word_count number of words in the review
# - is_amazon 1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo if the review includes a photo of the puzzle
# - num_helpful number of people that found this review helpful
# - num_reviews total number of reviews for this puzzle (we construct)
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',
'num_reviews'
]
def extract_features(dataframe, label_name):
# First we extract flattened features.
flattened_features = {
feature_name: dataframe[feature_name].values.astype(float)
for feature_name in feature_names[:-1]
}
# Construct mapping from puzzle name to feature.
star_rating = collections.defaultdict(list)
word_count = collections.defaultdict(list)
is_amazon = collections.defaultdict(list)
includes_photo = collections.defaultdict(list)
num_helpful = collections.defaultdict(list)
labels = {}
# Extract each review.
for i in range(len(dataframe)):
row = dataframe.iloc[i]
puzzle_name = row['puzzle_name']
star_rating[puzzle_name].append(float(row['star_rating']))
word_count[puzzle_name].append(float(row['word_count']))
is_amazon[puzzle_name].append(float(row['is_amazon']))
includes_photo[puzzle_name].append(float(row['includes_photo']))
num_helpful[puzzle_name].append(float(row['num_helpful']))
labels[puzzle_name] = float(row[label_name])
# Organize data into list of list of features.
names = list(star_rating.keys())
star_rating = [star_rating[name] for name in names]
word_count = [word_count[name] for name in names]
is_amazon = [is_amazon[name] for name in names]
includes_photo = [includes_photo[name] for name in names]
num_helpful = [num_helpful[name] for name in names]
num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]
labels = [labels[name] for name in names]
# Flatten num_reviews
flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]
# Convert data into ragged tensors.
star_rating = tf.ragged.constant(star_rating)
word_count = tf.ragged.constant(word_count)
is_amazon = tf.ragged.constant(is_amazon)
includes_photo = tf.ragged.constant(includes_photo)
num_helpful = tf.ragged.constant(num_helpful)
num_reviews = tf.ragged.constant(num_reviews)
labels = tf.constant(labels)
# Now we can return our extracted data.
return (star_rating, word_count, is_amazon, includes_photo, num_helpful,
num_reviews), labels, flattened_features
train_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')
test_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
Explanation: Extract and convert features and labels
End of explanation
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
Explanation: Setting the default values used for training in this guide:
End of explanation
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
Explanation: Feature Configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. For aggregation models, these features will automatically be considered and properly handled as ragged.
Compute Quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
End of explanation
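As a quick illustration (an aside we add here, not part of the original tutorial), the helper can be called on a toy array to see the kind of keypoints it produces:
import numpy as np
# Hypothetical toy feature with values 0..100, just to inspect the output format.
toy_feature = np.arange(101, dtype=float)
print(compute_quantiles(toy_feature, num_keypoints=5))
# Roughly: [  0.  25.  50.  75. 100.]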
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='star_rating',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['star_rating'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='word_count',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['word_count'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='is_amazon',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='includes_photo',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='num_helpful',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_helpful'], num_keypoints=5),
# Larger num_helpful indicating more trust in star_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="star_rating", trust_type="trapezoid"),
],
),
tfl.configs.FeatureConfig(
name='num_reviews',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_reviews'], num_keypoints=5),
)
]
Explanation: Defining Our Feature Configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
End of explanation
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
feature_configs=feature_configs,
middle_dimension=MIDDLE_DIM,
middle_lattice_size=MIDDLE_LATTICE_SIZE,
middle_calibration=True,
middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,
middle_monotonicity='increasing',
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=OUTPUT_KEYPOINTS,
output_initialization=np.linspace(
min_label, max_label, num=OUTPUT_KEYPOINTS))
# An AggregateFunction premade model constructed from the given model config.
aggregate_function_model = tfl.premade.AggregateFunction(
aggregate_function_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
aggregate_function_model, show_layer_names=False, rankdir='LR')
Explanation: Aggregate Function Model
To construct a TFL premade model, first construct a model configuration from tfl.configs. An aggregate function model is constructed using the tfl.configs.AggregateFunctionConfig. It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension. This is then followed by an optional output piecewise-linear calibration.
End of explanation
aggregation_layers = [
layer for layer in aggregate_function_model.layers
if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
Explanation: The output of each Aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first Aggregation layer:
End of explanation
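To make the "averaged output over the ragged inputs" idea concrete, here is a tiny hand-rolled sketch (our own illustration with made-up numbers, not TFL internals): for a puzzle with three reviews, the aggregation output is simply the mean of the per-review submodel outputs.
import numpy as np
per_review_outputs = np.array([0.2, 0.5, 0.8])  # made-up calibrated-lattice output, one per review
aggregated_output = per_review_outputs.mean()   # one aggregated value per puzzle
print(aggregated_output)  # 0.5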
aggregate_function_model.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
Explanation: Now, as with any other tf.keras.Model, we compile and fit the model to our data.
End of explanation
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
Explanation: After training our model, we can evaluate it on our test set.
End of explanation |
10,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-In Data Structures
Python also has several built-in compound types, which act as containers for other types.
| Type Name | Example |Description |
|-----------|---------------------------|---------------------------------------|
| list | [1, 2, 3] | Ordered collection |
| tuple | (1, 2, 3) | Immutable ordered collection |
| dict | {'a':1, 'b':2, 'c':3} | Unordered (key,value) mapping |
| set | {1, 2, 3} | Unordered collection of unique values |
Note, round, square, and curly brackets have distinct meanings.
Lists
Lists are the basic ordered and mutable data collection type in Python.
Step1: Lists have a number of useful properties and methods available to them.
Step2: One of the powerful features of Python's compound objects is that they can contain a mix of objects of any type.
Step3: This flexibility is a consequence of Python's dynamic type system.
Creating such a mixed sequence in a statically-typed language like C can be much more of a headache!
We see that lists can even contain other lists as elements.
Such type flexibility is an essential piece of what makes Python code relatively quick and easy to write.
List indexing and slicing
Python provides access to elements in compound types through indexing for single elements, and slicing for multiple elements.
Step4: Python uses zero-based indexing, so we can access the first and second elements using the following syntax
Step5: Elements at the end of the list can be accessed with negative numbers, starting from -1
Step6: You can visualize this indexing scheme this way
Step7: we can equivalently write
Step8: Similarly, if we leave out the last index, it defaults to the length of the list.
Thus, the last three elements can be accessed as follows
Step9: Finally, it is possible to specify a third integer that represents the step size; for example, to select every second element of the list, we can write
Step10: A particularly useful version of this is to specify a negative step, which will reverse the array
Step11: Both indexing and slicing can be used to set elements as well as access them.
The syntax is as you would expect
Step12: A very similar slicing syntax is also used in other data containers, such as NumPy arrays, as we will see in Day 2 sessions.
Tuples
Tuples are in many ways similar to lists, but they are defined with parentheses and cannot be changed!
Step13: They can also be defined without any brackets at all
Step14: Like the lists discussed before, tuples have a length, and individual elements can be extracted using square-bracket indexing
Step15: The main distinguishing feature of tuples is that they are immutable
Step16: Items are accessed and set via the indexing syntax used for lists and tuples, except here the index is not a zero-based order but a valid key in the dictionary
Step17: New items can be added to the dictionary using indexing as well
Step18: Keep in mind that dictionaries do not maintain any order for the input parameters.
Sets
The 4th basic data container is the set, which contains unordered collections of unique items.
They are defined much like lists and tuples, except they use the curly brackets of dictionaries.
They do not contain duplicate entries, and membership tests on them are significantly faster than on lists!
http://stackoverflow.com/questions/2831212/python-sets-vs-lists
Step19: If you're familiar with the mathematics of sets, you'll be familiar with operations like the union, intersection, difference, symmetric difference, and others.
Python's sets have all of these operations built-in, via methods or operators.
For each, we'll show the two equivalent methods | Python Code:
a = [2, 3, 5, 7]
Explanation: Built-In Data Structures
Python also has several built-in compound types, which act as containers for other types.
| Type Name | Example |Description |
|-----------|---------------------------|---------------------------------------|
| list | [1, 2, 3] | Ordered collection |
| tuple | (1, 2, 3) | Immutable ordered collection |
| dict | {'a':1, 'b':2, 'c':3} | Unordered (key,value) mapping |
| set | {1, 2, 3} | Unordered collection of unique values |
Note, round, square, and curly brackets have distinct meanings.
Lists
Lists are the basic ordered and mutable data collection type in Python.
End of explanation
# Length of a list
len(a)
# Append a value to the end
a.append(11)
a
# Addition concatenates lists
a + [13, 17, 19]
# sort() method sorts in-place
a = [2, 5, 1, 6, 3, 4]
a.sort()
a
Explanation: Lists have a number of useful properties and methods available to them.
End of explanation
a = [1, 'two', 3.14, [0, 3, 5]]
a
Explanation: One of the powerful features of Python's compound objects is that they can contain a mix of objects of any type.
End of explanation
a = [2, 3, 5, 7, 11]
Explanation: This flexibility is a consequence of Python's dynamic type system.
Creating such a mixed sequence in a statically-typed language like C can be much more of a headache!
We see that lists can even contain other lists as elements.
Such type flexibility is an essential piece of what makes Python code relatively quick and easy to write.
List indexing and slicing
Python provides access to elements in compound types through indexing for single elements, and slicing for multiple elements.
End of explanation
a[0]
a[1]
Explanation: Python uses zero-based indexing, so we can access the first and second elements using the following syntax:
End of explanation
a[-1]
a[-2]
Explanation: Elements at the end of the list can be accessed with negative numbers, starting from -1:
End of explanation
a[0:3]
Explanation: You can visualize this indexing scheme this way:
Here values in the list are represented by large numbers in the squares; list indices are represented by small numbers above and below.
In this case, a[2] returns 5, because that is the value at index 2.
Where indexing is a means of fetching a single value from the list, slicing is a means of accessing multiple values in sub-lists.
It uses a colon to indicate the start point (inclusive) and end point (non-inclusive) of the sub-array.
For example, to get the first three elements of the list, we can write:
End of explanation
a[:3]
Explanation: we can equivalently write:
End of explanation
a[-3:]
Explanation: Similarly, if we leave out the last index, it defaults to the length of the list.
Thus, the last three elements can be accessed as follows:
End of explanation
a[::2] # equivalent to a[0:len(a):2]
Explanation: Finally, it is possible to specify a third integer that represents the step size; for example, to select every second element of the list, we can write:
End of explanation
a[::-1]
Explanation: A particularly useful version of this is to specify a negative step, which will reverse the array:
End of explanation
a[0] = 100
print(a)
a[1:3] = [55, 56]
print(a)
Explanation: Both indexing and slicing can be used to set elements as well as access them.
The syntax is as you would expect:
End of explanation
t = (1, 2, 3)
Explanation: A very similar slicing syntax is also used in other data containers, such as NumPy arrays, as we will see in Day 2 sessions.
Tuples
Tuples are in many ways similar to lists, but they are defined with parentheses and cannot be changed!
End of explanation
t = 1, 2, 3
print(t)
Explanation: They can also be defined without any brackets at all:
End of explanation
len(t)
t[0]
Explanation: Like the lists discussed before, tuples have a length, and individual elements can be extracted using square-bracket indexing:
End of explanation
numbers = {'one':1, 'two':2, 'three':3}
# or
numbers = dict(one=1, two=2, three=3)
Explanation: The main distinguishing feature of tuples is that they are immutable: this means that once they are created, their size and contents cannot be changed:
Tuples are often used in a Python program; e.g. in functions that have multiple return values.
Dictionaries
Dictionaries are extremely flexible mappings of keys to values, and form the basis of much of Python's internal implementation.
They can be created via a comma-separated list of key:value pairs within curly braces:
End of explanation
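To back up the immutability claim above, here is a small aside (our own example, not from the original material): assigning to a tuple element raises a TypeError, and returning multiple values from a function is really just returning (and unpacking) a tuple.
t = (1, 2, 3)
try:
    t[0] = 4
except TypeError as err:
    print(err)            # 'tuple' object does not support item assignment
def min_max(values):      # a hypothetical helper used only for this illustration
    return min(values), max(values)
lo, hi = min_max([2, 3, 5, 7])
print(lo, hi)             # 2 7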
# Access a value via the key
numbers['two']
Explanation: Items are accessed and set via the indexing syntax used for lists and tuples, except here the index is not a zero-based order but a valid key in the dictionary:
End of explanation
# Set a new key:value pair
numbers['ninety'] = 90
print(numbers)
Explanation: New items can be added to the dictionary using indexing as well:
End of explanation
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7, 9}
a = {1, 1, 2}
a
Explanation: Keep in mind that dictionaries do not maintain any order for the input parameters.
Sets
The 4th basic data container is the set, which contains unordered collections of unique items.
They are defined much like lists and tuples, except they use the curly brackets of dictionaries.
They do not contain duplicate entries, and membership tests on them are significantly faster than on lists!
http://stackoverflow.com/questions/2831212/python-sets-vs-lists
End of explanation
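A quick, unscientific way to see the speed difference for yourself (our own aside; exact numbers will vary by machine):
import timeit
big_list = list(range(100000))
big_set = set(big_list)
print(timeit.timeit(lambda: 99999 in big_list, number=100))  # scans the list each time
print(timeit.timeit(lambda: 99999 in big_set, number=100))   # hash lookup, much faster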
# union: items appearing in either
primes | odds # with an operator
primes.union(odds) # equivalently with a method
# intersection: items appearing in both
primes & odds # with an operator
primes.intersection(odds) # equivalently with a method
# difference: items in primes but not in odds
primes - odds # with an operator
primes.difference(odds) # equivalently with a method
# symmetric difference: items appearing in only one set
primes ^ odds # with an operator
primes.symmetric_difference(odds) # equivalently with a method
Explanation: If you're familiar with the mathematics of sets, you'll be familiar with operations like the union, intersection, difference, symmetric difference, and others.
Python's sets have all of these operations built-in, via methods or operators.
For each, we'll show the two equivalent methods:
End of explanation |
10,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AMICI documentation example of the steady state solver logic
This is an example to document the internal logic of the steady state solver, which is used in preequilibration and postequilibration.
Steady states of dynamical system
Not every dynamical system needs to run into a steady state. Instead, it may exhibit
continuous growth, e.g., $$\dot{x} = x, \quad x_0 = 1$$
a finite-time blow up, e.g., $$\dot{x} = x^2, \quad x_0 = 1$$
oscillations, e.g., $$\ddot{x} = -x, \quad x_0 = 1$$
chaotic behaviour, e.g., the Lorenz attractor
If the considered dynamical system has a steady state for positive times, then integrating the ODE long enough will equilibrate the system to this steady state. However, this may be computationally more demanding than other approaches and may fail, if the maximum number of integration steps is exceeded before reaching the steady state.
In general, Newton's method will find the steady state faster than forward simulation. However, it only converges if started close enough to the steady state. Moreover, it will not work, if the dynamical system has conserved quantities which were not removed prior to steady state computation
Step1: The example model
We will use the example model model_constant_species.xml, which has conserved species. Those are automatically removed in the SBML import of AMICI, but they can also be kept in the model to demonstrate the failure of Newton's method due to a singular right hand side Jacobian.
Step2: Inferring the steady state of the system (postequilibration)
First, we want to demonstrate that Newton's method will fail with the unreduced model due to a singular right hand side Jacobian.
Step3: The fields posteq_status and posteq_numsteps in rdata tell us how postequilibration worked
Step4: However, the same logic works, if we use the reduced model.
For sufficiently many Newton steps, postequilibration is achieved by Newton's method in the first run. In this specific example, the steady state is found within one step.
Step5: Postequilibration with sensitivities
Equilibration is possible with forward and adjoint sensitivity analysis. As for the main simulation part, adjoint sensitivity analysis yields less information than forward sensitivity analysis, since no state sensitivities are computed. However, it has a better scaling behavior towards large model sizes.
Postequilibration with forward sensitivities
If forward sensitivity analysis is used, then state sensitivities at the timepoint np.inf will be computed. This can be done in (currently) two different ways
Step6: Postequilibration with adjoint sensitivities
Postequilibration also works with adjoint sensitivities. In this case, it is exploited that the ODE of the adjoint state $p$ will always have the steady state 0, since it's a linear ODE
Step7: If we carry out the same computation with a system that has a singular Jacobian, then posteq_numstepsB will not be 0 any more (a value of 0 would indicate that the linear system solve was used to compute backward postequilibration).
Now, integration is carried out and hence posteq_numstepsB > 0
Step8: Preequilibrating the model
Sometimes, we want to launch a solver run from a steady state which was inferred numerically, i.e., the system was preequilibrated. In order to do this with AMICI, we need to pass an ExpData object, which contains fixed parameters for the actual simulation and for preequilibration of the model.
Step9: We can also combine pre- and postequilibration.
Step10: Preequilibration with sensitivities
Beyond the need for an ExpData object, the steady state solver logic in preequilibration is the same as in postequilibration, also if sensitivities are requested. The computation will fail for singular Jacobians, if SteadyStateSensitivityMode is set to newtonOnly, or if not enough steps can be taken.
However, if forward simulation with steady state sensitivities is allowed, or if the Jacobian is not singular, it will work.
Preequilibration with forward sensitivities
Step11: Preequilibration with adjoint sensitivities
When using preequilibration, adjoint sensitivity analysis can be used for simulation. This is a particularly interesting case
Step12: As for postequilibration, adjoint preequilibration has an analytic solution (via the linear system), which will be preferred. If used for models with singular Jacobian, numerical integration will be carried out, which is indicated by preeq_numstepsB.
Step13: Controlling the error tolerances in pre- and postequilibration
When solving ODEs or DAEs, AMICI uses the default logic of CVODES and IDAS to control error tolerances. This means that error weights are computed based on the absolute error tolerances and the product of current state variables of the system and their respective relative error tolerances. This error combination is then controlled.
The respective tolerances for equilibrating a system with AMICI can be controlled by the user via the getter/setter functions [get|set][Absolute|Relative]ToleranceSteadyState[Sensi] | Python Code:
from IPython.display import Image
fig = Image(filename=('../../../documentation/gfx/steadystate_solver_workflow.png'))
fig
Explanation: AMICI documentation example of the steady state solver logic
This is an example to document the internal logic of the steady state solver, which is used in preequilibration and postequilibration.
Steady states of dynamical system
Not every dynamical system needs to run into a steady state. Instead, it may exhibit
continuous growth, e.g., $$\dot{x} = x, \quad x_0 = 1$$
a finite-time blow up, e.g., $$\dot{x} = x^2, \quad x_0 = 1$$
oscillations, e.g., $$\ddot{x} = -x, \quad x_0 = 1$$
chaotic behaviour, e.g., the Lorenz attractor
If the considered dynamical system has a steady state for positive times, then integrating the ODE long enough will equilibrate the system to this steady state. However, this may be computationally more demanding than other approaches and may fail, if the maximum number of integration steps is exceeded before reaching the steady state.
In general, Newton's method will find the steady state faster than forward simulation. However, it only converges if started close enough to the steady state. Moreover, it will not work, if the dynamical system has conserved quantities which were not removed prior to steady state computation: Conserved quantities will cause singularities in the Jacobian of the right hand side of the system, such that the linear problem within each step of Newton's method can not be solved.
Logic of the steady state solver
If AMICI has to equilibrate a dynamical system, it can do this either via simulating until the right hand side of the system becomes small, or it can try to find the steady state directly by Newton's method.
Amici decides automatically which approach is chosen and how forward or adjoint sensitivities are computed, if requested. However, the user can influence this behavior, if prior knowledge about the dynamical system is available.
The logic which AMICI will follow to equilibrate the system works as follows:
End of explanation
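As a rough mental model of that workflow, here is a runnable toy sketch in plain Python (our own illustration with a made-up one-dimensional model; it mimics the try-Newton / simulate / retry-Newton structure described above and is not AMICI's actual implementation):
def f(x):                          # toy right hand side with steady state x* = 2
    return -3.0 * (x - 2.0)

def newton(x, max_steps, tol=1e-10):
    for _ in range(max_steps):
        if abs(f(x)) < tol:
            return x, True
        x = x - f(x) / (-3.0)      # df/dx = -3 for this toy model
    return x, abs(f(x)) < tol

def simulate(x, max_steps, dt=0.01, tol=1e-10):
    for _ in range(max_steps):
        if abs(f(x)) < tol:
            return x, True
        x = x + dt * f(x)          # explicit Euler step
    return x, abs(f(x)) < tol

def find_steady_state(x0, newton_max_steps=10, sim_max_steps=10000):
    x, ok = newton(x0, newton_max_steps)     # 1) first Newton attempt
    if not ok:
        x, ok = simulate(x, sim_max_steps)   # 2) fall back to simulation
    if not ok:
        x, ok = newton(x, newton_max_steps)  # 3) second Newton attempt from the simulation endpoint
    return x if ok else None

print(find_steady_state(10.0))               # ~2.0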
import libsbml
import importlib
import amici
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
# SBML model we want to import
sbml_file = 'model_constant_species.xml'
# Name of the models that will also be the name of the python module
model_name = 'model_constant_species'
model_reduced_name = model_name + '_reduced'
# Directories to which the generated model code is written
model_output_dir = model_name
model_reduced_output_dir = model_reduced_name
# Read the model and give some output
sbml_reader = libsbml.SBMLReader()
sbml_doc = sbml_reader.readSBML(sbml_file)
sbml_model = sbml_doc.getModel()
dir(sbml_doc)
print('Species: ', [s.getId() for s in sbml_model.getListOfSpecies()])
print('\nReactions:')
for reaction in sbml_model.getListOfReactions():
reactants = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfReactants()])
products = ' + '.join(['%s %s'%(int(r.getStoichiometry()) if r.getStoichiometry() > 1 else '', r.getSpecies()) for r in reaction.getListOfProducts()])
reversible = '<' if reaction.getReversible() else ''
print('%3s: %10s %1s->%10s\t\t[%s]' % (reaction.getId(),
reactants,
reversible,
products,
libsbml.formulaToL3String(reaction.getKineticLaw().getMath())))
# Create an SbmlImporter instance for our SBML model
sbml_importer = amici.SbmlImporter(sbml_file)
# specify observables and constant parameters
constantParameters = ['synthesis_substrate', 'init_enzyme']
observables = {
'observable_product': {'name': '', 'formula': 'product'},
'observable_substrate': {'name': '', 'formula': 'substrate'},
}
sigmas = {'observable_product': 1.0, 'observable_substrate': 1.0}
# import the model
sbml_importer.sbml2amici(model_reduced_name,
model_reduced_output_dir,
observables=observables,
constant_parameters=constantParameters,
sigmas=sigmas)
sbml_importer.sbml2amici(model_name,
model_output_dir,
observables=observables,
constant_parameters=constantParameters,
sigmas=sigmas,
compute_conservation_laws=False)
# import the models and run some test simulations
model_reduced_module = amici.import_model_module(model_reduced_name, os.path.abspath(model_reduced_output_dir))
model_reduced = model_reduced_module.getModel()
model_module = amici.import_model_module(model_name, os.path.abspath(model_output_dir))
model = model_module.getModel()
# simulate model with conservation laws
model_reduced.setTimepoints(np.linspace(0, 2, 100))
solver_reduced = model_reduced.getSolver()
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced)
# simulate model without conservation laws
model.setTimepoints(np.linspace(0, 2, 100))
solver = model.getSolver()
rdata = amici.runAmiciSimulation(model, solver)
# plot trajectories
import amici.plotting
amici.plotting.plotStateTrajectories(rdata_reduced, model=model_reduced)
amici.plotting.plotObservableTrajectories(rdata_reduced, model=model_reduced)
amici.plotting.plotStateTrajectories(rdata, model=model)
amici.plotting.plotObservableTrajectories(rdata, model=model)
Explanation: The example model
We will use the example model model_constant_species.xml, which has conserved species. Those are automatically removed in the SBML import of AMICI, but they can also be kept in the model to demonstrate the failure of Newton's method due to a singular right hand side Jacobian.
End of explanation
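To see why a conserved quantity breaks Newton's method, consider a toy two-species interconversion with x1 + x2 conserved (our own illustration, unrelated to the actual SBML model): the right hand side Jacobian is rank-deficient, so the Newton linear system cannot be solved.
import numpy as np
k1, k2 = 2.0, 1.0
# d/dt [x1, x2] = [-k1*x1 + k2*x2, k1*x1 - k2*x2], so x1 + x2 is constant
J = np.array([[-k1,  k2],
              [ k1, -k2]])
print(np.linalg.matrix_rank(J))  # 1, i.e. rank-deficient
print(np.linalg.det(J))          # 0 (up to round-off), so J cannot be factorized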
# Call postequilibration by setting an infinity timepoint
model.setTimepoints(np.full(1, np.inf))
# set the solver
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setMaxSteps(1000)
rdata = amici.runAmiciSimulation(model, solver)
#np.set_printoptions(threshold=8, edgeitems=2)
for key, value in rdata.items():
print('%12s: ' % key, value)
Explanation: Inferring the steady state of the system (postequilibration)
First, we want to demonstrate that Newton's method will fail with the unreduced model due to a singular right hand side Jacobian.
End of explanation
# reduce maxsteps for integration
solver.setMaxSteps(100)
rdata = amici.runAmiciSimulation(model, solver)
print('Status of postequilibration:', rdata['posteq_status'])
print('Number of steps employed in postequilibration:', rdata['posteq_numsteps'])
Explanation: The fields posteq_status and posteq_numsteps in rdata tell us how postequilibration worked:
the first entry informs us about the status/number of steps in Newton's method (here 0, as Newton's method did not work)
the second entry tells us the status/how many integration steps were taken until steady state was reached
the third entry informs us about the status/number of Newton steps in the second launch, after simulation
The status is encoded as an Integer flag with the following meanings:
1: Successful run
0: Did not run
-1: Error: No further specification is given, the error message should give more information.
-2: Error: The method did not converge to a steady state within the maximum number of steps (Newton's method or simulation).
-3: Error: The Jacobian of the right hand side is singular (only Newton's method)
-4: Error: The damping factor in Newton's method was reduced until it met the lower bound without success (Newton's method only)
-5: Error: The model was simulated past the timepoint t=1e100 without finding a steady state. Therefore, it is likely that the model has no steady state for the given parameter vector.
Here, only the second entry of posteq_status contains a positive integer: The first run of Newton's method failed due to a Jacobian, which could not be factorized, but the second run (simulation) contains the entry 1 (success). The third entry is 0, thus Newton's method was not launched for a second time.
More information can be found in posteq_numsteps: Also here, only the second entry contains a positive integer, which is smaller than the maximum number of steps taken (<1000). Hence steady state was reached via simulation, which corresponds to the simulated time written to posteq_time.
We want to demonstrate a complete failure of inferring the steady state by reducing the number of integration steps to a lower value:
End of explanation
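For convenience, the integer flags can be decoded with a small helper of our own (based on the table above; this mapping is not an AMICI function):
STEADYSTATE_STATUS = {
    1: 'success',
    0: 'not run',
    -1: 'error (unspecified)',
    -2: 'no convergence within the maximum number of steps',
    -3: 'singular Jacobian (Newton only)',
    -4: 'Newton damping factor reached its lower bound',
    -5: 'no steady state found before t=1e100',
}
print([STEADYSTATE_STATUS.get(int(s), 'unknown') for s in rdata['posteq_status']])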
# Call postequilibration by setting an infinity timepoint
model_reduced.setTimepoints(np.full(1, np.inf))
# set the solver
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setMaxSteps(100)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced)
print('Status of postequilibration:', rdata_reduced['posteq_status'])
print('Number of steps employed in postequilibration:', rdata_reduced['posteq_numsteps'])
Explanation: However, the same logic works, if we use the reduced model.
For sufficiently many Newton steps, postequilibration is achieved by Newton's method in the first run. In this specific example, the steady state is found within one step.
End of explanation
# Call simulation with singular Jacobian and integrateIfNewtonFails mode
model.setTimepoints(np.full(1, np.inf))
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.integrateIfNewtonFails)
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setSensitivityMethod(amici.SensitivityMethod.forward)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
solver.setMaxSteps(10000)
rdata = amici.runAmiciSimulation(model, solver)
print('Status of postequilibration:', rdata['posteq_status'])
print('Number of steps employed in postequilibration:', rdata['posteq_numsteps'])
print('Computed state sensitivities:')
print(rdata['sx'][0,:,:])
# Call simulation with singular Jacobian and newtonOnly mode (will fail)
model.setTimepoints(np.full(1, np.inf))
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly)
solver = model.getSolver()
solver.setSensitivityMethod(amici.SensitivityMethod.forward)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
solver.setMaxSteps(10000)
rdata = amici.runAmiciSimulation(model, solver)
print('Status of postequilibration:', rdata['posteq_status'])
print('Number of steps employed in postequilibration:', rdata['posteq_numsteps'])
print('Computed state sensitivities:')
print(rdata['sx'][0,:,:])
# Call postequilibration by setting an infinity timepoint
model_reduced.setTimepoints(np.full(1, np.inf))
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly)
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
solver_reduced.setMaxSteps(1000)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced)
print('Status of postequilibration:', rdata_reduced['posteq_status'])
print('Number of steps employed in postequilibration:', rdata_reduced['posteq_numsteps'])
print('Computed state sensitivities:')
print(rdata_reduced['sx'][0,:,:])
Explanation: Postequilibration with sensitivities
Equilibration is possible with forward and adjoint sensitivity analysis. As for the main simulation part, adjoint sensitivity analysis yields less information than forward sensitivity analysis, since no state sensitivities are computed. However, it has a better scaling behavior towards large model sizes.
Postequilibration with forward sensitivities
If forward sensitivity analysis is used, then state sensitivities at the timepoint np.inf will be computed. This can be done in (currently) two different ways:
If the Jacobian $\nabla_x f$ of the right hand side $f$ is not (close to) singular, the most efficient approach will be solving the linear system of equations, which defines the steady state sensitivities:
$$0 = \dot{s}^x = (\nabla_x f) s^x + \frac{\partial f}{\partial \theta}\qquad \Rightarrow \qquad(\nabla_x f) s^x = - \frac{\partial f}{\partial \theta}$$
This approach will always be chosen by AMICI, if the option model.SteadyStateSensitivityMode is set to SteadyStateSensitivityMode.newtonOnly. Furthermore, it will also be chosen, if the steady state was found by Newton's method, as in this case, the Jacobian is at least not singular (but may still be poorly conditioned). A check for the condition number of the Jacobian is currently missing, but will soon be implemented.
If the Jacobian is poorly conditioned or singular, then the only way to obtain a reliable result will be integrating the state variables with state sensitivities until the norm of the right hand side becomes small. This approach will be chosen by AMICI, if the steady state was found by simulation and the option model.SteadyStateSensitivityMode is set to SteadyStateSensitivityMode.simulationFSA. This approach is numerically more stable, but the computation time for large models may be substantial.
Side remark:
A possible third way may consist in a (relaxed) Richardson iteration type approach, which interprets the entries of the right hand side $f$ as residuals and minimizes the squared residuals $\Vert f \Vert^2$ by a Levenberg-Marquart-type algorithm. This approach would also work for poorly conditioned (and even for singular Jacobians if additional constraints are implemented as Lagrange multipliers) while being faster than a long forward simulation.
We want to demonstrate both possibilities to find the steady state sensitivities, as well as the failure of their computation if the Jacobian is singular and the newtonOnly setting was used.
End of explanation
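For a non-singular Jacobian, the linear-system route amounts to a single call to a linear solver. Here is a numpy sketch with made-up matrices (an illustration of the algebra above, not AMICI's internal solver):
import numpy as np
J = np.array([[-3.0,  1.0],
              [ 0.0, -2.0]])       # toy, non-singular nabla_x f at the steady state
dfdp = np.array([[1.0, 0.0],
                 [0.0, 1.0]])      # toy df/dtheta, one column per parameter
sx = np.linalg.solve(J, -dfdp)     # solves (nabla_x f) sx = -df/dtheta
print(sx)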
# Call adjoint postequilibration by setting an infinity timepoint
# and create an edata object, which is needed for adjoint computation
edata = amici.ExpData(2, 0, 0, np.array([float('inf')]))
edata.setObservedData([1.8] * 2)
edata.fixedParameters = np.array([3., 5.])
model_reduced.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly)
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
solver_reduced.setMaxSteps(1000)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
print('Status of postequilibration:', rdata_reduced['posteq_status'])
print('Number of steps employed in postequilibration:', rdata_reduced['posteq_numsteps'])
print('Number of backward steps employed in postequilibration:', rdata_reduced['posteq_numstepsB'])
print('Computed gradient:', rdata_reduced['sllh'])
Explanation: Postequilibration with adjoint sensitivities
Postequilibration also works with adjoint sensitivities. In this case, it is exploited that the ODE of the adjoint state $p$ will always have the steady state 0, since it's a linear ODE:
$$\frac{d}{dt} p(t) = J(x^*, \theta)^T p(t),$$
where $x^*$ denotes the steady state of the system state.
Since the Eigenvalues of the Jacobian are negative and since the Jacobian at steady state is a fixed matrix, this system has a simple algebraic solution:
$$p(t) = e^{t J(x^*, \theta)^T} p_{\text{end}}.$$
As a consequence, the quadratures in adjoint computation also reduce to a matrix-vector product:
$$Q(x, \theta) = Q(x^*, \theta) = p_{\text{integral}} * \frac{\partial f}{\partial \theta}$$
with
$$p_{\text{integral}} = \int_0^\infty p(s) ds = (J(x^*, \theta)^T)^{-1} p_{\text{end}}.$$
However, this solution is given in terms of a linear system of equations defined by the transposed Jacobian of the right hand side. Hence, if the (transposed) Jacobian is singular, it is not applicable.
In this case, standard integration must be carried out.
End of explanation
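Written out as a numpy sketch with made-up numbers (again only an illustration of the formulas above, not the AMICI implementation), the whole backward problem collapses to one linear solve with the transposed Jacobian followed by a matrix-vector product:
import numpy as np
J = np.array([[-3.0,  1.0],
              [ 0.0, -2.0]])              # toy, non-singular Jacobian at the steady state
p_end = np.array([1.0, 0.5])              # adjoint state handed over from the main simulation
dfdp = np.array([[1.0, 0.0],
                 [0.0, 1.0]])             # toy df/dtheta
p_integral = np.linalg.solve(J.T, p_end)  # p_integral = (J^T)^{-1} p_end
quadrature = p_integral.dot(dfdp)         # contribution to the gradient
print(quadrature)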
# Call adjoint postequilibration with model with singular Jacobian
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly)
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
rdata = amici.runAmiciSimulation(model, solver, edata)
print('Status of postequilibration:', rdata['posteq_status'])
print('Number of steps employed in postequilibration:', rdata['posteq_numsteps'])
print('Number of backward steps employed in postequilibration:', rdata['posteq_numstepsB'])
print('Computed gradient:', rdata['sllh'])
Explanation: If we carry out the same computation with a system that has a singular Jacobian, then posteq_numstepsB will not be 0 any more (a value of 0 would indicate that the linear system solve was used to compute backward postequilibration).
Now, integration is carried out and hence posteq_numstepsB > 0
End of explanation
# create edata, with 3 timepoints and 2 observables:
edata = amici.ExpData(2, 0, 0,
np.array([0., 0.1, 1.]))
edata.setObservedData([1.8] * 6)
edata.fixedParameters = np.array([3., 5.])
edata.fixedParametersPreequilibration = np.array([0., 2.])
edata.reinitializeFixedParameterInitialStates = True
# create the solver object and run the simulation
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
amici.plotting.plotStateTrajectories(rdata_reduced, model = model_reduced)
amici.plotting.plotObservableTrajectories(rdata_reduced, model = model_reduced)
Explanation: Preequilibrating the model
Sometimes, we want to launch a solver run from a steady state which was inferred numerically, i.e., the system was preequilibrated. In order to do this with AMICI, we need to pass an ExpData object, which contains fixed parameters for the actual simulation and for preequilibration of the model.
End of explanation
# Change the last timepoint to an infinity timepoint.
edata.setTimepoints(np.array([0., 0.1, float('inf')]))
# run the simulation
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
Explanation: We can also combine pre- and postequilibration.
End of explanation
# No postquilibration this time.
edata.setTimepoints(np.array([0., 0.1, 1.]))
# create the solver object and run the simulation, singular Jacobian, enforce Newton solver for sensitivities
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.newtonOnly)
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setSensitivityMethod(amici.SensitivityMethod.forward)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
rdata = amici.runAmiciSimulation(model, solver, edata)
for key, value in rdata.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
# Singular Jacobian, use simulation
model.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.integrateIfNewtonFails)
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setSensitivityMethod(amici.SensitivityMethod.forward)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
rdata = amici.runAmiciSimulation(model, solver, edata)
for key, value in rdata.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
# Non-singular Jacobian, use Newton solver
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
for key, value in rdata_reduced.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
Explanation: Preequilibration with sensitivities
Beyond the need for an ExpData object, the steady state solver logic in preequilibration is the same as in postequilibration, also if sensitivities are requested. The computation will fail for singular Jacobians, if SteadyStateSensitivityMode is set to newtonOnly, or if not enough steps can be taken.
However, if forward simulation with steady state sensitivities is allowed, or if the Jacobian is not singular, it will work.
Preequilibration with forward sensitivities
End of explanation
# Non-singular Jacobian, use Newton solver and adjoints with initial state sensitivities
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
for key, value in rdata_reduced.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
print('Gradient:', rdata_reduced['sllh'])
# Non-singular Jacobian, use simulation solver and adjoints with initial state sensitivities
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(0)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
for key, value in rdata_reduced.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
print('Gradient:', rdata_reduced['sllh'])
# Non-singular Jacobian, use Newton solver and adjoints with fully adjoint preequilibration
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(10)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver_reduced.setSensitivityMethodPreequilibration(amici.SensitivityMethod.adjoint)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
rdata_reduced = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
for key, value in rdata_reduced.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
print('Gradient:', rdata_reduced['sllh'])
Explanation: Preequilibration with adjoint sensitivities
When using preequilibration, adjoint sensitivity analysis can be used for simulation. This is a particularly interesting case: Standard adjoint sensitivity analysis requires the initial state sensitivities sx0 to work, at least if data is given for finite (i.e., not exclusively postequilibration) timepoints:
For each parameter, a contribution to the gradient is given by the scalar product of the corresponding state sensitivity vector at timepoint $t=0$, (column in sx0), with the adjoint state ($p(t=0)$). Hence, the matrix sx0 is needed. This scalar product "closes the loop" from forward to adjoint simulation.
By default, if adjoint sensitivity analysis is called with preequilibration, the initial state sensitivities are computed in just the same way as if this way done for forward sensitivity analysis. The only difference in the internal logic is that, if the steady state gets inferred via simulation, a separate solver object is used in order to ensure that the steady state simulation does not interfere with the snapshotting of the forward trajectory from the actual time course.
However, also an adjoint version of preequilibration is possible: In this case, the "loop" from forward to adjoint simulation needs no closure: The simulation time is extended by preequilibration: forward from $t = -\infty$ to $t=0$, and after adjoint simulation also backward from $t=0$ to $t = -\infty$. Similar to adjoint postequilibration, the steady state of the adjoint state (at $t=-\infty$) is $p=0$, hence the scalar product (at $t=-\infty$) for the initial state sensitivities of preequilibration with the adjoint state vanishes. Instead, this gradient contribution is covered by additional quadratures $\int_{-\infty}^0 p(s) ds \cdot \frac{\partial f}{\partial \theta}$. In order to compute these quadratures correctly, the adjoint state from the main adjoint simulation must be passed on to the initial adjoint state of backward preequilibration.
However, as the adjoint state must be passed on from backward computation to preequilibration, it is currently not allowed to alter (reinitialize) states of the model at $t=0$, unless these states are constant, as otherwise this alteration would lead to a discontinuity in the adjoints state as well and hence to an incorrect gradient.
End of explanation
# Non-singular Jacobian, use Newton solver and adjoints with fully adjoint preequilibration
solver = model.getSolver()
solver.setNewtonMaxSteps(10)
solver.setSensitivityMethod(amici.SensitivityMethod.adjoint)
solver.setSensitivityMethodPreequilibration(amici.SensitivityMethod.adjoint)
solver.setSensitivityOrder(amici.SensitivityOrder.first)
rdata = amici.runAmiciSimulation(model, solver, edata)
for key, value in rdata.items():
if key[0:6] == 'preeq_':
print('%20s: ' % key, value)
print('Gradient:', rdata['sllh'])
Explanation: As for postequilibration, adjoint preequilibration has an analytic solution (via the linear system), which will be preferred. If used for models with singular Jacobian, numerical integration will be carried out, which is indicated by preeq_numstepsB.
End of explanation
# Non-singular Jacobian, use simulation
model_reduced.setSteadyStateSensitivityMode(amici.SteadyStateSensitivityMode.integrateIfNewtonFails)
solver_reduced = model_reduced.getSolver()
solver_reduced.setNewtonMaxSteps(0)
solver_reduced.setSensitivityMethod(amici.SensitivityMethod.forward)
solver_reduced.setSensitivityOrder(amici.SensitivityOrder.first)
# run with lax tolerances
solver_reduced.setRelativeToleranceSteadyState(1e-2)
solver_reduced.setAbsoluteToleranceSteadyState(1e-3)
solver_reduced.setRelativeToleranceSteadyStateSensi(1e-2)
solver_reduced.setAbsoluteToleranceSteadyStateSensi(1e-3)
rdata_reduced_lax = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
# run with strict tolerances
solver_reduced.setRelativeToleranceSteadyState(1e-12)
solver_reduced.setAbsoluteToleranceSteadyState(1e-16)
solver_reduced.setRelativeToleranceSteadyStateSensi(1e-12)
solver_reduced.setAbsoluteToleranceSteadyStateSensi(1e-16)
rdata_reduced_strict = amici.runAmiciSimulation(model_reduced, solver_reduced, edata)
# compare ODE outputs
print('\nODE solver steps, which were necessary to reach steady state:')
print('lax tolerances: ', rdata_reduced_lax['preeq_numsteps'])
print('strict tolerances: ', rdata_reduced_strict['preeq_numsteps'])
print('\nsimulation time corresponding to steady state:')
print(rdata_reduced_lax['preeq_t'])
print(rdata_reduced_strict['preeq_t'])
print('\ncomputation time to reach steady state:')
print(rdata_reduced_lax['preeq_cpu_time'])
print(rdata_reduced_strict['preeq_cpu_time'])
Explanation: Controlling the error tolerances in pre- and postequilibration
When solving ODEs or DAEs, AMICI uses the default logic of CVODES and IDAS to control error tolerances. This means that error weights are computed based on the absolute error tolerances and the product of current state variables of the system and their respective relative error tolerances. This error combination is then controlled.
The respective tolerances for equilibrating a system with AMICI can be controlled by the user via the getter/setter functions [get|set][Absolute|Relative]ToleranceSteadyState[Sensi]:
End of explanation |
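Concretely, the CVODES/IDAS-style weighting described above looks roughly like this (a hedged numpy sketch of the idea, not the exact SUNDIALS/AMICI code):
import numpy as np
x = np.array([1e3, 1e-2, 5.0])                  # current state variables
err = np.array([1e-1, 1e-7, 1e-4])              # some local error estimate
atol, rtol = 1e-3, 1e-2
weights = 1.0 / (rtol * np.abs(x) + atol)       # per-state error weights
wrms = np.sqrt(np.mean((err * weights) ** 2))   # weighted root-mean-square norm
print(wrms, 'accepted' if wrms < 1.0 else 'rejected')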
10,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a better model
Step1: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions
Step2: ...and load our fine-tuned weights.
Step3: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer
Step4: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step5: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step6: And fit the model in the usual way
Step7: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment)
Step8: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
Step9: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
Step10: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it
Step11: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable
Step12: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Step13: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this | Python Code:
from theano.sandbox import cuda
%matplotlib inline
from imp import reload
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
import keras.backend as K
K.set_image_dim_ordering('th')
batch_size=64
Explanation: Training a better model
End of explanation
model = vgg_ft(2)
Explanation: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
How is this possible?
Is this desirable?
The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.
The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.
So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!
(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
Removing dropout
Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
End of explanation
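To make the dropout mechanism described above concrete, here is a minimal numpy sketch of the random zeroing only (our own illustration; real dropout layers also handle rescaling of the surviving activations, which we omit here):
import numpy as np
activations = np.array([0.2, 1.5, 0.7, 3.1, 0.9])
p = 0.5
mask = np.random.binomial(1, 1 - p, size=activations.shape)  # keep each unit with probability 1-p
train_time = activations * mask    # training: roughly half the activations are zeroed
test_time = activations            # validation/test: nothing is dropped
print(train_time)
print(test_time)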
model.load_weights(model_path+'finetune3.h5')
Explanation: ...and load our fine-tuned weights.
End of explanation
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
Explanation: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
End of explanation
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
Explanation: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
End of explanation
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
Explanation: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
End of explanation
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
Explanation: And fit the model in the usual way:
End of explanation
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
        height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
Explanation: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
Add more data
Use data augmentation
Use architectures that generalize well
Add regularization
Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.
Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)
We recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.
About data augmentation
Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:
End of explanation
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('cat.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
Explanation: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
End of explanation
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
Explanation: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
End of explanation
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
Explanation: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
End of explanation
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
Explanation: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:
End of explanation
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
Explanation: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
End of explanation
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
Explanation: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:
1. Adding batchnorm to a model can result in 10x or more improvements in training speed
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
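A minimal numpy sketch of the forward pass may help make these two steps concrete (this is schematic only - Keras' BatchNormalization layer additionally keeps running statistics for use at test time):
```
# Schematic batchnorm forward pass for one layer and one mini-batch.
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                       # per-feature mean over the batch
    var = x.var(axis=0)                       # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)     # step 1: normalize
    return gamma * x_hat + beta               # step 2: learnt rescale and shift

x = np.random.randn(64, 4096) * 3 + 10        # poorly scaled activations
out = batchnorm_forward(x, gamma=np.ones(4096), beta=np.zeros(4096))
print(out.mean(), out.std())                  # approximately 0 and 1
```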
Adding batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
End of explanation |
10,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Graph Convolutions
In this tutorial we will learn more about "graph convolutions." These are one of the most powerful deep learning tools for working with molecular data. The reason for this is that molecules can be naturally viewed as graphs.
Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Step1: What are Graph Convolutions?
Consider a standard convolutional neural network (CNN) of the sort commonly used to process images. The input is a grid of pixels. There is a vector of data values for each pixel, for example the red, green, and blue color channels. The data passes through a series of convolutional layers. Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel. Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns. Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions.
Graph convolutions are similar, but they operate on a graph. They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents). Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node.
Training a GraphConvModel
Let's use the MoleculeNet suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to 'GraphConv'. The MoleculeNet call returns a training set, a validation set, and a test set for us to use. It also returns tasks, a list of the task names, and transformers, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.)
Step2: Let's now train a graph convolutional network on this dataset. DeepChem has the class GraphConvModel that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset.
Step3: Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. dc.metrics holds a collection of metrics already. For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which measures the tradeoff between precision and recall). Luckily, the ROC-AUC score is already available in DeepChem.
To measure the performance of the model under this metric, we can use the convenience function model.evaluate().
Step4: The results are pretty good, and GraphConvModel is very easy to use. But what's going on under the hood? Could we build GraphConvModel ourselves? Of course! DeepChem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from DeepChem.
GraphConv layer
Step5: We can now see more clearly what is happening. There are two convolutional blocks, each consisting of a GraphConv, followed by batch normalization, followed by a GraphPool to do max pooling. We finish up with a dense layer, another batch normalization, a GraphGather to combine the data from all the different nodes, and a final dense layer to produce the global output.
Let's now create the DeepChem model which will be a wrapper around the Keras model that we just created. We will also specify the loss function so the model knows the objective to minimize.
Step6: What are the inputs to this model? A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other. In fact, if we inspect the dataset we see that the feature array contains Python objects of type ConvMol.
Step7: Models expect arrays of numbers as their inputs, not Python objects. We must convert the ConvMol objects into the particular set of arrays expected by the GraphConv, GraphPool, and GraphGather layers. Fortunately, the ConvMol class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays.
The following code creates a Python generator that given a batch of data generates the lists of inputs, labels, and weights whose values are Numpy arrays. atom_features holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in TensorFlow. degree_slice is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree. membership determines the membership of atoms in molecules (atom i belongs to molecule membership[i]). deg_adjs is a list that contains adjacency lists grouped by atom degree. For more details, check out the code.
Step8: Now, we can train the model using fit_generator(generator) which will use the generator we've defined to train the model.
Step9: Now that we have trained our graph convolutional method, let's evaluate its performance. We again have to use our defined generator to evaluate model performance. | Python Code:
!pip install --pre deepchem
Explanation: Introduction to Graph Convolutions
In this tutorial we will learn more about "graph convolutions." These are one of the most powerful deep learning tools for working with molecular data. The reason for this is that molecules can be naturally viewed as graphs.
Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
Explanation: What are Graph Convolutions?
Consider a standard convolutional neural network (CNN) of the sort commonly used to process images. The input is a grid of pixels. There is a vector of data values for each pixel, for example the red, green, and blue color channels. The data passes through a series of convolutional layers. Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel. Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns. Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions.
Graph convolutions are similar, but they operate on a graph. They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents). Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node.
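As a toy illustration of this idea (this is not DeepChem's GraphConv layer, just a sketch of the neighbor-mixing step with made-up weights), one round of graph convolution on a five-atom chain might look like:
```
# Toy sketch: each atom's new feature vector mixes its own features with those of
# its bonded neighbors, using two (random, untrained) weight matrices.
import numpy as np

node_feats = np.random.rand(5, 8)                  # 5 atoms, 8 features each
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]     # adjacency list of a 5-atom chain
W_self, W_nbr = np.random.rand(8, 16), np.random.rand(8, 16)

new_feats = np.stack([
    np.tanh(node_feats[i] @ W_self + sum(node_feats[j] for j in neighbors[i]) @ W_nbr)
    for i in range(len(node_feats))
])
print(new_feats.shape)                             # (5, 16): one new vector per atom
```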
Training a GraphConvModel
Let's use the MoleculeNet suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to 'GraphConv'. The MoleculeNet call returns a training set, a validation set, and a test set for us to use. It also returns tasks, a list of the task names, and transformers, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.)
End of explanation
n_tasks = len(tasks)
model = dc.models.GraphConvModel(n_tasks, mode='classification')
model.fit(train_dataset, nb_epoch=50)
Explanation: Let's now train a graph convolutional network on this dataset. DeepChem has the class GraphConvModel that wraps a standard graph convolutional architecture underneath the hood for user convenience. Let's instantiate an object of this class and train it on our dataset.
End of explanation
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('Training set score:', model.evaluate(train_dataset, [metric], transformers))
print('Test set score:', model.evaluate(test_dataset, [metric], transformers))
Explanation: Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. dc.metrics holds a collection of metrics already. For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which measures the tradeoff between precision and recall). Luckily, the ROC-AUC score is already available in DeepChem.
To measure the performance of the model under this metric, we can use the convenience function model.evaluate().
End of explanation
from deepchem.models.layers import GraphConv, GraphPool, GraphGather
import tensorflow as tf
import tensorflow.keras.layers as layers
batch_size = 100
class MyGraphConvModel(tf.keras.Model):
def __init__(self):
super(MyGraphConvModel, self).__init__()
self.gc1 = GraphConv(128, activation_fn=tf.nn.tanh)
self.batch_norm1 = layers.BatchNormalization()
self.gp1 = GraphPool()
self.gc2 = GraphConv(128, activation_fn=tf.nn.tanh)
self.batch_norm2 = layers.BatchNormalization()
self.gp2 = GraphPool()
self.dense1 = layers.Dense(256, activation=tf.nn.tanh)
self.batch_norm3 = layers.BatchNormalization()
self.readout = GraphGather(batch_size=batch_size, activation_fn=tf.nn.tanh)
self.dense2 = layers.Dense(n_tasks*2)
self.logits = layers.Reshape((n_tasks, 2))
self.softmax = layers.Softmax()
def call(self, inputs):
gc1_output = self.gc1(inputs)
batch_norm1_output = self.batch_norm1(gc1_output)
gp1_output = self.gp1([batch_norm1_output] + inputs[1:])
gc2_output = self.gc2([gp1_output] + inputs[1:])
    batch_norm2_output = self.batch_norm2(gc2_output)
gp2_output = self.gp2([batch_norm2_output] + inputs[1:])
dense1_output = self.dense1(gp2_output)
batch_norm3_output = self.batch_norm3(dense1_output)
readout_output = self.readout([batch_norm3_output] + inputs[1:])
logits_output = self.logits(self.dense2(readout_output))
return self.softmax(logits_output)
Explanation: The results are pretty good, and GraphConvModel is very easy to use. But what's going on under the hood? Could we build GraphConvModel ourselves? Of course! DeepChem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from DeepChem.
GraphConv layer: This layer implements the graph convolution. The graph convolution combines per-node feature vectors in a nonlinear fashion with the feature vectors for neighboring nodes. This "blends" information in local neighborhoods of a graph.
GraphPool layer: This layer does a max-pooling over the feature vectors of atoms in a neighborhood. You can think of this layer as analogous to a max-pooling layer for 2D convolutions but which operates on graphs instead.
GraphGather: Many graph convolutional networks manipulate feature vectors per graph-node. For a molecule for example, each node might represent an atom, and the network would manipulate atomic feature vectors that summarize the local chemistry of the atom. However, at the end of the application, we will likely want to work with a molecule level feature representation. This layer creates a graph level feature vector by combining all the node-level feature vectors.
Apart from this we are going to apply standard neural network layers such as Dense, BatchNormalization and Softmax layer.
End of explanation
model = dc.models.KerasModel(MyGraphConvModel(), loss=dc.models.losses.CategoricalCrossEntropy())
Explanation: We can now see more clearly what is happening. There are two convolutional blocks, each consisting of a GraphConv, followed by batch normalization, followed by a GraphPool to do max pooling. We finish up with a dense layer, another batch normalization, a GraphGather to combine the data from all the different nodes, and a final dense layer to produce the global output.
Let's now create the DeepChem model which will be a wrapper around the Keras model that we just created. We will also specify the loss function so the model knows the objective to minimize.
End of explanation
test_dataset.X[0]
Explanation: What are the inputs to this model? A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other. In fact, if we inspect the dataset we see that the feature array contains Python objects of type ConvMol.
End of explanation
from deepchem.metrics import to_one_hot
from deepchem.feat.mol_graphs import ConvMol
import numpy as np
def data_generator(dataset, epochs=1):
for ind, (X_b, y_b, w_b, ids_b) in enumerate(dataset.iterbatches(batch_size, epochs,
deterministic=False, pad_batches=True)):
multiConvMol = ConvMol.agglomerate_mols(X_b)
inputs = [multiConvMol.get_atom_features(), multiConvMol.deg_slice, np.array(multiConvMol.membership)]
for i in range(1, len(multiConvMol.get_deg_adjacency_lists())):
inputs.append(multiConvMol.get_deg_adjacency_lists()[i])
labels = [to_one_hot(y_b.flatten(), 2).reshape(-1, n_tasks, 2)]
weights = [w_b]
yield (inputs, labels, weights)
Explanation: Models expect arrays of numbers as their inputs, not Python objects. We must convert the ConvMol objects into the particular set of arrays expected by the GraphConv, GraphPool, and GraphGather layers. Fortunately, the ConvMol class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays.
The following code creates a Python generator that given a batch of data generates the lists of inputs, labels, and weights whose values are Numpy arrays. atom_features holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in TensorFlow. degree_slice is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree. membership determines the membership of atoms in molecules (atom i belongs to molecule membership[i]). deg_adjs is a list that contains adjacency lists grouped by atom degree. For more details, check out the code.
End of explanation
model.fit_generator(data_generator(train_dataset, epochs=50))
Explanation: Now, we can train the model using fit_generator(generator) which will use the generator we've defined to train the model.
End of explanation
print('Training set score:', model.evaluate_generator(data_generator(train_dataset), [metric], transformers))
print('Test set score:', model.evaluate_generator(data_generator(test_dataset), [metric], transformers))
Explanation: Now that we have trained our graph convolutional method, let's evaluate its performance. We again have to use our defined generator to evaluate model performance.
End of explanation |
10,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Training on Cloud AI Platform</h1>
This notebook illustrates distributed training on Cloud AI Platform (formerly known as Cloud ML Engine).
Step1: Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires
Step2: Lab Task 2
Address all the TODOs in the following code in babyweight/trainer/model.py with the cell below. This code is similar to the model training code we wrote in Lab 3.
After addressing all TODOs, run the cell to write the code to the model.py file.
Step3: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on the small notebook VM. Change as appropriate for your model).
<p>
Even with smaller data, this might take <b>3-5 minutes</b> in which you won't see any output ...
Step4: Lab Task 4
The JSON below represents an input into your prediction model. Write the input.json file below with the next cell, then run the prediction locally to assess whether it produces predictions correctly.
Step5: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform.
Change the parameters to the model (-train_examples for example may not be part of your model) appropriately.
Because this is on the entire dataset, it will take a while. The training run took about <b> 2 hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
Step6: When I ran it, I used train_examples=20000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.1'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: <h1>Training on Cloud AI Platform</h1>
This notebook illustrates distributed training on Cloud AI Platform (formerly known as Cloud ML Engine).
End of explanation
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
required = True
)
parser.add_argument(
'--output_dir',
help = 'GCS location to write checkpoints and export models',
required = True
)
parser.add_argument(
'--batch_size',
help = 'Number of examples to compute gradient over.',
type = int,
default = 512
)
parser.add_argument(
'--job-dir',
help = 'this model ignores this field, but it is required by gcloud',
default = 'junk'
)
parser.add_argument(
'--nnsize',
help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
nargs = '+',
type = int,
default=[128, 32, 4]
)
parser.add_argument(
'--nembeds',
help = 'Embedding size of a cross of n key real-valued parameters',
type = int,
default = 3
)
## TODOs after this line
################################################################################
## TODO 1: add the new arguments here
## parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# unused args provided by service
arguments.pop('job_dir', None)
arguments.pop('job-dir', None)
## assign the arguments to the model variables
output_dir = arguments.pop('output_dir')
model.BUCKET = arguments.pop('bucket')
model.BATCH_SIZE = arguments.pop('batch_size')
model.TRAIN_STEPS = (arguments.pop('train_examples') * 100) / model.BATCH_SIZE
model.EVAL_STEPS = arguments.pop('eval_steps')
print ("Will train for {} steps using batch_size={}".format(model.TRAIN_STEPS, model.BATCH_SIZE))
model.PATTERN = arguments.pop('pattern')
model.NEMBEDS= arguments.pop('nembeds')
model.NNSIZE = arguments.pop('nnsize')
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
# Run the training job
model.train_and_evaluate(output_dir)
Explanation: Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires:
<ol>
<li> Making the code a Python package
<li> Using gcloud to submit the training code to Cloud AI Platform
</ol>
Ensure that the AI Platform API is enabled by going to this [link](https://console.developers.google.com/apis/library/ml.googleapis.com).
## Lab Task 1
The following code edits babyweight/trainer/task.py. You should use add hyperparameters needed by your model through the command-line using the `parser` module. Look at how `batch_size` is passed to the model in the code below. Do this for the following hyperparameters (defaults in parentheses): `train_examples` (5000), `eval_steps` (None), `pattern` (of).
End of explanation
%%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(prefix, mode, batch_size):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Use prefix to create file path
file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, prefix, PATTERN)
# Create list of files that match pattern
file_list = tf.gfile.Glob(file_path)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
# Define feature columns
def get_wide_deep():
# Define column types
is_male,mother_age,plurality,gestation_weeks = \
[\
tf.feature_column.categorical_column_with_vocabulary_list('is_male',
['True', 'False', 'Unknown']),
tf.feature_column.numeric_column('mother_age'),
tf.feature_column.categorical_column_with_vocabulary_list('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
tf.feature_column.numeric_column('gestation_weeks')
]
# Discretize
age_buckets = tf.feature_column.bucketized_column(mother_age,
boundaries=np.arange(15,45,1).tolist())
gestation_buckets = tf.feature_column.bucketized_column(gestation_weeks,
boundaries=np.arange(17,47,1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [is_male,
plurality,
age_buckets,
gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(wide, hash_bucket_size=20000)
embed = tf.feature_column.embedding_column(crossed, NEMBEDS)
# Continuous columns are deep, have a complex relationship with the output
deep = [mother_age,
gestation_weeks,
embed]
return wide, deep
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.placeholder(tf.string, [None]),
'mother_age': tf.placeholder(tf.float32, [None]),
'plurality': tf.placeholder(tf.string, [None]),
'gestation_weeks': tf.placeholder(tf.float32, [None]),
KEY_COLUMN: tf.placeholder_with_default(tf.constant(['nokey']), [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
pred_values = predictions['predictions']
return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
def forward_features(estimator, key):
def new_model_fn(features, labels, mode, config):
spec = estimator.model_fn(features, labels, mode, config)
predictions = spec.predictions
predictions[key] = features[key]
spec = spec._replace(predictions=predictions)
return spec
return tf.estimator.Estimator(model_fn=new_model_fn, model_dir=estimator.model_dir, config=estimator.config)
## TODOs after this line
################################################################################
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300 # seconds
## TODO 2a: set the save_checkpoints_secs to the EVAL_INTERVAL
run_config = tf.estimator.RunConfig(save_checkpoints_secs = None,
keep_checkpoint_max = 3)
## TODO 2b: change the dnn_hidden_units to NNSIZE
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
dnn_hidden_units = None,
config = run_config)
# illustrates how to add an extra metric
estimator = tf.estimator.add_metrics(estimator, my_rmse)
# for batch prediction, you need a key associated with each instance
estimator = forward_features(estimator, KEY_COLUMN)
## TODO 2c: Set the third argument of read_dataset to BATCH_SIZE
## TODO 2d: and set max_steps to TRAIN_STEPS
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('train', tf.estimator.ModeKeys.TRAIN, None),
max_steps = None)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn, exports_to_keep=None)
## TODO 2e: Lastly, set steps equal to EVAL_STEPS
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval', tf.estimator.ModeKeys.EVAL, 2**15), # no need to batch in eval
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: Lab Task 2
Address all the TODOs in the following code in babyweight/trainer/model.py with the cell below. This code is similar to the model training code we wrote in Lab 3.
After addressing all TODOs, run the cell to write the code to the model.py file.
End of explanation
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_examples=1 --eval_steps=1
Explanation: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on the small notebook VM. Change as appropriate for your model).
<p>
Even with smaller data, this might take <b>3-5 minutes</b> in which you won't see any output ...
End of explanation
%%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
echo $MODEL_LOCATION
gcloud ai-platform local predict --model-dir=$MODEL_LOCATION --json-instances=inputs.json
Explanation: Lab Task 4
The JSON below represents an input into your prediction model. Write the input.json file below with the next cell, then run the prediction locally to assess whether it produces predictions correctly.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=2.1 \
--python-version=3.7 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000
Explanation: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform.
Change the parameters to the model (-train_examples for example may not be part of your model) appropriately.
Because this is on the entire dataset, it will take a while. The training run took about <b> 2 hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=2.1 \
--python-version=3.7 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=2000 --batch_size=35 --nembeds=16 --nnsize=281
Explanation: When I ran it, I used train_examples=20000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
<h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line)
End of explanation |
10,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: Model Architecture
Step5: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step6: Step 3
Step7: Predict the Sign Type for Each Image
Step8: Analyze Performance
Step9: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability
Step10: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note | Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = ?
validation_file=?
testing_file = ?
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n",
"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = ?
# TODO: Number of validation examples
n_validation = ?
# TODO: Number of testing examples.
n_test = ?
# TODO: What's the shape of an traffic sign image?
image_shape = ?
# TODO: How many unique classes/labels there are in the dataset.
n_classes = ?
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
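One straightforward way to fill these in (a sketch only - pandas would work equally well; it assumes the loading cell above has been run so X_train, X_valid, X_test and y_train exist):
```
import numpy as np

n_train = X_train.shape[0]
n_validation = X_valid.shape[0]
n_test = X_test.shape[0]
image_shape = X_train.shape[1:]          # e.g. (32, 32, 3)
n_classes = len(np.unique(y_train))      # 43 classes in the German traffic sign set
```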
End of explanation
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
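For example, a quick look at the class distribution of each split could be as simple as the sketch below (it assumes the loading cell above has been run so the label arrays exist):
```
import numpy as np

fig, axes = plt.subplots(1, 3, figsize=(15, 3))
for ax, (name, labels) in zip(axes, [('train', y_train), ('valid', y_valid), ('test', y_test)]):
    ax.hist(labels, bins=len(np.unique(y_train)))
    ax.set_title(name + ' class distribution')
    ax.set_xlabel('class id')
```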
End of explanation
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
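As a starting point, a sketch of the minimal preprocessing described above - the quick approximate normalization (pixel - 128)/128, plus an optional grayscale conversion - might look like this:
```
import numpy as np

def normalize(images):
    return (images.astype(np.float32) - 128.0) / 128.0

def to_grayscale(images):
    # simple channel average; a weighted (luminosity) conversion is another common choice
    return np.mean(images, axis=3, keepdims=True)

X_train_norm = normalize(X_train)
X_valid_norm = normalize(X_valid)
X_test_norm = normalize(X_test)
```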
End of explanation
### Define your architecture here.
### Feel free to use as many code cells as needed.
Explanation: Model Architecture
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
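For example, a small sketch for loading that mapping (assuming signnames.csv sits next to the notebook and uses the standard ClassId,SignName header) could be:
```
import csv

with open('signnames.csv', 'r') as f:
    sign_names = {int(row['ClassId']): row['SignName'] for row in csv.DictReader(f)}
print(sign_names[14])   # e.g. look up the name for class id 14
```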
Load and Output the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
Explanation: Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
Explanation: Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
    if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
Explanation: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
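As a usage sketch (the names `conv1`, `saver`, and `X_test_normalized` are assumptions standing in for your own activation tensor, checkpoint saver, and preprocessed test images):

```
with tf.Session() as sess:
    saver.restore(sess, './lenet')  # restore your trained weights (assumption)
    # feed a single preprocessed image (keeping the batch dimension) and plot one layer's feature maps
    outputFeatureMap(X_test_normalized[0:1], conv1, plt_num=1)
```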
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation |
10,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chemicals in Cosmetics - Data Bootcamp Report
Manuela Lopez Giraldo
May 12, 2017
Report Outline
Step1: To extract exact figures for current products, it would be best to clean up the dataset. I removed several columns with unnecessary information as well as removed observations with values in both the 'DiscontinuedDate' and 'ChemicalDateRemoved' columns. This resulted in a new DataFrame with observations of cosmetic products with chemicals that are currently being sold on the market. This new DataFrame will be used for the first two objectives of the report.
Step2: Part 1
Step3: This illustrates the companies with the most reports of cosmetics products with hazardous ingredients. However, these are major companies that own many brands and subsidiaries. To demonstrate which brands the average consumer should be the most aware of, I then showed the top-ranked brands with hazardous ingredients as a part of the parent company they fall under. I took a look at L'Oreal USA and Coty, the two major cosmetics parent companies.
Step4: The first two graphs show the brands within the top two major companies, L'Oreal and Coty, that offer the highest amount of cosmetics with hazardous chemicals. However, using group_by creates a series that shows that these are not necessarily the worst brands with the highest amount of potentially hazardous cosmetics.
The two worst brands, as suggested by L'Oreal and Coty, would be the L'Oreal brand and Sally Hansen. However, group_by shows that from all the reported observations in the dataset, NYX and bareMinerals are actually the worst offenders of all brands.
This shows that the high number of cosmetics with hazardous chemicals that is reported for the L'Oreal and Coty companies actually reflect their massive amount of subsidiary brands more so than a pervasiveness of hazardous ingredients in their chemicals.
After establishing which companies and brands offer the greatest range of cosmetics with potentially hazardous ingredients, I then wanted to analyze the products with these chemicals.
Step5: Part 2
Step6: NYX cosmetics regularly use the same three potentially hazardous ingredients in their products. Titanium Dioxide is considered to be low risk and is common in cosmetics, as seen by the subplot comparison. However, Butylated Hydroxyanisole (BHA) and Carbon Black are considered to pose a moderate hazard with possibility of being a human carcinogen by the Environmental Working Group's Cosmetics Database. NYX uses BHA in their cosmetics with much greater frequency as opposed to the entirety of the products in the market.
Part 3 | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
%matplotlib inline
import matplotlib as mpl
import sys
url = 'https://chhs.data.ca.gov/api/views/7kri-yb7t/rows.csv?accessType=DOWNLOAD'
CosmeticsData = pd.read_csv(url)
CosmeticsData.head()
Explanation: Chemicals in Cosmetics - Data Bootcamp Report
Manuela Lopez Giraldo
May 12, 2017
Report Outline:
In the United States, the FDA implements little regulation over ingredients in cosmetics. Despite the fact that the majority of products on the market contain dangerous ingredients, some of which are reported to be linked to cancer, there is no clear warning provided for the average consumer. All across the nation, consumers are slowly gaining and demanding more awareness of the ingredients that make up their cosmetics and are turning to brands that offer more natural and organic products. As consumers turn to "green" cosmetics, do the offerings by major cosmetics corporations follow suit?
This project will serve as a report on the product offerings by major cosmetics corporations.
This report uses data from the California Safe Cosmetics Program under the California Department of Public Health. I will cover the leading companies that offer potentially hazardous products, the products with the most hazardous ingredients, the most common chemicals included in cosmetics, as well as the trend of cosmetics products with hazardous chemicals over time.
Importing Packages
End of explanation
df = CosmeticsData.drop(CosmeticsData.columns[[0, 1, 2, 3, 4, 7, 9, 11, 12, 13, 15, 16, 18, 19]], axis=1)
df = df[pd.isnull(df['ChemicalDateRemoved'])]
df = df[pd.isnull(df['DiscontinuedDate'])]
df.head()
Explanation: To extract exact figures for current products, it would be best to clean up the dataset. I removed several columns with unnecessary information as well as removed observations with values in either the 'DiscontinuedDate' or the 'ChemicalDateRemoved' column. This resulted in a new DataFrame with observations of cosmetic products with chemicals that are currently being sold on the market. This new DataFrame will be used for the first two objectives of the report.
End of explanation
df['CompanyName'].value_counts().head(20)
top20companies = df['CompanyName'].value_counts()
top20companies.head(20).plot(kind='barh',
title='Top 20 Companies with Hazardous Cosmetics',
color=['gold', 'silver', 'beige'])
Explanation: Part 1: Companies, Brands & Products
My first objective is to report on the companies and brands behind these cosmetics with chemicals. Using value_counts, I was able to find the companies with the most observations (greatest amount of cosmetics) that are hazardous. As there are thousands of brands included in this dataset, I chose to report on the top 20 companies with the most amount of potentially hazardous ingredients.
End of explanation
dff = df.set_index('CompanyName')
loreal = dff.loc[["L'Oreal USA"], ['BrandName']]
fig, ax = plt.subplots()
loreal['BrandName'].str.title().value_counts().head(10).plot(kind='barh',
color=['gold', 'silver', 'beige'],
alpha=.8)
ax.set_xlabel('Amount of Cosmetics with Hazardous Chemicals', fontsize=9)
ax.set_title("Brands Owned By L'Oreal", fontsize=12)
coty = dff.loc[["Coty"], ['BrandName']]
fig, ax = plt.subplots()
coty['BrandName'].str.title().value_counts().head(10).plot(kind='barh',
color=['gold', 'silver', 'beige'],
alpha=.8)
ax.set_xlabel('Amount of Cosmetics with Hazardous Chemicals', fontsize=9)
ax.set_title("Brands Owned By Coty", fontsize=12)
df.groupby(['CompanyName', "BrandName"]).size().nlargest(5)
Explanation: This illustrates the companies with the most reports of cosmetics products with hazardous ingredients. However, these are major companies that own many brands and subsidiaries. To demonstrate which brands the average consumer should be the most aware of, I then showed the top-ranked brands with hazardous ingredients as a part of the parent company they fall under. I took a look at L'Oreal USA and Coty, the two major cosmetics parent companies.
End of explanation
top20types = df['SubCategory'].value_counts()
top20types.head(20).plot(kind='barh',
title='Top 20 Products with Most Chemicals',
color=['brown', 'beige'])
Explanation: The first two graphs show the brands within the top two major companies, L'Oreal and Coty, that offer the highest amount of cosmetics with hazardous chemicals. However, using group_by creates a series that shows that these are not necessarily the worst brands with the highest amount of potentially hazardous cosmetics.
The two worst brands, as suggested by L'Oreal and Coty, would be the L'Oreal brand and Sally Hansen. However, group_by shows that from all the reported observations in the dataset, NYX and bareMinerals are actually the worst offenders of all brands.
This shows that the high number of cosmetics with hazardous chemicals that is reported for the L'Oreal and Coty companies actually reflect their massive amount of subsidiary brands more so than a pervasiveness of hazardous ingredients in their chemicals.
After establishing which companies and brands offer the greatest range of cosmetics with potentially hazardous ingredients, I then wanted to analyze the products with these chemicals.
End of explanation
nyx = dff.loc[["NYX Los Angeles, Inc."], ['ChemicalName']]
fig, ax = plt.subplots(2, 1)
df['ChemicalName'].str.title().value_counts().head(5).plot(kind='barh', ax=ax[0],
color=['brown', 'beige'],
alpha=.8)
nyx['ChemicalName'].str.title().value_counts().plot(kind='barh', ax=ax[1],
color=['gold', 'silver', 'beige'],
alpha=.8)
ax[0].set_title("Most Common Chemicals Across All Observations", loc='right', fontsize=13)
ax[1].set_title("Chemicals used in NYX Cosmetics", loc='right', fontsize=13)
ax[0].set_xlabel('Amount of Products with these Chemicals', fontsize=9)
ax[1].set_xlabel('Amount of Products with these Chemicals', fontsize=9)
plt.subplots_adjust(top=2.1)
Explanation: Part 2: Chemicals
Having already looked at which brands carried the greatest amount of cosmetics with chemicals, I then wanted to take a closer look at NYX cosmetics' chemicals compared to the most used chemicals across all reported observations.
End of explanation
CosmeticsData.shape
labels = '% of Cosmetics that Retained Chemicals', '% of Cosmetics that Removed Chemicals'
sizes = [CosmeticsData['ChemicalDateRemoved'].isnull().sum(),
CosmeticsData['ChemicalDateRemoved'].notnull().sum()]
colors = ['lightblue', 'beige']
explode = (0, 0.3)  # explode the 2nd slice (cosmetics that removed chemicals)
# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', startangle=30)
plt.title('Change in Cosmetics with Hazardous Ingredients: 2009-Present')
plt.axis('equal')
plt.tight_layout()
plt.show()
Explanation: NYX cosmetics regularly use the same three potentially hazardous ingredients in their products. Titanium Dioxide is considered to be low risk and is common in cosmetics, as seen by the subplot comparison. However, Butylated Hydroxyanisole (BHA) and Carbon Black are considered to pose a moderate hazard with possibility of being a human carcinogen by the Environmental Working Group's Cosmetics Database. NYX uses BHA in their cosmetics with much greater frequency as opposed to the entirety of the products in the market.
Part 3: Chemical Trends in Cosmetics Products
To conclude the report, I wanted to take a look at the change in potentially hazardous ingredients in cosmetics from the beginning of these observations compared to the end. As cosmetics consumers increasingly demand more organic products, companies should show an overall decrease in the chemicals in their product range.
End of explanation |
10,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing a basic neural network
By Brett Naul (UC Berkeley)
In this exercise we'll implement and train a simple single-layer neural network classifier using numpy. First, let's create some example data
Step8: Setup
Our neural network will take an $\mathbf{x} = (x_1, x_2)$ vector as input and output a $K$-dimensional vector $\mathbf{p}=(p_1,\dots,p_K)$ of class probabilities. For simplicity we'll focus on a single choice of activation function, the ReLU function $f(x) = \max(x, 0)$.
We'll follow the scikit-learn model API and construct a simple model with fit and predict methods. The exercises below will step you through the necessary steps to implement a fully-functional neural network classifier.
Step9: Part 1
Step10: Part 2
Step11: 2c) (optional) Numerically check the gradient implementation in dloss
This is an extremely important step when coding derivatives by hand
Step12: Part 3
Step14: Part 4 | Python Code:
# Imports / plotting configuration
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
plt.rcParams['image.interpolation'] = 'nearest' # hard classification boundaries
plt.rcParams['image.cmap'] = 'viridis'
np.random.seed(13)
# Generate spiral sample data
def spiral_data(N, K=3, sigma=0.1):
X = np.zeros((N * K, 2))
y = np.zeros(N * K, dtype='int')
for j in range(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
theta = 2 * np.pi * j / K + np.linspace(0, 3 * np.pi, N) + np.random.randn(N) * sigma
X[ix] = np.c_[r * np.sin(theta), r * np.cos(theta)]
y[ix] = j
return X, y
N = 100
K = 3
X, y = spiral_data(N, K, 0.1)
# Visualize the generated data
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap='viridis')
plt.xlim([-1, 1])
plt.ylim([-1, 1]);
Explanation: Implementing a basic neural network
By Brett Naul (UC Berkeley)
In this exercise we'll implement and train a simple single-layer neural network classifier using numpy. First, let's create some example data: we'll sample points from along interlocking spirals like we saw in the TensorFlow playground.
Based on Stanford CS 231n "Neural Network Case Study" exercise.
End of explanation
from sklearn.base import BaseEstimator, ClassifierMixin
class SingleLayerReLU(BaseEstimator, ClassifierMixin):
    """Skeleton code for single-layer multi-class neural network classifier w/ ReLU activation.

    NOTE: Whenever you change the code below, you need to re-run this cell AND re-initialize
    your model (`model = SingleLayerReLU(...)`) in order to update your specific `model` object.
    """
def __init__(self, hidden_size, num_classes, sigma_init=0.01):
        """Initialize weights with Gaussian noise scaled by `sigma_init` and
        biases with zeros.
        """
self.hidden_size = hidden_size
self.num_classes = num_classes
self.W1 = sigma_init * np.random.randn(hidden_size, 2)
self.W2 = sigma_init * np.random.randn(num_classes, hidden_size)
self.b1 = np.zeros(hidden_size)
self.b2 = np.zeros(num_classes)
def loss(self, y, P):
        """Compute total softmax loss.

        Inputs:  y -> (N,) array of true (integer) labels
                 P -> (N, K) array of predicted probabilities
        Outputs: L -> total loss value
        """
return -np.sum(np.log(P[range(len(P)), y]))
def dloss(self, X, y):
        """Compute gradient of softmax loss with respect to network weights.

        Inputs:  X -> (N, 2) array of network inputs
                 y -> (N,) array of true labels
        Outputs: dW1 -> (hidden_size, 2) array of weight derivatives
                 dW2 -> (num_classes, hidden_size) array of weight derivatives
                 db1 -> (hidden_size,) array of bias derivatives
                 db2 -> (num_classes,) array of bias derivatives
        """
H = np.maximum(0, X @ self.W1.T + self.b1) # ReLU activation
Z = H @ self.W2.T + self.b2
P = np.exp(Z) / np.sum(np.exp(Z), axis=1, keepdims=True)
dZ = P
dZ[range(len(X)), y] -= 1
dW2 = (H.T @ dZ).T
db2 = np.sum(dZ, axis=0)
dH = dZ @ self.W2
dH[H <= 0] = 0 # backprop ReLU activation
dW1 = (X.T @ dH).T
db1 = np.sum(dH, axis=0)
return (dW1, dW2, db1, db2)
def predict_proba(self, X):
        """Compute forward pass for all input values.

        Inputs:  X -> (N, 2) array of network inputs
        Outputs: P -> (N, K) array of class probabilities
        """
H = np.maximum(0, X @ self.W1.T + self.b1) # ReLU activation
Z = H @ self.W2.T + self.b2
P = np.exp(Z) / np.sum(np.exp(Z), axis=1, keepdims=True)
return P
def predict(self, X):
        """Compute most likely class labels for all input values.

        Inputs:  X -> (N, 2) array of network inputs
        Outputs: y_pred -> (N,) array of predicted (integer) class labels
        """
P = self.predict_proba(X)
return np.argmax(P, 1)
def fit(self, X, y, step_size=3e-3, n_iter=10000):
        """Optimize model parameters W1, W2, b1, b2 via gradient descent.

        Inputs:  X -> (N, 2) array of network inputs
                 y -> (N,) array of true labels
                 step_size -> gradient descent step size
                 n_iter -> number of gradient descent steps to perform
        Outputs: losses -> (n_iter + 1,) array of loss values after each step
        """
losses = np.zeros(n_iter + 1)
for i in range(0, n_iter + 1):
dW1, dW2, db1, db2 = self.dloss(X, y)
self.W1 -= step_size * dW1
self.W2 -= step_size * dW2
self.b1 -= step_size * db1
self.b2 -= step_size * db2
P = self.predict_proba(X)
losses[i] = self.loss(y, P)
if i % 1000 == 0:
print("Iteration {}: loss={}".format(i, losses[i]))
return losses
Explanation: Setup
Our neural network will take an $\mathbf{x} = (x_1, x_2)$ vector as input and output a $K$-dimensional vector $\mathbf{p}=(p_1,\dots,p_K)$ of class probabilities. For simplicity we'll focus on a single choice of activation function, the ReLU function $f(x) = \max(x, 0)$.
We'll follow the scikit-learn model API and construct a simple model with fit and predict methods. The exercises below will step you through the necessary steps to implement a fully-functional neural network classifier.
End of explanation
def visualize_predictions(model, X, y, step=0.02):
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("Accuracy: {}%".format(100 * np.mean(y == model.predict(X))))
model = SingleLayerReLU(100, K)
visualize_predictions(model, X, y)
Explanation: Part 1: Forward pass
The "forward pass" step (i.e., computing the output for a given input) can be written as
$$
\begin{align}
\mathbf{h} &= f(W_1 \mathbf{x} + \mathbf{b_1}) \\
\mathbf{z} &= W_2 \mathbf{h} + \mathbf{b_2} \\
\mathbf{p} &= \operatorname{softmax}(\mathbf{z})
\end{align}
$$
where $f(\cdot)$ is an activation function, $W_1$ and $W_2$ are weight matrices, and $\mathbf{b}_1$ and $\mathbf{b}_2$ are bias vectors. The softmax function is given by
$$
\operatorname{softmax}(\mathbf{z}) = \exp\left(\mathbf{z}\right) / \left(\sum_{k=1}^K \exp\left( z_k\right)\right)
$$
and the ReLU activation is
$$
f(x) = \max(x, 0).
$$
1a) Implement predict function
Your function should loop over the input points (i.e., the rows of $X$) and compute the estimated class probabilities for each. The model parameters $W_1, W_2, b_1, b_2$ are treated as fixed.
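For orientation, a non-vectorized sketch of these three equations for a single input point $\mathbf{x}$, assuming a SingleLayerReLU instance called model:

```
def forward_single(model, x):
    h = np.maximum(0, model.W1 @ x + model.b1)   # hidden layer with ReLU activation
    z = model.W2 @ h + model.b2                  # class scores
    p = np.exp(z - z.max())                      # softmax (shifted for numerical stability)
    return p / p.sum()
```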
1b) Examine predictions
The function visualize_predictions below can be used to inspect the decision boundaries for a given model, dataset, and class labels. Initialize a network and see what the classifications look like. Try varying the number of hidden units and see how the decision boundary shapes change.
End of explanation
model = SingleLayerReLU(100, K)
losses = model.fit(X, y, step_size=2e-3, n_iter=20000)
plt.plot(losses, '-')
plt.xlabel('Iteration')
plt.ylabel('Total loss');
plt.figure()
visualize_predictions(model, X, y)
Explanation: Part 2: Training
Obviously generating classifications at random isn't the way to go. Now we'll use gradient descent to iteratively improve the weights of our network and hopefully improve our classification accuracy.
If $y$ is some true label and $\mathbf{p}$ are the estimated class probabilities from our network, then our loss function for a single training example can be written as
$$L(y, \mathbf{p}) = -\log p_y,$$
that is, the negative log of the predicted probability for the true class: 0 if our predicted probability was 1, and $+\infty$ if it was 0.
In order to perform gradient descent, we'll need the derivatives of the loss with respect to all of our model parameters<sup>*</sup>. We can compute these derivatives via backpropagation (effectively just the chain rule)<sup>**</sup>:
$$
\begin{aligned}
\frac{\partial L}{\partial z_k} &= p_k - \text{(1 if $y=k$, 0 otherwise)} \\
\frac{\partial L}{\partial W_2} &= \mathbf{h} \left(\frac{\partial L}{\partial \mathbf{z}}\right)^T \\
\frac{\partial L}{\partial \mathbf{b}_2} &= \frac{\partial L}{\partial \mathbf{z}} \\
\frac{\partial L}{\partial \mathbf{h}} &= W_2 \frac{\partial L}{\partial \mathbf{z}} \circ f'(\mathbf{h}) \\
\frac{\partial L}{\partial W_1} &= \mathbf{x} \left(\frac{\partial L}{\partial \mathbf{h}}\right)^T \\
\frac{\partial L}{\partial \mathbf{b}_1} &= \frac{\partial L}{\partial \mathbf{h}}
\end{aligned}
$$
Even for very deep or complicated networks, the gradient of the loss function can always be decomposed in this way, one connection at a time.
The actual quantity we wish to minimize is the sum of this loss value over all the samples in our data set. The gradient of this total loss is just the sum of the gradient for each sample.
2a) Implement fit function
Our gradient descent training consists of the following steps:
- Compute loss gradient with respect to model parameters using dloss. Note: for the sake of efficiency, this function returns the sum of the gradient of over all samples, so you should pass in the entire dataset.
- Update model parameters as $W_i \leftarrow W_i - \alpha\frac{\partial{L}}{\partial W_i}$ and $\mathbf{b}_i \leftarrow \mathbf{b}_i - \alpha\frac{\partial{L}}{\partial \mathbf{b}_i}$, where $\alpha$ is some chosen step size.
- Repeat for num_iter iterations.
Implement these steps in the fit function above. Check that the loss function is decreasing, either by occasionally printing the loss value or by returning the losses from each step and plotting the trajectory.
2b) Examine predictions for fitted model
Train a model and examine the predictions using visualize_predictions. Repeat for a few combinations of hidden layer sizes, gradient descent step sizes, and numbers of training iterations. What's the best accuracy you can achieve? How does the loss curve change with the step size?
End of explanation
dW1, dW2, db1, db2 = model.dloss(X, y)
h = 1e-6
i = 0; j = 0
l0 = model.loss(y, model.predict_proba(X))
model.W1[i, j] += h
lh = model.loss(y, model.predict_proba(X))
print("Derivative: {}; Numerical esitmate: {}".format(dW1[i, j], (lh - l0) / h))
Explanation: 2c) (optional) Numerically check the gradient implementation in dloss
This is an extremely important step when coding derivatives by hand: check the gradients returned by dloss by perturbing a single weight value by $\epsilon$ and comparing the resulting change in loss to what you'd expect from the gradient.
<sub>* This is a bit of an abuse of notation since we're taking some derivatives with respect to vectors/matrices; this just means a vector/matrix of derivatives with respect to each element.</sub>
<sub>** For an excruciating amount of detail about backprop, see e.g. Neural Networks and Deep Learning.</sub>
End of explanation
from sklearn.neural_network import MLPClassifier
X, y = spiral_data(100, 5, 0.1)
single_layer_model = MLPClassifier((8,), activation='identity', solver='lbfgs')
single_layer_model.fit(X, y)
visualize_predictions(single_layer_model, X, y)
multi_layer_model = MLPClassifier((100, 100), alpha=0.5, activation='relu', solver='lbfgs')
multi_layer_model.fit(X, y)
visualize_predictions(multi_layer_model, X, y)
Explanation: Part 3: Compare with scikit-learn
The scikit-learn package does contain a minimal implementation of a neural network classifier in sklearn.neural_network.MLPClassifier. This classifier is nowhere near as powerful or fully-featured as the neural networks provided by other packages such as tensorflow, theano, etc., but it still can represent complicated non-linear relationships within small datasets.
3a) Implement a simple neural network classifier using scikit-learn
What parameter values give a network comparable to the one we implemented above? Note that the results may not be equivalent since scikit-learn version could reach a different set of (locally) optimal parameters.
Hint: try using solver='lbfgs' (a more complicated optimization method) for better results.
3b) Experiment with different activations/network sizes
How do the decision boundaries change for different models?
3c) Experiment with different numbers of classes and noise levels
Go back to where we initialized the data and change the parameters K and sigma. Can a single- or multi-layer perceptron still represent more complicated patterns?
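One possible sketch for 3b, reusing the spiral_data and visualize_predictions helpers defined above to compare a few architectures and activations:

```
for hidden in [(8,), (100,), (100, 100)]:
    for activation in ['identity', 'relu', 'tanh']:
        model = MLPClassifier(hidden, activation=activation, solver='lbfgs', alpha=0.5)
        model.fit(X, y)
        plt.figure()
        visualize_predictions(model, X, y)
        plt.title("hidden={}, activation={}".format(hidden, activation))
```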
End of explanation
from copy import deepcopy
X, y = spiral_data(100, 2, 0.1)
multi_layer_model = MLPClassifier((64, 8), alpha=0.5, activation='relu', solver='lbfgs')
multi_layer_model.fit(X, y)
visualize_predictions(multi_layer_model, X, y)
def visualize_activations(model, unit, X, y, step=0.02):
    """Visualize activations of the i-th neuron of the last layer."""
model = deepcopy(model)
model.coefs_[-1][:unit] = 0 # zero out other units
model.coefs_[-1][unit] = 1 # just want the activation function
model.coefs_[-1][(unit + 1):] = 0 # zero out other units
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step))
Z = model.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
fig, ax = plt.subplots(4, 2, figsize=(14, 20))
for i in range(len(multi_layer_model.coefs_[-1])):
plt.sca(ax.ravel()[i])
visualize_activations(multi_layer_model, i, X, y)
Explanation: Part 4: Visualize neuron activations (optional)
Interpreting neural network classifiers can be quite difficult, especially when the network contains many layers. One way of attempting to make sense of the model is by visualizing the activations of individual neurons: by plotting the activation of a neuron over the entire input space, we can see what patterns the neurons are learning to represent.
Try below to generate plots that show the activation of an individual neuron. scikit-learn doesn't make this immediately available, but we can make it work: try setting the last layer's weights (in coefs_[-1]) to zero for everything except a single unit, and setting that unit's weights to 1 (so that the activation just passes straight through). You can re-use most of the code from visualize_predictions to generate a contour map of activation strengths.
End of explanation |
10,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practice with Theano's scan function
Step1: Write a function that computes A raised to the power K
[0 1 2 3 4 5 6 7 8 9 ] >>> [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
k is an integer giving the exponent
a is the base vector
Step2: result is the symbolic tensor that receives the scan output
updates is the function description produced by theano.scan; in plain terms, "using a function to generate a function" | Python Code:
import theano
import theano.tensor as T
Explanation: Practice with Theano's scan function
End of explanation
k = T.iscalar('K')
a = T.vector('A')
i = T.vector('A')
result, updates = theano.scan(fn=lambda pre , k : pre*a ,
outputs_info = i,
non_sequences=a,
n_steps = k
)
Explanation: Write a function that computes A raised to the power K
[0 1 2 3 4 5 6 7 8 9 ] >>> [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
k is an integer giving the exponent
a is the base vector
End of explanation
print result
print updates
final_result = result[-1]
power = theano.function(inputs=[a,k,i],outputs=[final_result],updates=updates)
print(power(range(10),2,[10]*10))
Explanation: result is the symbolic tensor that receives the scan output
updates is the function description produced by theano.scan; in plain terms, "using a function to generate a function"
End of explanation |
10,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading SETI Hackathon Data
This tutorial will show you how to programmatically download the SETI code challenge data to your local file space and
start to analyze it.
Please see the Step_1_Get_Data.ipynb notebook on information about all of the data available for this code challenge.
This tutorial will use the basic data set, but will work, of course, with any of the data sets.
Step1: No Spark Here
You'll notice that this tutorial doesn't use parallelization with Spark. This is to keep this simple and make this code generalizable to folks that are running this analysis on their local machines.
Step2: Assume you have the data in a local folder
Step3: Use ibmseti for convenience
While it's somewhat trivial to read these data, the ibmseti.compamp.SimCompamp class will extract the JSON header and the complex-value time-series data for you.
Step4: The Goal
The goal is to take each simulation data file and
1. convert the time-series data into a 2D spectrogram
2. Use the 2D spectrogram as an image to train an image classification model
There are multiple ways to improve your model's ability to classify signals. You can
* Modify the time-series data with some signals processing to make a better 2D spectrogram
* Build a really good image classification system
* Try something entirely different, such as
* transforming the time-series data in different ways (KTL transform)?
* use the time-series data directly in model
Here we just show how to view the data as a spectrogram
1. Converting the time-series data into a spectrogram with ibmseti
Step5: 2. Build the spectogram yourself
You don't need to use ibmseti python package to calculate the spectrogram for you.
This is especially important if you want to apply some signals processing to the time-series data before you create your spectrogram | Python Code:
#The ibmseti package contains some useful tools to facilitate reading the data.
#The `ibmseti` package version 1.0.5 works on Python 2.7.
# !pip install --user ibmseti
#A development version runs on Python 3.5.
# !pip install --user ibmseti==2.0.0.dev5
# If running on DSX, YOU WILL NEED TO RESTART YOUR SPARK KERNEL to use a newly installed Python Package.
# Click Kernel -> Restart above!
Explanation: Reading SETI Hackathon Data
This tutorial will show you how to programmatically download the SETI code challenge data to your local file space and
start to analyze it.
Please see the Step_1_Get_Data.ipynb notebook on information about all of the data available for this code challenge.
This tutorial will use the basic data set, but will work, of course, with any of the data sets.
End of explanation
import ibmseti
import os
import zipfile
Explanation: No Spark Here
You'll notice that this tutorial doesn't use parallelization with Spark. This is to keep this simple and make this code generalizable to folks that are running this analysis on their local machines.
End of explanation
mydatafolder = 'my_data_folder'  # assumed local folder holding the downloaded data sets; adjust to your path
!ls my_data_folder/basic4
zz = zipfile.ZipFile(mydatafolder + '/' + 'basic4.zip')
basic4list = zz.namelist()
firstfile = basic4list[0]
print firstfile
Explanation: Assume you have the data in a local folder
End of explanation
import ibmseti
aca = ibmseti.compamp.SimCompamp(zz.open(firstfile, 'rb').read())
# This data file is classified as a 'squiggle'
aca.header()
Explanation: Use ibmseti for convenience
While it's somewhat trivial to read these data, the ibmseti.compamp.SimCompamp class will extract the JSON header and the complex-value time-series data for you.
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
## ibmseti.compamp.SimCompamp has a method to calculate the spectrogram for you (without any signal processing applied to the time-series data)
spectrogram = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
Explanation: The Goal
The goal is to take each simulation data file and
1. convert the time-series data into a 2D spectrogram
2. Use the 2D spectrogram as an image to train an image classification model
There are multiple ways to improve your model's ability to classify signals. You can
* Modify the time-series data with some signals processing to make a better 2D spectrogram
* Build a really good image classification system
* Try something entirely different, such as
* transforming the time-series data in different ways (e.g., a Karhunen-Loeve transform, KLT)?
* use the time-series data directly in model
Here we just show how to view the data as a spectrogram
1. Converting the time-series data into a spectrogram with ibmseti
End of explanation
complex_data = aca.complex_data()
#complex valued time-series
complex_data
complex_data = complex_data.reshape(32, 6144)
complex_data
#Apply a Hanning Window
complex_data = complex_data * np.hanning(complex_data.shape[1])
complex_data
# Build Spectrogram & Plot
cpfft = np.fft.fftshift( np.fft.fft(complex_data), 1)
spectrogram = np.abs(cpfft)**2
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
Explanation: 2. Build the spectrogram yourself
You don't need to use the ibmseti python package to calculate the spectrogram for you.
This is especially important if you want to apply some signals processing to the time-series data before you create your spectrogram
End of explanation |
10,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing initial sampling methods on integer space
Holger Nahrstaedt 2020 Sigurd Carlsen October 2019
.. currentmodule
Step1: Random sampling
Step2: Sobol
Step3: Classic latin hypercube sampling
Step4: Centered latin hypercube sampling
Step5: Maximin optimized hypercube sampling
Step6: Correlation optimized hypercube sampling
Step7: Ratio optimized hypercube sampling
Step8: Halton sampling
Step9: Hammersly sampling
Step10: Grid sampling
Step11: Pdist boxplot of all methods
This boxplot shows the distance between all generated points using
Euclidian distance. The higher the value, the better the sampling method.
It can be seen that random has the worst performance | Python Code:
print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
from skopt.space import Space
from skopt.sampler import Sobol
from skopt.sampler import Lhs
from skopt.sampler import Halton
from skopt.sampler import Hammersly
from skopt.sampler import Grid
from scipy.spatial.distance import pdist
def plot_searchspace(x, title):
fig, ax = plt.subplots()
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', label='samples')
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bs', markersize=40, alpha=0.5)
# ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([0, 5])
ax.set_ylabel("X2")
ax.set_ylim([0, 5])
plt.title(title)
ax.grid(True)
n_samples = 10
space = Space([(0, 5), (0, 5)])
Explanation: Comparing initial sampling methods on integer space
Holger Nahrstaedt 2020 Sigurd Carlsen October 2019
.. currentmodule:: skopt
When doing Bayesian optimization we often want to reserve some of the
early part of the optimization to pure exploration. By default the
optimizer suggests purely random samples for the first n_initial_points
(10 by default). The downside to this is that there is no guarantee that
these samples are spread out evenly across all the dimensions.
Sampling methods such as Latin hypercube, Sobol, Halton and Hammersly
take advantage of the fact that we know beforehand how many random
points we want to sample. Then these points can be "spread out" in
such a way that each dimension is explored.
See also the example on a real space
sphx_glr_auto_examples_initial_sampling_method.py
End of explanation
x = space.rvs(n_samples)
plot_searchspace(x, "Random samples")
pdist_data = []
x_label = []
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("random")
Explanation: Random sampling
End of explanation
sobol = Sobol()
x = sobol.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Sobol')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("sobol")
Explanation: Sobol
End of explanation
lhs = Lhs(lhs_type="classic", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'classic LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("lhs")
Explanation: Classic latin hypercube sampling
End of explanation
lhs = Lhs(lhs_type="centered", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'centered LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("center")
Explanation: Centered latin hypercube sampling
End of explanation
lhs = Lhs(criterion="maximin", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'maximin LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("maximin")
Explanation: Maximin optimized hypercube sampling
End of explanation
lhs = Lhs(criterion="correlation", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'correlation LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("corr")
Explanation: Correlation optimized hypercube sampling
End of explanation
lhs = Lhs(criterion="ratio", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'ratio LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("ratio")
Explanation: Ratio optimized hypercube sampling
End of explanation
halton = Halton()
x = halton.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Halton')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("halton")
Explanation: Halton sampling
End of explanation
hammersly = Hammersly()
x = hammersly.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Hammersly')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("hammersly")
Explanation: Hammersly sampling
End of explanation
grid = Grid(border="include", use_full_layout=False)
x = grid.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Grid')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("grid")
Explanation: Grid sampling
End of explanation
fig, ax = plt.subplots()
ax.boxplot(pdist_data)
plt.grid(True)
plt.ylabel("pdist")
_ = ax.set_ylim(0, 6)
_ = ax.set_xticklabels(x_label, rotation=45, fontsize=8)
Explanation: Pdist boxplot of all methods
This boxplot shows the distance between all generated points using
Euclidean distance. The higher the value, the better the sampling method.
It can be seen that random has the worst performance
End of explanation |
10,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representing text as numerical data
From the scikit-learn documentation
Step1: From the scikit-learn documentation
Step2: From the scikit-learn documentation
Step3: In order to make a prediction, the new observation must have the same features as the training observations, both in number and meaning.
Step4: Summary of the overall process
Step5: Is the current representation of the labels useful to us?
Step6: Do you remember our feature matrix (X) and label vector (y) convention? How can we achieve this here? Also recall train/test splitting. Describe the steps.
Step7: So, how dense is the matrix?
Step8: Building and evaluating a model
We will use multinomial Naive Bayes
Step9: A false negative example
Step10: "Spaminess" of words
Before we start
Step11: Naive Bayes counts the number of observations in each class
Step12: Add 1 to ham and spam counts to avoid dividing by 0
Step13: Calculate the ratio of spam-to-ham for each token
Step14: Examine the DataFrame sorted by spam_ratio
Step15: Tuning the vectorizer
Do you see any potential to enhance the vectorizer? Think about the following questions
Step16: n-grams
n-grams concatenate n words to form a token. The following accounts for 1- and 2-grams
Step17: Document frequencies
Often it's beneficial to exclude words that appear in the majority or just a couple of documents. This is, very frequent or infrequent words. This can be achieved by using the max_df and min_df parameters of the vectorizer.
Step18: A note on Stemming
'went' and 'go'
'kids' and 'kid'
'negative' and 'negatively'
What is the pattern?
The process of reducing a word to it's word stem, base or root form is called stemming. Scikit-Learn has no powerfull stemmer, but other libraries like the NLTK have.
Tf-idf
Tf-idf can be understood as a modification of the raw term frequencies (tf)
The concept behind tf-idf is to downweight terms proportionally to the number of documents in which they occur.
The idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification.
Explanation by example
Let consider a dataset containing 3 documents
Step19: First, we will compute the term frequency (alternatively
Step20: Secondly, we introduce inverse document frequency ($idf$) by defining the term document frequency $\text{df}(d,t)$, which is simply the number of documents $d$ that contain the term $t$. We can then define the idf as follows
Step21: Using those idfs, we can eventually calculate the tf-idfs for the 3rd document
Step22: Tf-idf in Scikit-Learn
Step23: Wait! Those numbers aren't the same!
Tf-idf in Scikit-Learn is calculated a little bit differently. Here, the +1 count is added to the idf, whereas instead of the denominator if the df
Step24: Normalization
By default, Scikit-Learn performs a normalization. The most common way to normalize the raw term frequency is l2-normalization, i.e., dividing the raw term frequency vector $v$ by its length $||v||_2$ (L2- or Euclidean norm).
$$v_{norm} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v{_1}^2 + v{_2}^2 + \dots + v{_n}^2}}$$
Why is that useful?
For example, we would normalize our 3rd document 'The sun is shining and the weather is sweet' as follows
Step25: Smooth idf
We are not quite there. Sckit-Learn also applies smoothing, which changes the original formula as follows | Python Code:
# example text for model training (SMS messages)
simple_train = ['call you tonight', 'Call me a cab', 'please call me... PLEASE!']
# import and instantiate CountVectorizer (with the default parameters)
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
# learn the 'vocabulary' of the training data (occurs in-place)
vect.fit(simple_train)
# examine the fitted vocabulary
vect.get_feature_names()
# transform training data into a 'document-term matrix'
simple_train_dtm = vect.transform(simple_train)
simple_train_dtm
Explanation: Representing text as numerical data
From the scikit-learn documentation:
Text Analysis is a major application field for machine learning algorithms. However the raw data, a sequence of symbols cannot be fed directly to the algorithms themselves as most of them expect numerical feature vectors with a fixed size rather than the raw text documents with variable length.
Do you have any suggestion on how we can approach this problem?
We will use CountVectorizer to "convert text into a matrix of token counts":
End of explanation
# convert sparse matrix to a dense matrix
simple_train_dtm.toarray()
# examine the vocabulary and document-term matrix together
import pandas as pd
pd.DataFrame(simple_train_dtm.toarray(), columns=vect.get_feature_names())
# check the type of the document-term matrix
type(simple_train_dtm)
# examine the sparse matrix contents
print(simple_train_dtm)
Explanation: From the scikit-learn documentation:
In this scheme, features and samples are defined as follows:
Each individual token occurrence frequency (normalized or not) is treated as a feature.
The vector of all the token frequencies for a given document is considered a sample.
A corpus of documents can thus be represented by a matrix with one row per document and one column per token (e.g. word) occurring in the corpus.
We call vectorization the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the Bag of Words or "Bag of n-grams" representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.
End of explanation
# example text for model testing
simple_test = ["please don't call me"]
Explanation: From the scikit-learn documentation:
As most documents will typically use a very small subset of the words used in the corpus, the resulting matrix will have many feature values that are zeros (typically more than 99% of them).
For instance, a collection of 10,000 short text documents (such as emails) will use a vocabulary with a size in the order of 100,000 unique words in total while each document will use 100 to 1000 unique words individually.
In order to be able to store such a matrix in memory but also to speed up operations, implementations will typically use a sparse representation such as the implementations available in the scipy.sparse package.
End of explanation
# transform testing data into a document-term matrix (using existing vocabulary)
simple_test_dtm = vect.transform(simple_test)
simple_test_dtm.toarray()
# examine the vocabulary and document-term matrix together
pd.DataFrame(simple_test_dtm.toarray(), columns=vect.get_feature_names())
Explanation: In order to make a prediction, the new observation must have the same features as the training observations, both in number and meaning.
End of explanation
path = 'material/sms.tsv'
sms = pd.read_table(path, header=None, names=['label', 'message'])
sms.shape
# examine the first 10 rows
sms.head(10)
Explanation: Summary of the overall process:
vect.fit(train) Does what?
vect.transform(train) Does what?
vect.transform(test) Does what? What happens to tokens not seen before?
vect.fit(train) learns the vocabulary of the training data
vect.transform(train) uses the fitted vocabulary to build a document-term matrix from the training data
vect.transform(test) uses the fitted vocabulary to build a document-term matrix from the testing data (and ignores tokens it hasn't seen before)
A simple spam filter
End of explanation
# examine the class distribution
sms.label.value_counts()
# convert label to a numerical variable
sms['label_num'] = sms.label.map({'ham':0, 'spam':1})
# check that the conversion worked
sms.head(10)
Explanation: Is the current representation of the labels useful to us?
End of explanation
# how to define X and y (from the SMS data) for use with COUNTVECTORIZER
X = sms.message
y = sms.label_num
print(X.shape)
print(y.shape)
sms.message.head()
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# learn training data vocabulary, then use it to create a document-term matrix
vect = CountVectorizer()
vect.fit(X_train)
X_train_dtm = vect.transform(X_train)
# examine the document-term matrix
X_train_dtm
Explanation: Do you remember our feature matrix (X) and label vector (y) convention? How can we achieve this here? Also recall train/test splitting. Describe the steps.
End of explanation
# transform testing data (using fitted vocabulary) into a document-term matrix
X_test_dtm = vect.transform(X_test)
X_test_dtm
Explanation: So, how dense is the matrix?
End of explanation
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
y_pred_class = nb.predict(X_test_dtm)
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)
# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
Explanation: Building and evaluating a model
We will use multinomial Naive Bayes:
The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
End of explanation
X_test[3132]
Explanation: A false negative example
End of explanation
vect.vocabulary_
X_train_tokens = vect.get_feature_names()
print(X_train_tokens[:50])
print(X_train_tokens[-50:])
# feature count per class
nb.feature_count_
# number of times each token appears across all HAM messages
ham_token_count = nb.feature_count_[0, :]
# number of times each token appears across all SPAM messages
spam_token_count = nb.feature_count_[1, :]
# create a table of tokens with their separate ham and spam counts
tokens = pd.DataFrame({'token':X_train_tokens, 'ham':ham_token_count, 'spam':spam_token_count}).set_index('token')
tokens.head()
tokens.sample(5, random_state=6)
Explanation: "Spaminess" of words
Before we start: the estimator has several fields that allow us to examine its internal state:
End of explanation
nb.class_count_
Explanation: Naive Bayes counts the number of observations in each class
End of explanation
tokens['ham'] = tokens.ham + 1
tokens['spam'] = tokens.spam + 1
tokens.sample(5, random_state=6)
# convert the ham and spam counts into frequencies
tokens['ham'] = tokens.ham / nb.class_count_[0]
tokens['spam'] = tokens.spam / nb.class_count_[1]
tokens.sample(5, random_state=6)
Explanation: Add 1 to ham and spam counts to avoid dividing by 0
End of explanation
tokens['spam_ratio'] = tokens.spam / tokens.ham
tokens.sample(5, random_state=6)
Explanation: Calculate the ratio of spam-to-ham for each token
End of explanation
tokens.sort_values('spam_ratio', ascending=False)
tokens.loc['00', 'spam_ratio']
Explanation: Examine the DataFrame sorted by spam_ratio
End of explanation
vect = CountVectorizer(stop_words='english')
Explanation: Tuning the vectorizer
Do you see any potential to enhance the vectorizer? Think about the following questions:
Are all words equally important?
Do you think there are "noise words" which negatively influence the results?
How can we account for the order of words?
Stopwords
Stopwords are the most common words in a language. Examples are 'is', 'which' and 'the'. Usually it is beneficial to exclude these words in text processing tasks.
The CountVectorizer has a stop_words parameter:
- stop_words: string {'english'}, list, or None (default)
- If 'english', a built-in stop word list for English is used.
- If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens.
- If None, no stop words will be used.
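A quick sketch of the effect on the toy corpus from the beginning (simple_train): fitting with the built-in English list drops common words such as 'you' and 'me' from the vocabulary.

```
vect = CountVectorizer(stop_words='english')
vect.fit(simple_train)
vect.get_feature_names()
```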
End of explanation
vect = CountVectorizer(ngram_range=(1, 2))
Explanation: n-grams
n-grams concatenate n words to form a token. The following accounts for 1- and 2-grams
End of explanation
# ignore terms that appear in more than 50% of the documents
vect = CountVectorizer(max_df=0.5)
# only keep terms that appear in at least 2 documents
vect = CountVectorizer(min_df=2)
Explanation: Document frequencies
Often it's beneficial to exclude words that appear in the majority or in just a couple of documents, that is, very frequent or infrequent words. This can be achieved by using the max_df and min_df parameters of the vectorizer.
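As a quick illustration on the toy corpus (simple_train), min_df=2 keeps only the tokens that appear in at least two of the three messages:

```
vect = CountVectorizer(min_df=2)
vect.fit(simple_train)
vect.get_feature_names()
# only 'call' and 'me' occur in two or more of the documents
```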
End of explanation
import numpy as np
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
Explanation: A note on Stemming
'went' and 'go'
'kids' and 'kid'
'negative' and 'negatively'
What is the pattern?
The process of reducing a word to its word stem, base or root form is called stemming. Scikit-Learn has no powerful stemmer, but other libraries such as NLTK do.
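For reference, a minimal sketch using NLTK's PorterStemmer (assuming the nltk package is installed). Note that a rule-based stemmer handles suffix stripping such as 'kids' -> 'kid', while mapping 'went' to 'go' would require a lemmatizer instead:

```
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
[stemmer.stem(word) for word in ['kids', 'negatively', 'went']]
```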
Tf-idf
Tf-idf can be understood as a modification of the raw term frequencies (tf)
The concept behind tf-idf is to downweight terms proportionally to the number of documents in which they occur.
The idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification.
Explanation by example
Let's consider a dataset containing 3 documents:
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
tf = cv.fit_transform(docs).toarray()
tf
cv.vocabulary_
Explanation: First, we will compute the term frequency (alternatively: Bag-of-Words) $\text{tf}(t, d)$, the number of times a term $t$ occurs in a document $d$. Using Scikit-Learn we can quickly get those numbers:
End of explanation
n_docs = len(docs)
df_and = 1
idf_and = np.log(n_docs / (1 + df_and))
print('idf "and": %s' % idf_and)
df_is = 3
idf_is = np.log(n_docs / (1 + df_is))
print('idf "is": %s' % idf_is)
df_shining = 2
idf_shining = np.log(n_docs / (1 + df_shining))
print('idf "shining": %s' % idf_shining)
Explanation: Secondly, we introduce inverse document frequency ($idf$) by defining the term document frequency $\text{df}(d,t)$, which is simply the number of documents $d$ that contain the term $t$. We can then define the idf as follows:
$$\text{idf}(t) = \log{\frac{n_d}{1+\text{df}(d,t)}},$$
where
$n_d$: The total number of documents
$\text{df}(d,t)$: The number of documents that contain term $t$.
Note that the constant 1 is added to the denominator to avoid a zero-division error if a term is not contained in any document in the test dataset.
Now, Let us calculate the idfs of the words "and", "is," and "shining:"
End of explanation
print('Tf-idfs in document 3:\n')
print('tf-idf "and": %s' % (1 * idf_and))
print('tf-idf "is": %s' % (2 * idf_is))
print('tf-idf "shining": %s' % (1 * idf_shining))
Explanation: Using those idfs, we can eventually calculate the tf-idfs for the 3rd document:
$$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \text{idf}(t),$$
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(smooth_idf=False, norm=None)
tfidf.fit_transform(tf).toarray()[-1][:3]
Explanation: Tf-idf in Scikit-Learn
End of explanation
tf_and = 1
df_and = 1
tf_and * (np.log(n_docs / df_and) + 1)
tf_is = 2
df_is = 3
tf_is * (np.log(n_docs / df_is) + 1)
tf_shining = 1
df_shining = 2
tf_shining * (np.log(n_docs / df_shining) + 1)
Explanation: Wait! Those numbers aren't the same!
Tf-idf in Scikit-Learn is calculated a little bit differently: the +1 is added to the idf itself instead of to the denominator of the df:
$$\text{idf}(t) = log{\frac{n_d}{\text{df}(d,t)}} + 1$$
End of explanation
tfidf = TfidfTransformer(use_idf=True, smooth_idf=False, norm='l2')
tfidf.fit_transform(tf).toarray()[-1][:3]
Explanation: Normalization
By default, Scikit-Learn performs a normalization. The most common way to normalize the raw term frequency is l2-normalization, i.e., dividing the raw term frequency vector $v$ by its length $||v||_2$ (L2- or Euclidean norm).
$$v_{norm} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v{_1}^2 + v{_2}^2 + \dots + v{_n}^2}}$$
Why is that useful?
For example, we would normalize our 3rd document 'The sun is shining and the weather is sweet' as follows:
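The operation itself is easy to verify on a toy vector (illustrative values, unrelated to our documents):
v = np.array([3.0, 4.0])
v / np.sqrt((v ** 2).sum())
# -> array([0.6, 0.8]), a vector of unit Euclidean length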
End of explanation
tfidf = TfidfTransformer(use_idf=True, smooth_idf=True, norm='l2')
tfidf.fit_transform(tf).toarray()[-1][:3]
Explanation: Smooth idf
We are not quite there. Scikit-Learn also applies smoothing, which changes the original formula as follows:
$$\text{idf}(t) = log{\frac{1 + n_d}{1+\text{df}(d,t)}} + 1$$
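As a rough manual check (our addition) of the smoothed variant for the word "and" in document 3, ignoring the final l2 normalization that Scikit-Learn still applies:
tf_and * (np.log((1 + n_docs) / (1 + df_and)) + 1)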
End of explanation |
10,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CIFAR10 classification using HOG features
Load CIFAR10 dataset
Step1: Preprocessing
Step2: Lets look how images looks like
Step3: Feature extraction
We use the HOG descriptor from the scikit-image library.
Step4: Visualization of HOG histograms
Step5: Short look at the training data
Step6: PCA 2D decomposition
Step7: Classification
We simply use LinearSVC from the scikit-learn library.
Step8: Training
Step9: Testing and score
We obtain a score of 0.4914 on the CIFAR10 testing dataset.
Step10: Short look at the prediction
Step11: Our results for different parameters
TODO
Step12: SVC(kernel=linear) and SVC() | Python Code:
import myutils
raw_data_training, raw_data_testing = myutils.load_CIFAR_dataset(shuffle=False)
# raw_data_training = raw_data_training[:5000]
class_names = myutils.load_CIFAR_classnames()
n_training = len( raw_data_training )
n_testing = len( raw_data_testing )
print('Loaded CIFAR10 database with {} training and {} testing samples'.format(n_training, n_testing))
Explanation: CIFAR10 classification using HOG features
Load CIFAR10 dataset
End of explanation
# Converting to greyscale
def rgb2gray(image):
import cv2
return cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
Xdata_training = [ rgb2gray(raw_data_training[i][0]) for i in range(n_training)]
Xdata_testing = [ rgb2gray(raw_data_testing[i][0]) for i in range(n_testing)]
Explanation: Preprocessing
End of explanation
import random
import matplotlib.pyplot as plt
%matplotlib inline
# lets choose some random sample of 10 training images
examples_id = random.sample(range(n_training), 10)
fig, axarr = plt.subplots(2,len(examples_id), figsize=(15,3))
for i in range(len(examples_id)):
id = examples_id[i]
axarr[0,i].imshow(raw_data_training[id][0][:,:])
axarr[0,i].axis('off')
axarr[1,i].imshow(Xdata_training[id],cmap='gray')
axarr[1,i].axis('off')
print('Few examples after preprocessing')
plt.show()
Explanation: Lets look how images looks like
End of explanation
# Configuring HOG descriptor
# see http://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.hog
# Configuration of HOG descriptor
normalize = True # True ==> yields a little bit better score
#
block_norm = 'L2-Hys' # or 'L1'
orientations = 9 #
pixels_per_cell = [8, 8] # see section 'Additional remarks' for some explanation
cells_per_block = [2, 2] #
def extractFeature(img, vis=False):
from skimage.feature import hog
return hog(img, orientations, pixels_per_cell, cells_per_block, block_norm, visualise=vis, transform_sqrt=normalize)
Explanation: Feature extraction
We use the HOG descriptor from the scikit-image library.
End of explanation
# extracting one sample data
nfeatures = extractFeature(Xdata_training[0], vis=False).size
print('Number of features = {}'.format(nfeatures))
fig, axarr = plt.subplots(3,len(examples_id), figsize=(16,5))
for i in range(len(examples_id)):
id = examples_id[i]
axarr[0,i].imshow(raw_data_training[id][0][:,:])
axarr[0,i].axis('off')
axarr[1,i].imshow(Xdata_training[id],cmap='gray')
axarr[1,i].axis('off')
_, hog_vis = extractFeature(Xdata_training[id], vis=True)
axarr[2,i].imshow(hog_vis,cmap='gray')
axarr[2,i].axis('off')
plt.show()
# feature extraction
import numpy as np
X_training = np.array( [ extractFeature(Xdata_training[i], vis=False) for i in range(n_training) ] )
y_training = np.array( [ raw_data_training[i][1] for i in range(n_training) ] )
X_testing = np.array( [ extractFeature(Xdata_testing[i], vis=False) for i in range(n_testing) ] )
y_testing = np.array( [ raw_data_testing[i][1] for i in range(n_testing) ] )
Explanation: Visualization of HOG histograms
End of explanation
print( 'X_training shape is {}'.format( X_training.shape ) )
print( 'y_training shape is {}'.format( y_training.shape ) )
print( 'X_testing shape is {}'.format( X_testing.shape ) )
print( 'y_testing shape is {}'.format( y_testing.shape ) )
import pandas as pd
print( 'X_training data description')
pd.DataFrame( X_training ).describe()
print( 'y_training data description')
pd.DataFrame( y_training ).describe()
Explanation: Short look at the training data
End of explanation
from sklearn import decomposition
pca = decomposition.PCA(n_components=2)
pca.fit(X_training)
X = pca.transform(X_training)
print(pca.explained_variance_ratio_)
plt.figure( figsize=(15,15) )
plt.scatter( X[:, 0], X[:, 1], c=y_training, cmap='tab10' )
# plt.colorbar()
plt.show()
# TODO: remove outliers
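# A possible sketch for the TODO above (our suggestion): clip the scatter axes to central
# percentiles so that a few extreme PCA points do not dominate the plot, e.g.
# x_lo, x_hi = np.percentile(X[:, 0], [1, 99])
# y_lo, y_hi = np.percentile(X[:, 1], [1, 99])
# plt.xlim(x_lo, x_hi); plt.ylim(y_lo, y_hi)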
Explanation: PCA 2D decomposition
End of explanation
from sklearn.svm import LinearSVC
# parameter C chosen experimentally (see explanation below)
C = 1.0
clf = LinearSVC(C=C)
Explanation: Classification
We simply use LinearSVC from the scikit-learn library.
End of explanation
# this may take some time
clf.fit(X_training, y_training)
Explanation: Training
End of explanation
clf.score( X_testing, y_testing )
Explanation: Testing and score
We obtain a score of 0.4914 on the CIFAR10 testing dataset.
End of explanation
y_predict = clf.predict( X_testing )
import numpy as np
np.unique( y_predict )
Explanation: Short look at the prediction
End of explanation
for C in [ 0.001, 0.01, 0.1, 1.0, 1.2, 1.5, 2.0, 10.0 ]:
clf = LinearSVC(C=C)
clf.fit(X_training, y_training)
print( 'normalize={norm}, C={C}, score={score}'.format(norm=normalize, C=C, score=clf.score( X_testing, y_testing )) )
Explanation: Our results for different parameters
TODO:
* show misclassifications (a possible sketch is given after this block)
Additional remarks
Tuning C parameter
End of explanation
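A possible sketch for the "show misclassifications" TODO above (our addition; it reuses y_predict, y_testing and raw_data_testing, and assumes class_names can be indexed by the integer labels):
wrong = np.where(y_predict != y_testing)[0][:10]   # first ten misclassified test samples
fig, axarr = plt.subplots(1, len(wrong), figsize=(15, 2))
for ax, idx in zip(axarr, wrong):
    ax.imshow(raw_data_testing[idx][0][:, :])
    ax.set_title('%s -> %s' % (class_names[y_testing[idx]], class_names[y_predict[idx]]), fontsize=8)
    ax.axis('off')
plt.show()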
from sklearn.svm import SVC
svc_lin_clf = SVC(kernel='linear', C=1)
svc_lin_clf.fit(X_training, y_training)
svc_lin_clf.score(X_testing, y_testing)
from sklearn.svm import SVC
svc_clf = SVC(C=1)
svc_clf.fit(X_training, y_training)
svc_clf.score(X_testing, y_testing)
Explanation: SVC(kernel=linear) and SVC()
End of explanation |
10,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical Integration
Preliminaries
We have to import the array library numpy and the plotting library matplotlib.pyplot; note that we define shorter aliases for these.
Next we import from numpy some of the functions that we will use more frequently, and from a utility library some functions to format our results conveniently.
Finally we tell the plotting library how we want to format our plots (fivethirtyeight is a predefined style that mimics a popular site of statistical analysis and yes, statistical analysis and popular both apply to a single item... while 00_mplrc refers to some tweaks I prefer to use).
Step1: Plot of the load
We start analyzing the loading process.
Substituting $t_1=1.2$s in the expression of $p(t)$ we have
$$p(t) = \left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
Step2: We will use the above expression everywhere in the following.
Let's visualize this loading with the help of a plot
Step3: From the plot it is apparent that the loading can be approximated by a constant rectangle plus a sine, of relatively small amplitude, with a period of about 1 second.
I expect that the forced response will be a damped oscillation about the static displacement $400/k$, followed by a free response in which we will observe a damped oscillation about the zero displacement.
The Particular Integral
Writing the particuar integral as
$$\xi(t) = \left(A {t^3 \over s^3} + B {t^2 \over s^2}
+ C {t \over s} + D \right)$$
where $A, B, C, D$ all have the dimension of a length, deriving with respect to time and substituting in the equation of motion we have
$$\left( 6A\frac{t}{s} + 2 B\right)\frac{m}{s^2}
+ \left( 3A\frac{t^2}{s^2} + 2B\frac{t}{s} + C \right) \frac{c}{s}
+ \left( A\frac{t^3}{s^3} + B\frac{t^2}{s^2} + C\frac{t}{s} + D \right) k=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
Collecting the powers of $t/s$ in the left member we have
$$A k \frac{t^3}{s^3} + \left( B k + \frac{3 A c}{s} \right) \frac{t^2}{s^2} + \left( C k + \frac{2 B c}{s} + \frac{6 A m}{s^2} \right) \frac{t}{s}
+ \left(D k + \frac{C c}{s} + \frac{2 B m}{s^2}\right)=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
and equating the coefficients of the different powers, starting from the higher to the lower ones, we have a sequence of equations of a single unknown, solving this sequence of equations gives, upon substitution of the numerical values of the system parameters,
\begin{align}
A& = \frac{625}{14256}\text{m}=0.0438411896745
\text{m}\
B& = \left(\frac{-175}{2376} - \frac{19}{209088} \sqrt{11}\right)\text{m}=-0.0739545830991
\text{m}\
C& = \left(\frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000}\right)\text{m}=0.0278775758319\text{m}\
D& = \left(\frac{3013181}{99000000} - \frac{135571859}{7187400000000} \sqrt{11}\right)\text{m}=0.0303736121006\text{m}\
\end{align}
Substituting in $\xi(t)$ and taking the time derivative we have
\begin{align}
\xi(t) &= \frac{625}{14256} t^3 -
\left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t^2 +
\left( \frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000} \right) t -
\frac{135571859}{7187400000000} \sqrt{11} + \frac{3013181}{99000000},\
\dot\xi(t) &= \frac{625}{4752} t^2 -
2 \left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t +
\frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000}
\end{align}
Step4: Plot of the particular integral
Step6: System Response
Forced Response $0\le t\le t_1$
The response in terms of displacement and velocity is
\begin{align}
x(t) &= \exp(-\zeta\omega_nt) \left(
R\cos(\omega_Dt) + S\sin(\omega_Dt)\right) + \xi(t),\
v(t) &= \exp(-\zeta\omega_nt) \left(
\left(S\cos(\omega_Dt)-R\sin(\omega_Dt)\right)\omega_D -
\left(R\cos(\omega_Dt)+S\sin(\omega_Dt)\right)\zeta\omega_n
\right) + \dot\xi(t)
\end{align}
and we can write the following initial conditions, taking into account that at time $t=0$ the system is at rest,
\begin{align}
x(0) &= 1 \cdot \left(
R\cdot1 + S\cdot0\right) + \xi_0\&=R+\xi_0=0,\
\dot x(0) &= 1 \cdot \left(
\left(S\cdot1-R\cdot0\right)\omega_D -
\left(R\cdot1+S\cdot0\right)\zeta\omega_n
\right) + \dot\xi_0\&=S\omega_D-R\zeta\omega_n+\dot\xi_0=0
\end{align}
The constants of integration are
\begin{align}
R &= -\xi_0,\
S &= \frac{R\zeta\omega_n-\dot\xi_0}{\omega_D}
\end{align}
Step8: Free Response
For $t\ge t_1$ the response (no external force) is given by
\begin{align}
x^(t) &= \exp(-\zeta\omega_nt) \left(R^\cos(\omega_Dt) + S^\sin(\omega_Dt)\right),\
v^(t) &= \exp(-\zeta\omega_nt) \left(
\left(S^\cos(\omega_Dt)-R^\sin(\omega_Dt)\right)\omega_D -
\left(R^\cos(\omega_Dt)+S^\sin(\omega_Dt)\right)\zeta\omega_n
\right).
\end{align}
By the new initial conditions,
$$x^(t_1) = x(t_1) = x_1, \qquad v^(t_1) = v(t_1) = v_1,$$
we have, with
$e_1 = \exp(-\zeta\omega_{n}t_1)$,
$c_1 = \cos(\omega_Dt_1)$ and
$s_1 = \sin(\omega_Dt_1)$
$$ e_1 \begin{bmatrix}
c_1 & s_1 \
-\omega_D s_1 - \zeta\omega_n c_1 & \omega_D c_1 - \zeta\omega_n s_1 \end{bmatrix} \,
\begin{Bmatrix}R^\S^\end{Bmatrix} = \begin{Bmatrix}x_1\v_1\end{Bmatrix}
$$
that gives
$$
\begin{Bmatrix}R^\S^\end{Bmatrix} =
\frac{1}{\omega_D\,e_1}\,
\begin{bmatrix}
\omega_D c_1 - \zeta\omega_n s_1 & -s_1 \
\zeta\omega_n c_1 + \omega_D s_1 & c_1
\end{bmatrix}\,
\begin{Bmatrix}x_1\v_1\end{Bmatrix}.
$$
Step9: Putting it all together
First, the homogeneous response
Step10: then, we put together the homogeneous response and the particular integral, in the different intervals
Step11: Plot of the response
Step12: Numerical Integration
The time step we are going to use is specified in terms of the natural period of vibration, $h=T_n/12$
Step13: We need a function that returns the value of the load,
Step14: Initialization
The final time for the computation, the factors that modify the increment of the load to take into account the initial conditions, the modified stiffness, the containers for the results, the initial values of the results.
Step15: Computational Loop
Step16: The Plot | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy import cos, exp, pi, sin, sqrt
from IPython.display import HTML, Latex, display  # HTML is needed for the display(HTML(...)) calls below
plt.style.use(['fivethirtyeight', './00_mplrc'])
Explanation: Numerical Integration
Preliminaries
We have to import the array library numpy and the plotting library matplotlib.pyplot; note that we define shorter aliases for these.
Next we import from numpy some of the functions that we will use more frequently, and from a utility library some functions to format our results conveniently.
Finally we tell the plotting library how we want to format our plots (fivethirtyeight is a predefined style that mimics a popular site of statistical analysis and yes, statistical analysis and popular both apply to a single item... while 00_mplrc refers to some tweaks I prefer to use).
End of explanation
N = 6000
def it(t): return int((t+1)*N/3)
t = np.linspace(-1,2,N+1)
p = np.where(t<0, 0, np.where(
t<=1.2, 15625./27*t**3 - 8750./9*t**2 + 1100./3*t + 400, 0))
Explanation: Plot of the load
We start analyzing the loading process.
Substituting $t_1=1.2$s in the expression of $p(t)$ we have
$$p(t) = \left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
End of explanation
plt.plot(t,p)
plt.ylim((-20,520))
plt.xticks((-1,0,1,1.2,2))
plt.xlabel('t [second]')
plt.ylabel('p(t) [Newton]')
plt.title(r'The load $p(t)$')
plt.show()
Explanation: We will use the above expression everywhere in the following.
Let's visualize this loading with the help of a plot:
End of explanation
xi = np.where(t<0, 0,
np.where(t<=1.2001,
625./14256*t**3 - (19./209088*sqrt(11) + 175./2376)*t**2
+ t*(133./1306800*sqrt(11) + 218117./7920000)
- 135571859./7187400000000*sqrt(11) + 3013181./99000000, 0))
dot_xi = np.where(t<0, 0,
np.where(t<=1.2001,
625./4752*t**2 - 2*(t*(19./209088*sqrt(11) + 175./2376))
+ 133./1306800*sqrt(11) + 218117./7920000, 0))
xi_0 = - 135571859./7187400000000*sqrt(11) + 3013181./99000000
dot_xi_0 = + 133./1306800*sqrt(11) + 218117./7920000
Explanation: From the plot it is apparent that the loading can be approximated by a constant rectangle plus a sine, of relatively small amplitude, with a period of about 1 second.
I expect that the forced response will be a damped oscillation about the static displacement $400/k$, followed by a free response in which we will observe a damped oscillation about the zero displacement.
The Particular Integral
Writing the particuar integral as
$$\xi(t) = \left(A {t^3 \over s^3} + B {t^2 \over s^2}
+ C {t \over s} + D \right)$$
where $A, B, C, D$ all have the dimension of a length, deriving with respect to time and substituting in the equation of motion we have
$$\left( 6A\frac{t}{s} + 2 B\right)\frac{m}{s^2}
+ \left( 3A\frac{t^2}{s^2} + 2B\frac{t}{s} + C \right) \frac{c}{s}
+ \left( A\frac{t^3}{s^3} + B\frac{t^2}{s^2} + C\frac{t}{s} + D \right) k=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
Collecting the powers of $t/s$ in the left member we have
$$A k \frac{t^3}{s^3} + \left( B k + \frac{3 A c}{s} \right) \frac{t^2}{s^2} + \left( C k + \frac{2 B c}{s} + \frac{6 A m}{s^2} \right) \frac{t}{s}
+ \left(D k + \frac{C c}{s} + \frac{2 B m}{s^2}\right)=\left({15625 \over 27} {t^3 \over s^3} - {8750 \over 9} {t^2 \over s^2}
+ {1100 \over 3} {t \over s} + 400 \right)\text{N}$$
and equating the coefficients of the different powers, starting from the higher to the lower ones, we have a sequence of equations of a single unknown, solving this sequence of equations gives, upon substitution of the numerical values of the system parameters,
\begin{align}
A& = \frac{625}{14256}\text{m}=0.0438411896745
\text{m}\
B& = \left(\frac{-175}{2376} - \frac{19}{209088} \sqrt{11}\right)\text{m}=-0.0739545830991
\text{m}\
C& = \left(\frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000}\right)\text{m}=0.0278775758319\text{m}\
D& = \left(\frac{3013181}{99000000} - \frac{135571859}{7187400000000} \sqrt{11}\right)\text{m}=0.0303736121006\text{m}\
\end{align}
Substituting in $\xi(t)$ and taking the time derivative we have
\begin{align}
\xi(t) &= \frac{625}{14256} t^3 -
\left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t^2 +
\left( \frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000} \right) t -
\frac{135571859}{7187400000000} \sqrt{11} + \frac{3013181}{99000000},\
\dot\xi(t) &= \frac{625}{4752} t^2 -
2 \left( \frac{19}{209088} \sqrt{11} + \frac{175}{2376} \right) t +
\frac{133}{1306800} \sqrt{11} + \frac{218117}{7920000}
\end{align}
End of explanation
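A quick numerical sanity check (our addition) that the sampled arrays match the constants just derived; both expressions should evaluate to True:
np.isclose(xi[it(0.)], xi_0), np.isclose(dot_xi[it(0.)], dot_xi_0)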
plt.plot(t,xi)
plt.title('Particular integral')
plt.xlabel('s')
plt.ylabel('m')
plt.show()
plt.plot(t, dot_xi)
plt.title('Time derivative of particular integral')
plt.xlabel('s')
plt.ylabel('m/s')
plt.show()
Explanation: Plot of the particular integral
End of explanation
mass = 12.0
stif = 13200.0
zeta = 0.038
w2_n = stif/mass
w1_n = sqrt(w2_n)
w1_D = w1_n*sqrt(1-zeta**2)
damp = 2*zeta*w1_n*mass
R = -xi_0
S = (R*zeta*w1_n-dot_xi_0)/w1_D
display(HTML("<center><h3>Forced Displacements</h3></center>"))
display(Latex(r'''
$x(t) = \exp(-%g\cdot%g\,t) \left(
%+g\cos(%g\,t)%+g\sin(%g\,t)
\right)+\xi(t)$
''' % (zeta, w1_n, R, w1_D, S, w1_D)))
t1 = 1.2
it1 = it(t1)
x_t1 = exp(-zeta*w1_n*t1)*(R*cos(w1_D*t1)+S*sin(w1_D*t1))+xi[it1]
v_t1 =(exp(-zeta*w1_n*t1)*(R*cos(w1_D*t1)+S*sin(w1_D*t1))*(-zeta)*w1_n
+exp(-zeta*w1_n*t1)*(S*cos(w1_D*t1)-R*sin(w1_D*t1))*w1_D
+dot_xi[it1])
display(Latex(
r'$$x(t_1)=x_1=%+g\,\text{m},\qquad v(t_1)=v_1=%+g\,\text{m/s}$$'%
(x_t1,v_t1)))
Explanation: System Response
Forced Response $0\le t\le t_1$
The response in terms of displacement and velocity is
\begin{align}
x(t) &= \exp(-\zeta\omega_nt) \left(
R\cos(\omega_Dt) + S\sin(\omega_Dt)\right) + \xi(t),\
v(t) &= \exp(-\zeta\omega_nt) \left(
\left(S\cos(\omega_Dt)-R\sin(\omega_Dt)\right)\omega_D -
\left(R\cos(\omega_Dt)+S\sin(\omega_Dt)\right)\zeta\omega_n
\right) + \dot\xi(t)
\end{align}
and we can write the following initial conditions, taking into account that at time $t=0$ the system is at rest,
\begin{align}
x(0) &= 1 \cdot \left(
R\cdot1 + S\cdot0\right) + \xi_0\&=R+\xi_0=0,\
\dot x(0) &= 1 \cdot \left(
\left(S\cdot1-R\cdot0\right)\omega_D -
\left(R\cdot1+S\cdot0\right)\zeta\omega_n
\right) + \dot\xi_0\&=S\omega_D-R\zeta\omega_n+\dot\xi_0=0
\end{align}
The constants of integration are
\begin{align}
R &= -\xi_0,\
S &= \frac{R\zeta\omega_n-\dot\xi_0}{\omega_D}
\end{align}
End of explanation
e_t1 = exp(-zeta*w1_n*t1)
c_t1 = cos(w1_D*t1)
s_t1 = sin(w1_D*t1)
Rs = (w1_D*c_t1*x_t1 - zeta*w1_n*s_t1*x_t1 - s_t1*v_t1) /w1_D /e_t1
Ss = (w1_D*s_t1*x_t1 + zeta*w1_n*c_t1*x_t1 + c_t1*v_t1) /w1_D /e_t1
display(HTML("<center><h3>Free Displacements</h3></center>"))
display(Latex(r'''
$$x^*(t) = \exp(-%g\cdot%g\,t) \left(
%+g\cos(%g\,t)%+g\sin(%g\,t)
\right)$$
''' % (zeta, w1_n, Rs, w1_D, Ss, w1_D)))
xs_t1 = exp(-zeta*w1_n*t1)*(Rs*cos(w1_D*t1)+Ss*sin(w1_D*t1))
vs_t1 = ((exp(-zeta*w1_n*t1)*(Rs*cos(w1_D*t1)+Ss*sin(w1_D*t1))*(-zeta)*w1_n
+exp(-zeta*w1_n*t1)*(Ss*cos(w1_D*t1)-Rs*sin(w1_D*t1))*w1_D))
display(Latex(r'$$x^*(t_1)=%+g\,\text{m},\qquad v^*(t_1) = %+g\,\text{m/s}$$'%
(xs_t1,vs_t1)))
Explanation: Free Response
For $t\ge t_1$ the response (no external force) is given by
\begin{align}
x^(t) &= \exp(-\zeta\omega_nt) \left(R^\cos(\omega_Dt) + S^\sin(\omega_Dt)\right),\
v^(t) &= \exp(-\zeta\omega_nt) \left(
\left(S^\cos(\omega_Dt)-R^\sin(\omega_Dt)\right)\omega_D -
\left(R^\cos(\omega_Dt)+S^\sin(\omega_Dt)\right)\zeta\omega_n
\right).
\end{align}
By the new initial conditions,
$$x^(t_1) = x(t_1) = x_1, \qquad v^(t_1) = v(t_1) = v_1,$$
we have, with
$e_1 = \exp(-\zeta\omega_{n}t_1)$,
$c_1 = \cos(\omega_Dt_1)$ and
$s_1 = \sin(\omega_Dt_1)$
$$ e_1 \begin{bmatrix}
c_1 & s_1 \
-\omega_D s_1 - \zeta\omega_n c_1 & \omega_D c_1 - \zeta\omega_n s_1 \end{bmatrix} \,
\begin{Bmatrix}R^\S^\end{Bmatrix} = \begin{Bmatrix}x_1\v_1\end{Bmatrix}
$$
that gives
$$
\begin{Bmatrix}R^\S^\end{Bmatrix} =
\frac{1}{\omega_D\,e_1}\,
\begin{bmatrix}
\omega_D c_1 - \zeta\omega_n s_1 & -s_1 \
\zeta\omega_n c_1 + \omega_D s_1 & c_1
\end{bmatrix}\,
\begin{Bmatrix}x_1\v_1\end{Bmatrix}.
$$
End of explanation
x_hom = np.where(t<0, 0,
np.where(t<1.2001, exp(-zeta*w1_n*t) * (R *cos(w1_D*t) + S *sin(w1_D*t)),
exp(-zeta*w1_n*t) * (Rs*cos(w1_D*t) + Ss*sin(w1_D*t))))
v_hom = np.where(t<0, 0,
np.where(t<1.2001,
exp(-t*zeta*w1_n) * (
( S*w1_D- R*zeta*w1_n)*cos(t*w1_D) -
( S*zeta*w1_n+ R*w1_D)*sin(t*w1_D)),
exp(-t*zeta*w1_n) * (
(Ss*w1_D-Rs*zeta*w1_n)*cos(t*w1_D) -
(Ss*zeta*w1_n+Rs*w1_D)*sin(t*w1_D))))
Explanation: Putting it all together
First, the homogeneous response
End of explanation
x = x_hom+xi
v = v_hom+dot_xi
Explanation: then, we put together the homogeneous response and the particular integral, in the different intervals
End of explanation
plt.plot(t,x)
plt.title('Displacement')
plt.show()
plt.plot(t,v)
plt.title('Velocity')
plt.show()
plt.plot(t,x)
plt.title('Displacement, zoom around t_1')
plt.xlim((1.16,1.24))
plt.show()
plt.plot(t,v)
plt.title('Velocity, zoom around t_1')
plt.xlim((1.16,1.24))
plt.show()
Explanation: Plot of the response
End of explanation
t_n = 2*pi/w1_n
h = t_n/12.0
Explanation: Numerical Integration
The time step we are going to use is specified in terms of the natural period of vibration, $h=T_n/12$:
End of explanation
def load(t):
return 0 if t > 1.20001 else (
+ 578.703703703703705*t**3
- 972.22222222222222*t**2
+ 366.666666666666667*t
+ 400.)
Explanation: We need a function that returns the value of the load,
End of explanation
stop = 2.0 + h/2
a0fac = 3.0*mass + damp*h/2.0
v0fac = 6.0*mass/h + 3.0*damp
ks = stif + 3.0*damp/h + 6.0*mass/h**2
T, X, V, A = [], [], [], []
x0 = 0
v0 = 0
t0 = 0
p0 = load(t0)
Explanation: Initialization
The final time for the computation, the factors that modify the increment of the load to take into account the initial conditions, the modified stiffness, the containers for the results, the initial values of the results.
End of explanation
while t0 < stop:
a0 = (p0-stif*x0-damp*v0)/mass
for current, vec in zip((t0,x0,v0,a0),(T,X,V,A)):
vec.append(current)
t1 = t0+h
p1 = load(t1)
dp = p1-p0
dx = (dp+a0fac*a0+v0fac*v0)/ks
dv = 3*dx/h -a0*h/2 -3*v0
t0 = t1 ; p0 = p1 ; x0 = x0+dx ; v0 = v0+dv
Explanation: Computational Loop
End of explanation
plt.plot(T,X,label='numerical')
plt.plot(t,x,label='analytical')
plt.xlim((0,2))
plt.legend(loc=3);
plt.plot(T,X,'-o',label='numerical')
plt.plot(t,x,'-x',label='analytical')
plt.title('Displacement, zoom around t_1')
plt.xlim((1.16,1.24))
plt.ylim((0,0.04))
plt.legend(loc=3)
plt.show()
# an IPython incantation that properly formats this notebook
from IPython.display import HTML
HTML(open("00_custom.sav.css", "r").read())
Explanation: The Plot
End of explanation |
10,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations
Step1: Part 1 | Python Code:
# Setup
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Setup
f0 = 1.
omega0 = 2. * np.pi * f0
a = 1.
Explanation: Ordinary Differential Equations : Practical work on the harmonic oscillator
In this example, you will simulate an harmonic oscillator and compare the numerical solution to the closed form one.
Theory
Read about the theory of harmonic oscillators on Wikipedia
Mechanical oscillator
The case of the one dimensional mechanical oscillator leads to the following equation:
$$
m \ddot x + \mu \dot x + k x = m \ddot x_d
$$
Where:
$x$ is the position,
$\dot x$ and $\ddot x$ are respectively the speed and acceleration,
$m$ is the mass,
$\mu$ the damping coefficient,
$k$ the stiffness,
and $\ddot x_d$ the driving acceleration which is null if the oscillator is free.
Canonical equation
Most 1D oscillators follow the same canonical equation:
$$
\ddot x + 2 \zeta \omega_0 \dot x + \omega_0^2 x = \ddot x_d
$$
Where:
$\omega_0$ is the undamped pulsation,
$\zeta$ is damping ratio,
$\ddot x_d$ is the imposed acceleration.
In the case of the mechanical oscillator:
$$
\omega_0 = \sqrt{\dfrac{k}{m}}
$$
$$
\zeta = \dfrac{\mu}{2\sqrt{mk}}
$$
Undampened oscillator
First, you will focus on the case of an undamped free oscillator ($\zeta = 0$, $\ddot x_d = 0$) with the following initial conditions:
$$
\left \lbrace
\begin{split}
x(t = 0) = 1 \
\dot x(t = 0) = 0
\end{split}\right.
$$
The closed form solution is:
$$
x(t) = a\cos \omega_0 t
$$
End of explanation
# Complete here
#t =
#xth =
Explanation: Part 1: theoretical solution
Plot the closed form solution of the undamped free oscillator for 5 periods.
Steps:
Create an array $t$ representing time,
Create a function $x_{th}$ representing the amplitude of the closed form solution,
Plot $x_{th}$ vs $t$.
End of explanation |
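One possible completion of the stub above (a sketch, not the official solution; it reuses f0, omega0 and a from the setup cell):
t = np.linspace(0., 5. / f0, 500)   # five periods of the undamped oscillator
xth = a * np.cos(omega0 * t)        # closed form solution x(t) = a cos(omega0 t)
plt.plot(t, xth)
plt.xlabel('t [s]')
plt.ylabel('x [m]')
plt.show()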
10,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelos de Classificação
Este laboratório irá cobrir os passos para tratar a base de dados de taxa de cliques (click-through rate - CTR) e criar um modelo de classificação para tentar determinar se um usuário irá ou não clicar em um banner.
Para isso utilizaremos a base de dados Criteo Labs que foi utilizado em uma competição do Kaggle.
Nesse notebook
Step1: (1b) Vetores Esparsos
Pontos de dados categóricos geralmente apresentam um pequeno conjunto de OHE não-nulos relativo ao total de possíveis atributos. Tirando proveito dessa propriedade podemos representar nossos dados como vetores esparsos, economizando espaço de armazenamento e cálculos computacionais.
No próximo exercício transforme os vetores com nome precedidos de Dense para vetores esparsos. Utilize a classe SparseVector para representá-los e verifique que ambas as representações retornam o mesmo resultado nos cálculos dos produtos interno.
Use SparseVector(tamanho, *args) para criar um novo vetor esparso onde tamanho é o tamanho do vetor e args pode ser um dicionário, uma lista de tuplas (índice, valor) ou duas arrays separadas de índices e valores ordenados por índice.
Step2: (1c) Atributos OHE como vetores esparsos
Agora vamos representar nossos atributos OHE como vetores esparsos. Utilizando o dicionário sampleOHEDictManual, crie um vetor esparso para cada amostra de nossa base de dados. Todo atributo que ocorre em uma amostra deve ter valor 1.0. Por exemplo, um vetor para um ponto com os atributos 2 e 4 devem ser [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
Step4: (1d) Função de codificação OHE
Vamos criar uma função que gera um vetor esparso codificado por um dicionário de OHE. Ele deve fazer o procedimento similar ao exercício anterior.
Step5: (1e) Aplicar OHE em uma base de dados
Finalmente, use a função da parte (1d) para criar atributos OHE para todos os 3 objetos da base de dados artificial.
Step6: Part 2
Step7: (2b) Dicionário OHE de atributos únicos
Agora, vamos criar um RDD de tuplas para cada (IDatributo, categoria) em sampleDistinctFeats. A chave da tupla é a própria tupla original, e o valor será um inteiro variando de 0 até número de tuplas - 1.
Em seguida, converta essa RDD em um dicionário, utilizando o comando collectAsMap.
Use o comando zipWithIndex seguido de collectAsMap.
Step9: (2c) Criação automática do dicionário OHE
Agora use os códigos dos exercícios anteriores para criar uma função que retorna um dicionário OHE a partir dos atributos categóricos de uma base de dados.
Step10: Part 3
Step11: (3a) Carregando e dividindo os dados
Da mesma forma que no notebook anterior, vamos dividir os dados entre treinamento, validação e teste. Use o método randomSplit com os pesos (weights) e semente aleatória (seed) especificados para criar os conjuntos, então faça o cache de cada RDD, pois utilizaremos cada uma delas com frequência durante esse exercício.
Step13: (3b) Extração de atributos
Como próximo passo, crie uma função para ser aplicada em cada objeto do RDD para gerar uma RDD de tuplas (IDatributo, categoria). Ignore o primeiro campo, que é o rótulo e gere uma lista de tuplas para os atributos seguintes. Utilize o comando enumerate para criar essas tuplas.
Step14: (3c) Crie o dicionário de OHE dessa base de dados
Note que a função parsePoint retorna um objeto em forma de lista (IDatributo, categoria), que é o mesmo formato utilizado pela função createOneHotDict. Utilize o RDD parsedTrainFeat para criar um dicionário OHE.
Step16: (3d) Aplicando OHE à base de dados
Agora vamos usar o dicionário OHE para criar um RDD de objetos LabeledPoint usando atributos OHE. Complete a função parseOHEPoint. Dica
Step19: Visualização 1
Step21: (3e) Atributos não observados
Naturalmente precisaremos aplicar esse mesmo procedimento para as outras bases (validação e teste), porém nessas bases podem existir atributos não observados na base de treino.
Precisamos adaptar a função oneHotEncoding para ignorar os atributos que não existem no dicionário.
Step22: Part 4
Step24: (4b) Log loss
Uma forma de avaliar um classificador binário é através do log-loss, definido como
Step25: (4c) Baseline log loss
Agora, vamos utilizar a função da Parte (4b) para calcular um baseline da métrica de log-loss na nossa base de treino. Uma forma de calcular um baseline é predizer sempre a média dos rótulos observados. Primeiro calcule a média dos rótulos da base e, em seguida, calcule o log-loss médio para a base de treino.
Step27: (4d) Probabilidade da Predição
O modelo gerado na Parte (4a) possui um método chamado predict, porém esse método retorna apenas 0's e 1's. Para calcular a probabilidade de um evento, vamos criar uma função getP que recebe como parâmetro o ponto x, o conjunto de pesos w e o intercept.
Calcule o modelo de regressão linear nesse ponto x e aplique a função sigmoidal $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ para retornar a probabilidade da predição do objeto x.
Step29: (4e) Avalie o modelo
Finalmente, crie uma função evaluateResults que calcula o log-loss médio do modelo em uma base de dados. Em seguida, execute essa função na nossa base de treino.
Step30: (4f) log-loss da validação
Agora aplique o modelo na nossa base de validação e calcule o log-loss médio, compare com o nosso baseline.
Step31: Visualização 2
Step33: Parte 5
Step35: (5b) Criando hashed features
Agora vamos usar essa função hash para criar hashed features para nossa base CTR. Primeiro escreva uma função que usa a função hash da Parte (5a) com numBuckets = $ \scriptsize 2^{15} \approx 33K $ para criar um LabeledPoint com os hashed features armazenados como um SparseVector. Então use esta função para criar uma nova base de treino, validação e teste com hashed features. Dica
Step37: (5c) Esparsidade
Uma vez que temos 33 mil hashed features contra 233 mil OHE, devemos esperar que os atributos OHE sejam mais esparsos. Verifique essa hipótese computando a esparsidade média do OHE e do hashed features.
Note que se você tem um SparseVector chamado sparse, chamar len(sparse) retornará o total de atributos, e não o número de valores não nulos. SparseVector tem atributos indices e values que contém informações sobre quais atributos são não nulos.
Step38: (5d) Modelo logístico com hashed features
Agora treine um modelo de regressão logística para os hashed features. Execute um grid search para encontrar parâmetros adequados para essa base, avaliando o log-loss no conjunto de validação. Nota
Step39: (5e) Avaliando a base de testes
Finalmente, avalie o melhor modelo da Parte (5d) na base de testes. Compare o log-loss do resultado com o log-loss do nosso baseline no conjunto de testes, calculando da mesma forma que na Parte (4f). | Python Code:
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
#from pyspark import SparkContext
#sc =SparkContext()
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
print sampleDataRDD.collect()
# EXERCICIO
#aux = sampleOne+sampleTwo+sampleThree
#aux.sort(cmp=lambda x,y:x[0]<y[0])
#sampleOHEDictManual = dict((aux[k], k) for k in range(len(aux)))
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'mouse')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
print sampleOHEDictManual
#print sampleOHEDictManual
#<COMPLETAR>
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
Explanation: Modelos de Classificação
Este laboratório irá cobrir os passos para tratar a base de dados de taxa de cliques (click-through rate - CTR) e criar um modelo de classificação para tentar determinar se um usuário irá ou não clicar em um banner.
Para isso utilizaremos a base de dados Criteo Labs que foi utilizado em uma competição do Kaggle.
Nesse notebook:
Parte 1: Utilização do one-hot-encoding (OHE) para transformar atributos categóricos em numéricos
Parte 2: Construindo um dicionário OHE
Parte 3: Geração de atributos OHE na base de dados CTR
Visualização 1: Frequência de atributos
Parte 4: Predição de CTR e avaliação da perda logarítimica (logloss)
Visualização 2: Curva ROC
Parte 5: Reduzindo a dimensão dos atributos através de hashing (feature hashing)
Referências de métodos: Spark's Python APIe NumPy Reference
Part 1: Utilização do one-hot-encoding (OHE) para transformar atributos categóricos em numéricos
(1a) One-hot-encoding
Para um melhor entendimento do processo da codificação OHE vamos trabalhar com uma base de dados pequena e sem rótulos. Cada objeto dessa base pode conter três atributos, o primeiro indicando o animal, o segundo a cor e o terceiro qual animal que ele come.
No esquema OHE, queremos representar cada tupla (IDatributo, categoria) através de um atributo binário. Nós podemos fazer isso no Python criando um dicionário que mapeia cada possível tupla em um inteiro que corresponde a sua posição no vetor de atributos binário.
Para iniciar crie um dicionário correspondente aos atributos categóricos da base construída logo abaixo. Faça isso manualmente.
End of explanation
import numpy as np
from pyspark.mllib.linalg import SparseVector
# EXERCICIO
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(len(aDense),dict((k,aDense[k]) for k in range(len(aDense))))#<COMPLETAR>)
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(len(bDense),dict((k,bDense[k]) for k in range(len(aDense))))#<COMPLETAR>)
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
Explanation: (1b) Vetores Esparsos
Pontos de dados categóricos geralmente apresentam um pequeno conjunto de OHE não-nulos relativo ao total de possíveis atributos. Tirando proveito dessa propriedade podemos representar nossos dados como vetores esparsos, economizando espaço de armazenamento e cálculos computacionais.
No próximo exercício transforme os vetores com nome precedidos de Dense para vetores esparsos. Utilize a classe SparseVector para representá-los e verifique que ambas as representações retornam o mesmo resultado nos cálculos dos produtos interno.
Use SparseVector(tamanho, *args) para criar um novo vetor esparso onde tamanho é o tamanho do vetor e args pode ser um dicionário, uma lista de tuplas (índice, valor) ou duas arrays separadas de índices e valores ordenados por índice.
End of explanation
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# EXERCICIO
sampleOneOHEFeatManual = SparseVector(len(sampleOHEDictManual), [(sampleOHEDictManual[i],1.0) for i in sampleOne])#<COMPLETAR>)
sampleTwoOHEFeatManual = SparseVector(len(sampleOHEDictManual), [(sampleOHEDictManual[i],1.0) for i in sampleTwo])#<COMPLETAR>)
sampleThreeOHEFeatManual = SparseVector(len(sampleOHEDictManual), [(sampleOHEDictManual[i],1.0) for i in sampleThree])#<COMPLETAR>)
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
Explanation: (1c) Atributos OHE como vetores esparsos
Agora vamos representar nossos atributos OHE como vetores esparsos. Utilizando o dicionário sampleOHEDictManual, crie um vetor esparso para cada amostra de nossa base de dados. Todo atributo que ocorre em uma amostra deve ter valor 1.0. Por exemplo, um vetor para um ponto com os atributos 2 e 4 devem ser [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
End of explanation
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
aux = sampleOne+sampleTwo+sampleThree
sampleOHEDictManual = dict((aux[k], k) for k in range(len(aux)))
sampleOneOHEFeatManual = SparseVector(len(sampleOHEDictManual), [(sampleOHEDictManual[i],1.0) for i in sampleOne])#<COMPLETAR>)
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
rawFeats.sort(cmp=lambda x,y:x[0]<y[0])
sampleOneOHEFeatManual = SparseVector((numOHEFeats), [(OHEDict[i],1.0) for i in rawFeats])#<COMPLETAR>)
return sampleOneOHEFeatManual#<COMPLETAR>
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual)
# Run oneHotEnoding on sampleOne
print sampleOne,sampleOHEDictManual,"\n\n"
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
Explanation: (1d) Função de codificação OHE
Vamos criar uma função que gera um vetor esparso codificado por um dicionário de OHE. Ele deve fazer o procedimento similar ao exercício anterior.
End of explanation
# EXERCICIO
sampleOHEData = sampleDataRDD.map(lambda x:oneHotEncoding(x, sampleOHEDictManual, numSampleOHEFeats))#<COMPLETAR>
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
Explanation: (1e) Aplicar OHE em uma base de dados
Finalmente, use a função da parte (1d) para criar atributos OHE para todos os 3 objetos da base de dados artificial.
End of explanation
# EXERCICIO
sampleDistinctFeats = (sampleDataRDD
.flatMap(lambda x: x)#<COMPLETAR>
.distinct()#<COMPLETAR>
)
print sampleDistinctFeats.collect()
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
Explanation: Part 2: Construindo um dicionário OHE
(2a) Tupla RDD de (IDatributo, categoria)
Crie um RDD de pares distintos de (IDatributo, categoria). Em nossa base de dados você deve gerar (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Repare que 'black' aparece duas vezes em nossa base de dados mas contribui apenas para um item do RDD: (1, 'black'), por outro lado 'mouse' aparece duas vezes e contribui para dois itens: (0, 'mouse') and (2, 'mouse').
Dica: use flatMap e distinct.
End of explanation
# EXERCICIO
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex()#<COMPLETAR>
.collectAsMap())#<COMPLETAR>)
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
Explanation: (2b) Dicionário OHE de atributos únicos
Agora, vamos criar um RDD de tuplas para cada (IDatributo, categoria) em sampleDistinctFeats. A chave da tupla é a própria tupla original, e o valor será um inteiro variando de 0 até número de tuplas - 1.
Em seguida, converta essa RDD em um dicionário, utilizando o comando collectAsMap.
Use o comando zipWithIndex seguido de collectAsMap.
End of explanation
# EXERCICIO
def createOneHotDict(inputData):
Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
return (inputData
.flatMap(lambda x:x)#<COMPLETAR>
.distinct()#<COMPLETAR>
.zipWithIndex()#<COMPLETAR>
.collectAsMap()#<COMPLETAR>
)
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
Explanation: (2c) Criação automática do dicionário OHE
Agora use os códigos dos exercícios anteriores para criar uma função que retorna um dicionário OHE a partir dos atributos categóricos de uma base de dados.
End of explanation
import os.path
baseDir = os.path.join('Data')
inputPath = os.path.join('Aula04', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
Explanation: Part 3: Parse CTR data and generate OHE features
Antes de começar essa parte, vamos carregar a base de dados e verificar o formato dela.
Repare que o primeiro campo é o rótulo de cada objeto, sendo 0 se o usuário não clicou no banner e 1 caso tenha clicado. O restante dos atributos ou são numéricos ou são strings representando categorias anônimas. Vamos tratar todos os atributos como categóricos.
End of explanation
# EXERCICIO
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()#<COMPLETAR>
rawValidationData.cache()#<COMPLETAR>
rawTestData.cache()#<COMPLETAR>
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
Explanation: (3a) Carregando e dividindo os dados
Da mesma forma que no notebook anterior, vamos dividir os dados entre treinamento, validação e teste. Use o método randomSplit com os pesos (weights) e semente aleatória (seed) especificados para criar os conjuntos, então faça o cache de cada RDD, pois utilizaremos cada uma delas com frequência durante esse exercício.
End of explanation
# EXERCICIO
def parsePoint(point):
Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
#<COMPLETAR>
return list(enumerate(point.split(",")[1:]))
parsedTrainFeat = rawTrainData.map(parsePoint)
from operator import add
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)#<COMPLETAR>
.distinct()#<COMPLETAR>
.map(lambda x: (x[0],1))#<COMPLETAR>
.reduceByKey(add)#<COMPLETAR>
.sortByKey()#<COMPLETAR>
.collect()
)
#numCategories = (parsedTrainFeat
# .flatMap(lambda x: x)
# .distinct()
# .map(lambda x: (x[0],1))
# .reduceByKey(add)
# .sortByKey()
# .collect()
# )
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
Explanation: (3b) Extração de atributos
Como próximo passo, crie uma função para ser aplicada em cada objeto do RDD para gerar uma RDD de tuplas (IDatributo, categoria). Ignore o primeiro campo, que é o rótulo e gere uma lista de tuplas para os atributos seguintes. Utilize o comando enumerate para criar essas tuplas.
End of explanation
# EXERCICIO
ctrOHEDict = createOneHotDict(parsedTrainFeat)#<COMPLETAR>
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
Explanation: (3c) Crie o dicionário de OHE dessa base de dados
Note que a função parsePoint retorna um objeto em forma de lista (IDatributo, categoria), que é o mesmo formato utilizado pela função createOneHotDict. Utilize o RDD parsedTrainFeat para criar um dicionário OHE.
End of explanation
from pyspark.mllib.regression import LabeledPoint
# EXERCICIO
def parseOHEPoint(point, OHEDict, numOHEFeats):
Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
#def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
#<COMPLETAR>
return LabeledPoint(point.split(',')[0],oneHotEncoding(parsePoint(point), OHEDict, numOHEFeats))
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
Explanation: (3d) Aplicando OHE à base de dados
Agora vamos usar o dicionário OHE para criar um RDD de objetos LabeledPoint usando atributos OHE. Complete a função parseOHEPoint. Dica: essa função é uma extensão da função parsePoint criada anteriormente e que usa a função oneHotEncoding.
End of explanation
def bucketFeatByCount(featCount):
    """Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
%matplotlib inline
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
    """Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
Explanation: Visualização 1: Frequência dos Atributos
Vamos agora visualizar o número de vezes que cada um dos 233.286 atributos OHE aparecem na base de treino. Para isso primeiro contabilizamos quantas vezes cada atributo aparece na base, então alocamos cada atributo em um balde de histograma. Os baldes tem tamanhos de potência de 2, então o primeiro balde conta os atributos que aparecem exatamente uma vez ( $ \scriptsize 2^0 $ ), o segundo atributos que aparecem duas vezes ( $ \scriptsize 2^1 $ ), o terceiro os atributos que aparecem de 3 a 4 vezes ( $ \scriptsize 2^2 $ ), o quinto balde é para atributos que ocorrem de cinco a oito vezes ( $ \scriptsize 2^3 $ ) e assim por diante. O gráfico de dispersão abaixo mostra o logarítmo do tamanho dos baldes versus o logarítmo da frequência de atributos que caíram nesse balde.
End of explanation
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
rawFeats.sort(cmp=lambda x,y:x[0]<y[0])
sampleOneOHEFeatManual = SparseVector((numOHEFeats), [(OHEDict[i],1.0) for i in rawFeats])#<COMPLETAR>)
return sampleOneOHEFeatManual#<COMPLETAR>
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
rawFeats.sort(cmp=lambda x,y:x[0]<y[0])
sampleOneOHEFeatManual = SparseVector((numOHEFeats), [(OHEDict[i],1.0) for i in rawFeats if i in OHEDict.keys()])#<COMPLETAR>)
return sampleOneOHEFeatManual#<COMPLETAR>
#<COMPLETAR>
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
Explanation: (3e) Unseen features
Naturally we will need to apply this same procedure to the other datasets (validation and test), but those sets may contain features that were never observed in the training set.
We need to adapt the oneHotEncoding function so that it ignores features that do not exist in the dictionary.
End of explanation
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# EXERCICIO
model0 = LogisticRegressionWithSGD.train(OHETrainData,numIters,stepSize,regParam=regParam,regType=regType,intercept=includeIntercept)#<COMPLETAR>
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
Explanation: Part 4: CTR prediction and log loss evaluation
(4a) Logistic Regression
One classifier we can use on this dataset is logistic regression, which gives us the probability that a banner-click event occurs. We will use the LogisticRegressionWithSGD function to train a model on OHETrainData with the given parameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel.
Then print LogisticRegressionModel.weights and LogisticRegressionModel.intercept to inspect the trained model.
End of explanation
# EXERCICIO
from math import log
def computeLogLoss(p, y):
Calculates the value of log loss for a given probability and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probability between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
epsilon = 10e-12
if p == 0:
p += epsilon
elif p == 1:
p -= epsilon
if y==1:
return -log(p)
else:
return -log(1-p)
#<COMPLETAR>
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
Explanation: (4b) Log loss
One way to evaluate a binary classifier is the log loss, defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is the binary label (0 or 1). Log loss is a widely used evaluation criterion when the goal is to predict rare events. Write a function that computes the log loss and evaluate it on a few sample inputs.
End of explanation
# EXERCICIO
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = OHETrainData.map(lambda lp: lp.label).mean()
print classOneFracTrain
logLossTrBase = OHETrainData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
Explanation: (4c) Baseline log loss
Now let's use the function from Part (4b) to compute a baseline log loss on our training set. One way to build a baseline is to always predict the mean of the observed labels. First compute the mean label of the training set and then compute the mean log loss over the training set.
End of explanation
# EXERCICIO
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
# calculate rawPrediction = w.x + intercept
rawPrediction = x.dot(w) + intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
# calculate (1+e^-rawPrediction)^-1
return 1.0 / (1.0 + exp(-rawPrediction))
trainingPredictions = OHETrainData.map(lambda lp: getP(lp.features, model0.weights, model0.intercept))
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
Explanation: (4d) Predicted probability
The model trained in Part (4a) has a method called predict, but it returns only 0's and 1's. To compute the probability of an event, we will create a function getP that takes as parameters the point x, the weight vector w, and the intercept.
Evaluate the linear model at the point x and apply the sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the predicted probability for the object x.
End of explanation
# EXERCICIO
def evaluateResults(model, data):
Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data.
return (data
.map(lambda lp: (lp.label, getP(lp.features, model.weights, model.intercept)))
.map(lambda (label, p): computeLogLoss(p, label))
.mean()
)
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
Explanation: (4e) Evaluate the model
Finally, create a function evaluateResults that computes the model's mean log loss on a dataset. Then run this function on our training set.
End of explanation
# EXERCICIO
logLossValBase = OHEValidationData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
Explanation: (4f) Validation log loss
Now apply the model to our validation set, compute the mean log loss, and compare it with our baseline.
End of explanation
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
Explanation: Visualization 2: ROC Curve
The ROC curve shows the trade-off between the false positive rate and the true positive rate as we lower the prediction threshold. A random model is represented by the dashed line. Ideally our model should form a curve well above that line.
End of explanation
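Since the rate arrays are already computed above, the curve can also be summarized as a single number. A minimal sketch, assuming falsePositiveRate and truePositiveRate from the previous cell are still in scope:
# Approximate the area under the ROC curve above with the trapezoidal rule.
# Assumes falsePositiveRate and truePositiveRate from the previous cell.
import numpy as np

aucValue = np.trapz(truePositiveRate, falsePositiveRate)
print('Approximate validation AUC = {0:.3f}'.format(aucValue))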
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
(featureID, value) tuples that have hashed to that key.
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# EXERCICIO
# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)#<COMPLETAR>
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)#<COMPLETAR>
sampThreeFourBuckets = hashFunction(4, sampleThree, True)#<COMPLETAR>
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)#<COMPLETAR>
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)#<COMPLETAR>
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)#<COMPLETAR>
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
Explanation: Part 5: Reducing feature dimensionality via feature hashing
(5a) Hash function
Our OHE model produces a numerical representation good enough to be used by classification algorithms that cannot handle categorical data. However, for our dataset this generated a huge number of features (233 thousand), which can make the problem intractable. To reduce the feature space we will use a hash-function trick known as feature hashing.
The hash function we will use in this part of the notebook is already implemented below. Let's apply it to the toy dataset created in Part (1a) to get some intuition about what is happening. Run this function for different values of numBuckets and observe the result.
End of explanation
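To build intuition for how the bucket count controls collisions, a small check like the following can be run first. This is only a sketch and assumes hashFunction and the sample lists (sampleOne, sampleTwo, sampleThree) from the cells above are defined:
# Count hash collisions on the toy samples for different bucket sizes.
# A collision means two distinct (featureID, value) pairs landed in the same bucket.
for numBuckets in [4, 100, 2 ** 15]:
    collisions = 0
    for sample in [sampleOne, sampleTwo, sampleThree]:
        hashed = hashFunction(numBuckets, sample)
        collisions += len(sample) - len(hashed)
    print('numBuckets = {0:6d} -> {1} collisions over the three samples'.format(numBuckets, collisions))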
# EXERCICIO
def parseHashPoint(point, numBuckets):
Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
features.
return LabeledPoint(point.split(",")[0], SparseVector(numBuckets, hashFunction(numBuckets, parsePoint(point))))
#<COMPLETAR>
numBucketsCTR = 2 ** 15
hashTrainData = rawTrainData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda x: parseHashPoint(x,numBucketsCTR))
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
Explanation: (5b) Creating hashed features
Now let's use this hash function to create hashed features for our CTR dataset. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint whose hashed features are stored as a SparseVector. Then use this function to build new training, validation, and test sets with hashed features. Hint: parseHashPoint is similar to parseOHEPoint from Part (3d).
End of explanation
# EXERCICIO
def computeSparsity(data, d, n):
Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
float: The average of the ratio of features in a point to total features.
return (data
.map(lambda lp: len(lp.features.indices))
.sum()
)/(d*n*1.)
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
Explanation: (5c) Sparsity
Since we have 33 thousand hashed features versus 233 thousand OHE features, we should expect the OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and of the hashed features.
Note that if you have a SparseVector called sparse, calling len(sparse) returns the total number of features, not the number of non-zero values. SparseVector has indices and values attributes with information about which features are non-zero.
End of explanation
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# EXERCICIO
stepSizes = [1, 10]
regParams = [1e-6, 1e-3]
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType, intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
Explanation: (5d) Logistic model with hashed features
Now train a logistic regression model on the hashed features. Run a grid search to find suitable parameters for this dataset, evaluating the log loss on the validation set. Note: this may take a few minutes to finish. Use stepSizes of 1 and 10 and regParams of 1e-6 and 1e-3.
End of explanation
# EXERCICIO
# Log loss for the best model from (5d)
logLossValLR0 = evaluateResults(bestModel, hashValidationData)
logLossTest = evaluateResults(bestModel, hashTestData)
# Log loss for the baseline model
logLossTestBaseline = hashTestData.map(lambda lp: computeLogLoss(classOneFracTrain,lp.label)).mean()
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
Explanation: (5e) Evaluating on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare its log loss with the log loss of our baseline on the test set, computed the same way as in Part (4f).
End of explanation |
10,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Basis Functions Test
In this notebook we test our random basis functions against the kernel functions they are designed to approximate. This is a qualitative test in that we just plot the inner product of our bases with the kernel evaluations for an array of inputs, i.e., we compare the shapes of the kernels.
We evaluate these kernels in D > 1, since their Fourier transformation may be a function of D.
Step1: Settings
Step2: Kernel functions
Step3: Basis functions
Step4: Evaluate kernels and bases
Step5: Plot the kernel functions | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
from revrand.basis_functions import RandomRBF, RandomLaplace, RandomCauchy, RandomMatern32, RandomMatern52, \
FastFoodRBF, OrthogonalRBF, FastFoodGM, BasisCat
from revrand import Parameter, Positive
# Style
pl.style.use('ggplot')
pl.rc('font', **{'size': 26})
Explanation: Random Basis Functions Test
In this notebook we test our random basis functions against the kernel functions they are designed to approximate. This is a qualitative test in that we just plot the inner product of our bases with the kernel evaluations for an array of inputs, i.e., we compare the shapes of the kernels.
We evaluate these kernels in D > 1, since their Fourier transformation may be a function of D.
End of explanation
N = 1000
D = 10 # Kernel dimension
nbases = 1500
lenscale = 1.5
mean = -1.
inrange = 4
# Data
x = np.zeros((N, D))
y = np.tile(np.linspace(-inrange, inrange, N), (D, 1)).T
Explanation: Settings
End of explanation
def dist(power=2):
return (np.abs((x - y)**power)).sum(axis=1)
# RBF
def kern_rbf():
return np.exp(- dist() / (2 * lenscale**2))
# Cauchy
def kern_cau():
return 1. / (1 + dist() / lenscale**2)
# Laplace
def kern_lap():
return np.exp(- dist(power=1) / lenscale)
# Matern 3/2
def kern_m32():
dterm = np.sqrt(3) * np.sqrt(dist()) / lenscale
return (1 + dterm) * np.exp(-dterm)
# Matern 5/2
def kern_m52():
dterm = np.sqrt(5) * np.sqrt(dist()) / lenscale
return (1 + dterm + dterm**2 / 3.) * np.exp(-dterm)
def kern_combo():
return 0.5 * kern_lap() + 2 * kern_rbf()
Explanation: Kernel functions
End of explanation
rbf = RandomRBF(Xdim=D, nbases=nbases)
cau = RandomCauchy(Xdim=D, nbases=nbases)
lap = RandomLaplace(Xdim=D, nbases=nbases)
m32 = RandomMatern32(Xdim=D, nbases=nbases)
m52 = RandomMatern52(Xdim=D, nbases=nbases)
ff_rbf = FastFoodRBF(Xdim=D, nbases=nbases)
or_rbf = OrthogonalRBF(Xdim=D, nbases=nbases)
r_lap = Parameter(0.5, Positive())
r_rbf = Parameter(2., Positive())
combo = RandomLaplace(Xdim=D, nbases=nbases, regularizer=r_lap) + \
RandomRBF(Xdim=D, nbases=nbases, regularizer=r_rbf)
# Get expected kernel evaluations
def radialbasis2kern(basis):
V, _ = basis.regularizer_diagonal(x)
l = [lenscale] * len(basis.bases) if isinstance(basis, BasisCat) else [lenscale]
return (basis.transform(x, *l) * basis.transform(y, *l)).dot(V)
Explanation: Basis functions
End of explanation
k_rbf = kern_rbf()
b_rbf = radialbasis2kern(rbf)
k_cau = kern_cau()
b_cau = radialbasis2kern(cau)
k_lap = kern_lap()
b_lap = radialbasis2kern(lap)
k_m32 = kern_m32()
b_m32 = radialbasis2kern(m32)
k_m52 = kern_m52()
b_m52 = radialbasis2kern(m52)
f_rbf = radialbasis2kern(ff_rbf)
o_rbf = radialbasis2kern(or_rbf)
k_combo = kern_combo()
f_combo = radialbasis2kern(combo)
Explanation: Evaluate kernels and bases
End of explanation
distfrom00 = np.sign(y[:, 0]) * np.sqrt(dist(power=2))
def plotkern(k1, k2, k1_label=None, k2_label=None):
pl.figure(figsize=(15, 10))
pl.plot(distfrom00, k1, 'b', linewidth=3, alpha=0.5, label=k1_label)
pl.plot(distfrom00, k2, 'r--', linewidth=3, alpha=0.7, label=k2_label)
pl.grid(True)
pl.axis('tight')
pl.xlabel('$\| x - y \|$')
pl.ylabel('$k(x - y)$')
pl.legend()
pl.show()
plotkern(k_rbf, b_rbf, 'RBF kernel', 'RBF basis')
plotkern(k_cau, b_cau, 'Cauchy kernel', 'Cauchy basis')
plotkern(k_lap, b_lap, 'Laplace kernel', 'Laplace basis')
plotkern(k_m32, b_m32, 'Matern32 kernel', 'Matern32 basis')
plotkern(k_m52, b_m52, 'Matern52 kernel', 'Matern52 basis')
plotkern(k_rbf, f_rbf, 'RBF kernel', 'FastFood RBF basis')
plotkern(k_rbf, o_rbf, 'RBF kernel', 'Orthogonal RBF basis')
plotkern(k_combo, f_combo, 'Combo kernel', 'Combo basis')
Explanation: Plot the kernel functions
End of explanation |
10,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Equation for Neuron Paper
A dendritic segment can robustly classify a pattern by subsampling a small number of cells from a larger population. Assuming a random distribution of patterns, the exact probability of a false match is given by the following equation
Step1: where n refers to the size of the population of cells, a is the number of active cells at any instance in time, s is the number of actual synapses on a dendritic segment, and θ is the threshold for NMDA spikes. Following (Ahmad & Hawkins, 2015), the numerator counts the number of possible ways θ or more cells can match a fixed set of s synapses. The denominator counts the number of ways a cells out of n can be active.
Example usage
Step2: Table 1B
Step3: Table 1C
Step4: Table 1D
Step5: Charts for SDR Paper
The following sections calculates the numbers for some of the SDR paper charts.
Importance of large n
Step6: Small sparsity is insufficient
Step7: A small subsample can be very reliable (but not too small)
Step8: Impact of noise on false negatives
Step9: Charts for BAMI | Python Code:
oxp = Symbol("Omega_x'")
b = Symbol("b")
n = Symbol("n")
theta = Symbol("theta")
s = Symbol("s")
a = Symbol("a")
subsampledOmega = (binomial(s, b) * binomial(n - s, a - b)) / binomial(n, a)
subsampledFpF = Sum(subsampledOmega, (b, theta, s))
subsampledOmegaSlow = (binomial(s, b) * binomial(n - s, a - b))
subsampledFpFSlow = Sum(subsampledOmegaSlow, (b, theta, s))/ binomial(n, a)
display(subsampledFpF)
display(subsampledFpFSlow)
Explanation: Equation for Neuron Paper
A dendritic segment can robustly classify a pattern by subsampling a small number of cells from a larger population. Assuming a random distribution of patterns, the exact probability of a false match is given by the following equation:
End of explanation
display("n=10000, a=64, s=24, theta=12", subsampledFpF.subs(s,24).subs(n, 10000).subs(a, 64).subs(theta, 12).evalf())
display("n=10000, a=300, s=24, theta=12", subsampledFpFSlow.subs(theta, 12).subs(s, 24).subs(n, 10000).subs(a, 300).evalf())
display("n=2048, a=400, s=40, theta=20", subsampledFpF.subs(theta, 15).subs(s, 30).subs(n, 10000).subs(a, 300).evalf())
Explanation: where n refers to the size of the population of cells, a is the number of active cells at any instance in time, s is the number of actual synapses on a dendritic segment, and θ is the threshold for NMDA spikes. Following (Ahmad & Hawkins, 2015), the numerator counts the number of possible ways θ or more cells can match a fixed set of s synapses. The denominator counts the number of ways a cells out of n can be active.
Example usage
End of explanation
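As a cross-check of the sympy expression, the same probability can be computed numerically with plain binomial coefficients. This is only a sketch, assumes SciPy is available, and mirrors the parameters of the first display call above:
# Independent numerical check of the subsampled false-match probability.
# Mirrors subsampledFpF with n=10000, a=64, s=24, theta=12.
from scipy.special import comb

def false_match_probability(n, a, s, theta):
    # number of ways theta..s of the s synapses can fall among the a active cells
    numer = sum(comb(s, b, exact=True) * comb(n - s, a - b, exact=True)
                for b in range(theta, s + 1))
    # divided by the number of ways to choose the a active cells out of n
    return numer / float(comb(n, a, exact=True))

print(false_match_probability(10000, 64, 24, 12))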
T1B = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(theta,s).evalf()
print "n=100000, a=2000, theta=s"
display("s=6",T1B.subs(s,6).evalf())
display("s=8",T1B.subs(s,8).evalf())
display("s=10",T1B.subs(s,10).evalf())
Explanation: Table 1B
End of explanation
T1C = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(s,2*theta).evalf()
print "n=10000, a=300, s=2*theta"
display("theta=6",T1C.subs(theta,6).evalf())
display("theta=8",T1C.subs(theta,8).evalf())
display("theta=10",T1C.subs(theta,10).evalf())
display("theta=12",T1C.subs(theta,12).evalf())
Explanation: Table 1C
End of explanation
m = Symbol("m")
T1D = subsampledFpF.subs(n, 100000).subs(a, 2000).subs(s,2*m*theta).evalf()
print "n=100000, a=2000, s=2*m*theta"
display("theta=10, m=2",T1D.subs(theta,10).subs(m,2).evalf())
display("theta=10, m=4",T1D.subs(theta,10).subs(m,4).evalf())
display("theta=10, m=6",T1D.subs(theta,10).subs(m,6).evalf())
display("theta=20, m=6",T1D.subs(theta,20).subs(m,6).evalf())
Explanation: Table 1D
End of explanation
eq1 = subsampledFpFSlow.subs(s, 64).subs(theta, 12)
print "a=64 cells active, s=16 synapses on segment, dendritic threshold is theta=8\n"
errorList = []
nList = []
for n0 in range(300,20100,200):
error = eq1.subs(n, n0).subs(a,64).evalf()
errorList += [error]
nList += [n0]
print "population n = %5d, sparsity = %5.2f%%, probability of false match = "%(n0, 64/n0), error
print errorList
print nList
Explanation: Charts for SDR Paper
The following sections calculate the numbers for some of the SDR paper charts.
Importance of large n
End of explanation
print ("2% sparsity with n=400")
print subsampledFpFSlow.subs(s, 4).subs(a, 8).subs(theta, 2).subs(n,400).evalf()
print ("2% sparsity with n=4000")
print subsampledFpFSlow.subs(s, 4).subs(a, 400).subs(theta, 2).subs(n,4000).evalf()
Explanation: Small sparsity is insufficient
End of explanation
eq2 = subsampledFpFSlow.subs(n, 10000).subs(a, 300)
print "a=200 cells active out of population of n=10000 cells\n"
errorList = []
sList = []
for s0 in range(2,31,1):
print "synapses s = %3d, theta = s/2 = %3d, probability of false match = "%(s0,s0/2), eq2.subs(s, s0).subs(theta,s0/2).evalf()
errorList += [eq2.subs(s, s0).subs(theta,s0/2).evalf()]
sList += [s0]
print errorList
print sList
Explanation: A small subsample can be very reliable (but not too small)
End of explanation
b = Symbol("b")
v = Symbol("v")
theta = Symbol("theta")
s = Symbol("s")
a = Symbol("a")
overlapSetNoise = (binomial(s, b) * binomial(a - s, v - b)) / binomial(a, v)
noiseFN = Sum(overlapSetNoise, (b, s-theta+1, s))
eqn = noiseFN.subs(s, 30).subs(a, 128)
print "a=128 cells active with segment containing s=30 synapses (n doesn't matter here)\n"
for t in range(8,20,4):
print "theta = ",t
errorList = []
noiseList = []
noisePct = 0.05
while noisePct <= 0.85:
noise = int(round(noisePct*128,0))
errorList += [eqn.subs(v, noise).subs(theta,t).evalf()]
noiseList += [noise/128.0]
noisePct += 0.05
print errorList
print noiseList
Explanation: Impact of noise on false negatives
End of explanation
w0 = 32
print "a=%d cells active, s=%d synapses on segment, dendritic threshold is s/2\n" % (w0,w0)
errorList = []
nList = []
for n0 in range(50,500,50):
w0 = n0/2
eq1 = subsampledFpFSlow.subs(s, w0).subs(theta, w0/2)
error = eq1.subs(n, n0).subs(a,w0).evalf()
errorList += [error]
nList += [n0]
print "population n = %5d, sparsity = %7.4f%%, probability of false match = "%(n0, float(w0)/n0), error
print errorList
print nList
Explanation: Charts for BAMI
End of explanation |
10,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Spots
Let's add one spot to each of our stars in the binary.
NOTE
Step3: As a shortcut, we can also call add_spot directly.
Step4: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
NOTE
Step5: To see the spot, let's compute and plot the protomesh.
Step6: Spot Corotation
NOTE
Step7: At time=t0=0, we can see that the spot is where defined
Step8: At a later time, the spot is still technically at the same coordinates, but longitude of 0 no longer corresponds to pointing to the companion star. The coordinate system has rotated along with the asyncronous rotation of the star.
Step9: Since the syncpar was set to 1.5, one full orbit later the star (and the spot) has made an extra half-rotation. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_feature('spot', component='primary', feature='spot01')
Explanation: Adding Spots
Let's add one spot to each of our stars in the binary.
NOTE: the parameter name was changed from "colon" to "long" in 2.0.2. For all further releases in 2.0.X, the "colon" parameter still exists but is read-only. Starting with 2.1.0, the "colon" parameter will no longer exist.
A spot is a feature, and needs to be attached directly to a component upon creation. Providing a tag for 'feature' is entirely optional - if one is not provided it will be created automatically.
End of explanation
b.add_spot(component='secondary', feature='spot02')
Explanation: As a shortcut, we can also call add_spot directly.
End of explanation
print b['spot01']
b.set_value(qualifier='relteff', feature='spot01', value=0.9)
b.set_value(qualifier='radius', feature='spot01', value=30)
b.set_value(qualifier='colat', feature='spot01', value=45)
b.set_value(qualifier='long', feature='spot01', value=90)
Explanation: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
NOTE: the parameter name was changed from "colon" to "long" in 2.0.2. For all further releases in 2.0.X, the "colon" parameter still exists but is read-only. Starting with 2.1.0, the "colon" parameter will no longer exist.
End of explanation
b.run_compute(protomesh=True)
axs, artists = b.plot(component='primary', facecolor='teffs', facecmap='YlOrRd', edgecolor=None)
Explanation: To see the spot, let's compute and plot the protomesh.
End of explanation
b.set_value('syncpar@primary', 1.5)
b.add_dataset('mesh', times=[0,0.25,0.5,0.75,1.0])
b.run_compute(irrad_method='none')
Explanation: Spot Corotation
NOTE: spots failed to corotate correctly before version 2.0.2.
The positions (colat, long) of a spot are defined at t0 (note: t0@system, not necessarily t0_perpass or t0_supconj). If the stars are not synchronous, then the spots will corotate with the star. To illustrate this, let's set the syncpar > 1 and plot the mesh at three different phases from above.
End of explanation
print "t0 = {}".format(b.get_value('t0', context='system'))
axs, artists = b.plot(time=0, facecolor='teffs', facecmap='YlOrRd', edgecolor=None, y='zs')
Explanation: At time=t0=0, we can see that the spot is where it was defined: 45 degrees south of the north pole and 90 degrees longitude (where longitude of 0 is defined as pointing towards the companion star at t0).
End of explanation
axs, artists = b.plot(time=0.25, facecolor='teffs', facecmap='YlOrRd', edgecolor=None, y='zs')
axs, artists = b.plot(time=0.5, facecolor='teffs', facecmap='YlOrRd', edgecolor=None, y='zs')
axs, artists = b.plot(time=0.75, facecolor='teffs', facecmap='YlOrRd', edgecolor=None, y='zs')
Explanation: At a later time, the spot is still technically at the same coordinates, but longitude of 0 no longer corresponds to pointing to the companion star. The coordinate system has rotated along with the asynchronous rotation of the star.
End of explanation
axs, artists = b.plot(time=1.0, facecolor='teffs', facecmap='YlOrRd', edgecolor=None, y='zs')
Explanation: Since the syncpar was set to 1.5, one full orbit later the star (and the spot) has made an extra half-rotation.
End of explanation |
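As a quick numerical check of that statement, the extra rotation of the spot's coordinate frame follows directly from syncpar. A small sketch, assuming the default one-day orbital period so that time=1.0 corresponds to one full orbit:
# Extra rotation of the primary (and its spot) accumulated over one orbital period.
syncpar = b.get_value('syncpar@primary')
extra_rotation_deg = 360.0 * (syncpar - 1.0)  # degrees per orbit
print("extra rotation after one orbit: {0:.0f} degrees".format(extra_rotation_deg))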
10,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Übungsblatt 11
Präsenzaufgaben
Aufgabe 1 Grammatikinduktion
In dieser Aufgabe soll vollautomatisch aus Daten (Syntaxbäumen) eine probabilistische, kontextfreie Grammatik erzeugt werden.
Füllen Sie die Lücken und versuchen Sie mithilfe Ihrer automatisch erstellten Grammatik die folgenden Sätze zu parsen
Step2: Aufgabe 2 Informationsextraktion per Syntaxanalyse
Gegenstand dieser Aufgabe ist eine anwendungsnahe Möglichkeit, Ergebnisse einer Syntaxanalyse weiterzuverarbeiten. Aus den syntaktischen Abhängigkeiten eines Textes soll (unter Zuhilfenahme einiger Normalisierungsschritte) eine semantische Repräsentation der im Text enthaltenen Informationen gewonnen werden.
Für die syntaktische Analyse soll der DependencyParser der Stanford CoreNLP Suite verwendet werden. Die semantische Repräsentation eines Satzes sei ein zweistelliges, logisches Prädikat, dessen Argumente durch Subjekt und Objekt gefüllt sind. (Bei Fehlen eines der beiden Elemente soll None geschrieben werden.)
Folgendes Beispiel illustriert das gewünschte Ergebnis
Step3: Hausaufgaben
Aufgabe 3 Parent Annotation
Parent Annotation kann die Performanz einer CFG wesentlich verbessern. Schreiben Sie eine Funktion, die einen gegebenen Syntaxbaum dieser Optimierung unterzieht. Auf diese Art und Weise transformierte Bäume können dann wiederum zur Grammatikinduktion verwendet werden.
parentHistory soll dabei die Anzahl der Vorgänger sein, die zusätzlich zum direkten Elternknoten berücksichtigt werden. (Kann bei der Lösung der Aufgabe auch ignoriert werden.)
parentChar soll ein Trennzeichen sein, das bei den neuen Knotenlabels zwischen dem ursprünglichen Knotenlabel und der Liste von Vorgängern eingefügt wird.
Step5: Aufgabe 4 Mehr Semantik für IE
Zusätzlich zu den in Aufgabe 2 behandelten Konstruktionen sollen jetzt auch negierte und komplexe Sätze mit Konjunktionen sinnvoll verarbeitet werden.
Eingabe | Python Code:
test_sentences = [
"the men saw a car .",
"the woman gave the man a book .",
"she gave a book to the man .",
"yesterday , all my trouble seemed so far away ."
]
import nltk
from nltk.corpus import treebank
from nltk.grammar import ProbabilisticProduction, PCFG
# Production count: the number of times a given production occurs
pcount = {}
# LHS-count: counts the number of times a given lhs occurs
lcount = {}
for tree in treebank.parsed_sents():
for prod in tree.productions():
pcount[prod] = pcount.get(prod, 0) + 1
lcount[prod.lhs()] = lcount.get(prod.lhs(), 0) + 1
productions = [
ProbabilisticProduction(
p.lhs(), p.rhs(),
prob=pcount[p] / lcount[p.lhs()]
)
for p in pcount
]
start = nltk.Nonterminal('S')
grammar = PCFG(start, productions)
parser = nltk.ViterbiParser(grammar)
from IPython.display import display
for s in test_sentences:
for t in parser.parse(s.split()):
display(t)
Explanation: Exercise Sheet 11
In-Class Exercises
Exercise 1: Grammar Induction
In this exercise, a probabilistic context-free grammar is to be induced fully automatically from data (syntax trees).
Fill in the gaps and try to parse the following sentences with the grammar you generated automatically:
End of explanation
from nltk.parse.stanford import StanfordDependencyParser
PATH_TO_CORE = "/pfad/zu/stanford-corenlp-full-2017-06-09"
jar = PATH_TO_CORE + '/' + "stanford-corenlp-3.8.0.jar"
model = PATH_TO_CORE + '/' + "stanford-corenlp-3.8.0-models.jar"
dep_parser = StanfordDependencyParser(
jar, model,
model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz"
)
from collections import defaultdict
def generate_predicates_for_sentence(sentence):
verbs = set()
sbj = {}
obj = {}
sbj_candidates = defaultdict(list)
case = {}
relcl_triples = []
for result in dep_parser.raw_parse(sentence):
for triple in result.triples():
# print(*triple)
if triple[1] == "nsubj":
# whenever we find a subject, its head can be called verb
# if something is added twice it does not matter --> sets
# so it is better to add too often than not enough !
# remember that nouns can be "verbs" in that sense together with copula
verbs.add(triple[0])
sbj[triple[0]] = triple[2]
if triple[1] == "dobj" or triple[1] == "nsubjpass":
# everything that has a direct object should be called a verb as well
verbs.add(triple[0])
obj[triple[0]] = triple[2]
if triple[0][1].startswith('V'):
# everything with a 'verb' as part of speech can be called a verb
verbs.add(triple[0])
if triple[1] == "nmod":
sbj_candidates[triple[0]].append(triple[2])
if triple[1] == "case":
case[triple[0]] = triple[2][0]
if triple[1] == "acl:relcl":
relcl_triples.append(triple)
for triple in relcl_triples:
if triple[2] not in sbj or sbj[triple[2]][1] in ["WP", "WDT"]:
sbj[triple[2]] = triple[0]
else:
obj[triple[2]] = triple[0]
for v in verbs:
if v not in sbj:
if v in sbj_candidates:
for cand in sbj_candidates[v]:
if case[cand] == "by":
sbj[v] = cand
predicates = []
for v in verbs:
if v in sbj:
subject = sbj[v]
else:
subject = ("None",)
if v in obj:
object = obj[v]
else:
object = ("None",)
predicates.append(
v[0] + "(" + subject[0] + ", " + object[0] + ")"
)
return predicates
for pred in generate_predicates_for_sentence(
"The man who saw the raven laughed out loud."
):
print(pred)
def generate_predicates_for_text(text):
predicates = []
for sent in nltk.tokenize.sent_tokenize(text):
predicates.extend(generate_predicates_for_sentence(sent))
return predicates
text =
I shot an elephant in my pajamas.
The elephant was seen by a giraffe.
The bird I need is a raven.
The man who saw the raven laughed out loud.
for pred in generate_predicates_for_text(text):
print(pred)
Explanation: Exercise 2: Information Extraction via Syntactic Analysis
The subject of this exercise is a practical way to further process the results of a syntactic analysis. From the syntactic dependencies of a text, a semantic representation of the information contained in the text is to be derived (with the help of a few normalization steps).
The DependencyParser of the Stanford CoreNLP suite is to be used for the syntactic analysis. The semantic representation of a sentence is a two-place logical predicate whose arguments are filled by the subject and the object. (If one of the two elements is missing, None should be written.)
The following example illustrates the desired result:
Input:
I shot an elephant in my pajamas.
The elephant was seen by a giraffe in the desert.
The bird I need is a raven.
The man who saw the raven laughed out loud.
Output:
shot(I, elephant)
seen(giraffe, elephant)
need(I, bird)
raven(bird, None)
saw(man, raven)
laughed(man, None)
Note that PATH_TO_CORE in the following code cell must be adapted to your system!
End of explanation
def parent_annotation(tree, parentHistory=0, parentChar="^"):
pass
test_tree = nltk.Tree(
"S",
[
nltk.Tree("NP", [
nltk.Tree("DET", []),
nltk.Tree("N", [])
]),
nltk.Tree("VP", [
nltk.Tree("V", []),
nltk.Tree("NP", [
nltk.Tree("DET", []),
nltk.Tree("N", [])
])
])
]
)
parent_annotation(
test_tree
)
Explanation: Homework
Exercise 3: Parent Annotation
Parent annotation can substantially improve the performance of a CFG. Write a function that applies this optimization to a given syntax tree. Trees transformed in this way can then in turn be used for grammar induction.
parentHistory is the number of ancestors that are taken into account in addition to the direct parent node. (It may also be ignored when solving the exercise.)
parentChar is a separator character that is inserted in the new node labels between the original node label and the list of ancestors.
End of explanation
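One possible starting point for the parent annotation homework described above — only a sketch, not a reference solution: it covers the parentHistory=0 case, and the helper name _annotate is an assumption of this sketch.
def parent_annotation_sketch(tree, parentChar="^"):
    # Work on a deep copy so the original tree is left untouched.
    annotated = tree.copy(deep=True)
    _annotate(annotated, None, parentChar)
    return annotated

def _annotate(tree, parent_label, parentChar):
    label = tree.label()
    if parent_label is not None:
        tree.set_label(label + parentChar + parent_label)
    for child in tree:
        if isinstance(child, nltk.Tree):
            _annotate(child, label, parentChar)

parent_annotation_sketch(test_tree)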
def generate_predicates_for_sentence(sentence):
pass
def generate_predicates_for_text(text):
pass
text =
I see an elephant.
You didn't see the elephant.
Peter saw the elephant and drank wine.
Explanation: Exercise 4: More Semantics for IE
In addition to the constructions handled in Exercise 2, negated sentences and complex sentences with conjunctions should now also be processed in a meaningful way.
Input:
I see an elephant.
You didn't see the elephant.
Peter saw the elephant and drank wine.
Desired output:
see(I, elephant)
not_see(You, elephant)
saw(Peter, elephant)
drank(Peter, wine)
The easiest approach is to copy your current version from above and then add your extensions here.
End of explanation |
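One possible direction for this extension, shown only as a rough sketch rather than a reference solution: on top of the subject/object logic from Exercise 2, record the neg and conj dependency relations, prefix negated predicates with not_, and let conjoined verbs inherit the subject of their head verb. The sketch reuses dep_parser from above, and the function name generate_predicates_sketch is my own choice.
# Rough sketch: minimal handling of negation and verb conjunction.
def generate_predicates_sketch(sentence):
    verbs, sbj, obj = set(), {}, {}
    negated = set()
    conj_head = {}  # conjoined verb -> its head verb (to share the subject)
    for result in dep_parser.raw_parse(sentence):
        for head, rel, dep in result.triples():
            if rel == "nsubj":
                verbs.add(head)
                sbj[head] = dep
            if rel == "dobj":
                verbs.add(head)
                obj[head] = dep
            if rel == "neg":
                negated.add(head)
            if rel == "conj" and head[1].startswith("V") and dep[1].startswith("V"):
                verbs.add(dep)
                conj_head[dep] = head
    predicates = []
    for v in verbs:
        subject = sbj.get(v, sbj.get(conj_head.get(v), ("None",)))
        obj_ = obj.get(v, ("None",))
        name = ("not_" if v in negated else "") + v[0]
        predicates.append("{0}({1}, {2})".format(name, subject[0], obj_[0]))
    return predicates

for sentence in ["I see an elephant.",
                 "You didn't see the elephant.",
                 "Peter saw the elephant and drank wine."]:
    for pred in generate_predicates_sketch(sentence):
        print(pred)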
10,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
mnist.train.images.shape[1]
# Size of the encoding layer (the hidden layer)
encoding_dim = 16 # feel free to change this value
image_dim = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_dim), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_dim), name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_ , encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_dim, activation=None)
# Sigmoid output from logits
decoded = tf.sigmoid(logits, name='outputs')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=20, sharex=True, sharey=True, figsize=(40,4))
in_imgs = mnist.test.images[:20]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
10,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Movie Review Sentiment with BERT on TF Hub
If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT
Step1: In addition to the standard libraries we imported above, we'll need to install BERT's python package.
Step2: Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.
Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.
Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
Step3: Data
First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from this Tensorflow tutorial.
Step4: To keep training fast, we'll take a sample of 5000 train and test examples, respectively.
Step5: For us, our input data is the 'sentence' column and our label is the 'polarity' column (0, 1 for negative and positive, respecitvely)
Step6: Data Preprocessing
We'll need to transform our data into a format BERT understands. This involves two steps. First, we create InputExample's using the constructor provided in the BERT library.
text_a is the text we want to classify, which in this case, is the Request field in our Dataframe.
text_b is used if we're training a model to understand the relationship between sentences (i.e. is text_b a translation of text_a? Is text_b an answer to the question asked by text_a?). This doesn't apply to our task, so we can leave text_b blank.
label is the label for our example, i.e. True, False
Step8: Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library)
Step9: Great--we just learned that the BERT model we're using expects lowercase data (that's what stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces
Step10: Using our tokenizer, we'll call run_classifier.convert_examples_to_features on our InputExamples to convert them into features BERT understands.
Step12: Creating a model
Now that we've prepared our data, let's focus on building a model. create_model does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called fine-tuning.
Step15: Next we'll wrap our model function in a model_fn_builder function that adapts our model to work for training, evaluation, and prediction.
Step16: Next we create an input builder function that takes our training feature set (train_features) and produces a generator. This is a pretty standard design pattern for working with Tensorflow Estimators.
Step17: Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
Step18: Now let's use our test data to see how well our model did
Step19: Now let's write code to make predictions on new sentences
Step20: Voila! We have a sentiment classifier! | Python Code:
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
Explanation: Predicting Movie Review Sentiment with BERT on TF Hub
If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that’s totally transformed what’s state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.
Now that BERT's been added to TF Hub as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, finetuning BERT can provide both an accuracy boost and faster training time in many cases.
Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from this colab notebook. Let's get started!
End of explanation
!pip install bert-tensorflow
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
Explanation: In addition to the standard libraries we imported above, we'll need to install BERT's python package.
End of explanation
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = 'OUTPUT_DIR_NAME'#@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = False #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = True #@param {type:"boolean"}
BUCKET = 'BUCKET_NAME' #@param {type:"string"}
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
# Doesn't matter if the directory didn't exist
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
Explanation: Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.
Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.
Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
End of explanation
from tensorflow import keras
import os
import re
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
train, test = download_and_load_datasets()
Explanation: Data
First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from this Tensorflow tutorial.
End of explanation
train = train.sample(5000)
test = test.sample(5000)
train.columns
Explanation: To keep training fast, we'll take a sample of 5000 train and test examples, respectively.
End of explanation
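As an optional sanity check (not part of the original walkthrough), it's worth confirming that the sample is roughly balanced between positive and negative reviews:
# Optional: check the class balance of our sampled data.
print(train['polarity'].value_counts())
print(test['polarity'].value_counts())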
DATA_COLUMN = 'sentence'
LABEL_COLUMN = 'polarity'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [0, 1]
Explanation: For us, our input data is the 'sentence' column and our label is the 'polarity' column (0, 1 for negative and positive, respectively)
End of explanation
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
Explanation: Data Preprocessing
We'll need to transform our data into a format BERT understands. This involves two steps. First, we create InputExample's using the constructor provided in the BERT library.
text_a is the text we want to classify, which in this case is the 'sentence' column in our DataFrame.
text_b is used if we're training a model to understand the relationship between sentences (i.e. is text_b a translation of text_a? Is text_b an answer to the question asked by text_a?). This doesn't apply to our task, so we can leave text_b blank.
label is the label for our example, i.e. True, False
End of explanation
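As a quick, optional sanity check (a hypothetical inspection snippet, not part of the original pipeline), we can peek at one of the wrapped examples:
# Optional: inspect the first wrapped InputExample.
example = train_InputExamples.iloc[0]
print(example.text_a[:100])  # the review text
print(example.label)         # 0 (negative) or 1 (positive)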
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
Explanation: Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):
Lowercase our text (if we're using a BERT lowercase model)
Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])
Break words into WordPieces (i.e. "calling" -> ["call", "##ing"])
Map our words to indexes using a vocab file that BERT provides
Add special "CLS" and "SEP" tokens (see the readme)
Append "index" and "segment" tokens to each input (see the BERT paper)
Happily, we don't have to worry about most of these details.
To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
End of explanation
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
Explanation: Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
End of explanation
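To see the "map words to indexes" step mentioned above in action, here's an illustrative aside (not required for the pipeline) that runs the same tokens through convert_tokens_to_ids:
# Illustrative aside: map WordPiece tokens to vocab indexes. This is what
# convert_examples_to_features does for us under the hood, plus padding and special tokens.
tokens = tokenizer.tokenize("This here's an example of using the BERT tokenizer")
print(tokenizer.convert_tokens_to_ids(tokens))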
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
Explanation: Using our tokenizer, we'll call run_classifier.convert_examples_to_features on our InputExamples to convert them into features BERT understands.
End of explanation
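If you're curious what one of those features looks like, here's a small optional inspection; each feature carries the padded token ids, a mask marking real tokens, segment ids, and the label:
# Optional: inspect a single converted feature (sequences are padded to MAX_SEQ_LENGTH).
feature = train_features[0]
print(feature.input_ids[:20])    # token ids, starting with [CLS]
print(feature.input_mask[:20])   # 1 for real tokens, 0 for padding
print(feature.segment_ids[:20])  # all 0s here, since we only use text_a
print(feature.label_id)          # 0 or 1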
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_outputs" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
# Create our own classification layer to tune for our sentiment data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabilities.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
Explanation: Creating a model
Now that we've prepared our data, let's focus on building a model. create_model does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called fine-tuning.
End of explanation
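If you wanted pure feature extraction instead of fine-tuning (keeping BERT's weights frozen and training only the new classification layer), the only change in the function above would be the trainable flag. A hypothetical variant, shown for illustration only:
# Hypothetical variant: load BERT frozen for feature extraction instead of fine-tuning.
# Everything else in create_model stays the same; only the new output layer gets trained.
frozen_bert_module = hub.Module(
    BERT_MODEL_HUB,
    trainable=False)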
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps):
"""Returns `model_fn` closure for the Estimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for the Estimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(
loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predicted_labels)
auc = tf.metrics.auc(
label_ids,
predicted_labels)
recall = tf.metrics.recall(
label_ids,
predicted_labels)
precision = tf.metrics.precision(
label_ids,
predicted_labels)
true_pos = tf.metrics.true_positives(
label_ids,
predicted_labels)
true_neg = tf.metrics.true_negatives(
label_ids,
predicted_labels)
false_pos = tf.metrics.false_positives(
label_ids,
predicted_labels)
false_neg = tf.metrics.false_negatives(
label_ids,
predicted_labels)
return {
"eval_accuracy": accuracy,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
# Compute # train and warmup steps from batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
Explanation: Next we'll wrap our model function in a model_fn_builder function that adapts our model to work for training, evaluation, and prediction.
End of explanation
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
Explanation: Next we create an input builder function that takes our training feature set (train_features) and produces a generator. This is a pretty standard design pattern for working with Tensorflow Estimators.
End of explanation
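For reference, the pattern the Estimator expects is just a function (optionally receiving params) that returns a tf.data.Dataset of features. Here's a toy, hypothetical sketch of the same idea using dummy tensors; bert.run_classifier.input_fn_builder builds something similar from our real features:
# Toy, hypothetical input_fn illustrating the Estimator input pattern.
def toy_input_fn(params):
    batch_size = params["batch_size"]
    dummy = {
        "input_ids": tf.zeros([8, MAX_SEQ_LENGTH], dtype=tf.int32),
        "input_mask": tf.ones([8, MAX_SEQ_LENGTH], dtype=tf.int32),
        "segment_ids": tf.zeros([8, MAX_SEQ_LENGTH], dtype=tf.int32),
        "label_ids": tf.zeros([8], dtype=tf.int32),
    }
    dataset = tf.data.Dataset.from_tensor_slices(dummy)
    return dataset.repeat().batch(batch_size)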
print('Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
Explanation: Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
End of explanation
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
estimator.evaluate(input_fn=test_input_fn, steps=None)
Explanation: Now let's use our test data to see how well our model did:
End of explanation
def getPrediction(in_sentences):
labels = ["Negative", "Positive"]
input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = [
"That movie was absolutely awful",
"The acting was a bit lacking",
"The film was creative and surprising",
"Absolutely fantastic!"
]
predictions = getPrediction(pred_sentences)
Explanation: Now let's write code to make predictions on new sentences:
End of explanation
predictions
Explanation: Voila! We have a sentiment classifier!
End of explanation |
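The 'probabilities' returned above are log-softmax values; as a small optional post-processing step (importing numpy here for the conversion), you can exponentiate them to get plain probabilities for display:
import numpy as np
# Optional: turn the log-softmax outputs into plain probabilities for display.
for sentence, log_probs, label in predictions:
    probs = np.exp(log_probs)
    print("{:8s} p={:.3f}  {}".format(label, probs.max(), sentence))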
10,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
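For reference, completing a free-text property like this one just means replacing the TODO with a DOC.set_value call; a hypothetical, illustrative fill-in (the text is a placeholder, not a real model description):
# Hypothetical illustration of a completed STRING property -- uncomment and replace
# with the actual model overview text:
# DOC.set_value("Coupled atmosphere-ocean-sea-ice-land model configured for the "
#               "CMIP6 DECK and historical experiments.")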
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
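ENUM properties work the same way, except the value must be one of the listed valid choices (or the "Other: [Please specify]" form); a hypothetical example picking one of the choices above:
# Hypothetical illustration of a completed ENUM property (choice is a placeholder):
# DOC.set_value("OASIS3-MCT")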
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review
Before we start let's do some quick review from last time.
Let's create a new numpy array. Fill it with any values you want.
Let's say I want to calculate the average between the 1st and 2nd values, 2nd and 3rd value, and so on. How should I do it?
Creating a new numpy array
Step1: So what happens if you want to access the last value but don't know how many items are there in the array?
Step2: Alternatively, we can also write k[-1] to access the last value.
Step3: Arithmetic Mean (Average)
Let's say I want to calculate the average between the 1st and 2nd values, 2nd and 3rd value, and so on. How should I do it?
Step4: To calculate the average of the whole vector, of course, add all the numbers up and divide by the number of values.
Step5: Harmonic Mean
Let's say I want to calculate the harmonic mean between the 1st and 2nd values, 2nd and 3rd values, and so on. The formula is
Step6: Plotting sin(x) vs. x
Step7: Let's read the GASISData.csv file
Step8: Creating a Cumulative Distribution Function (CDF)
If you do some statistics work, you will know there's a thing called the Cumulative Distribution Function, which gives the chance that a random value you pick from the data set is less than or equal to the current value.
1st way of plotting the CDF for Permeability [k]
Use the ['AVPERM'] from the data set for this example.
Remove all zeros within the column since it's a fluff value.
Sort the column from smallest to largest
Create another column called Frequency [%] that divides the row number by the total size of the column. For example, the first sorted value will be 1/size, the second sorted value will be 2/size, and so on.
Plot AVGPERM vs. the Frequency.
Again, Plot AVGPERM vs. the Frequency but using log scale.
Step9: 2nd Way for plotting a Cumulative Distribution Function for Permeability
Use the ['AVPERM'] from the data set for this example.
Remove all zeros within the column since it's a fluff value.
Create a new Count column. In this column, find how many other permeability values in the column are less than the current value.
Create a new Frequency [%] column that is Count/total size of whole column.
Take Home Problem
Step10: Take Home Problem | Python Code:
# Imports (not shown in the original cell; added here so the snippets below run standalone)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

k = np.array([1,3,5,5,6,6,7,10,23123123,31232]) # create a new array
k
k[1]
k[0]
k[4]
k[10] # the array has 10 items, but valid indices run from 0 to 9, so this raises an IndexError
k[9]
Explanation: Review
Before we start let's do some quick review from last time.
Let's create a new numpy array. Fill it with any values you want.
Let's say I want to calculate the average between the 1st and 2nd values, 2nd and 3rd value, and so on. How should I do it?
Creating a new numpy array
End of explanation
k.size # the length of the array is, and always will be, 10
k[k.size] # again, index starts at 0!
k[k.size-1] # yay
Explanation: So what happens if you want to access the last value but don't know how many items are there in the array?
End of explanation
k[-1]
k[-3]
print(k[3:])
print(k[:-4])
print(k[1:])
print(k[:-1])
Explanation: Alternatively, we can also write k[-1] to access the last value.
End of explanation
print(k)
print(k[1:])
print(k[:-1])
average_k = (k[1:]+k[:-1])/2
average_k
Explanation: Arithmetic Mean (Average)
Let's say I want to calculate the average between the 1st and 2nd values, 2nd and 3rd value, and so on. How should I do it?
End of explanation
k_sum = np.sum(k) # add all numbers up
k.size
#k_sum/k.size # divide by total number of values
np.mean(k) # verify with numpy's own function
Explanation: To calculate the average of the whole vector, of course, add all the numbers up and divide by the number of values.
End of explanation
harmonic_mean_k = 2*(1/k[:-1] + 1/k[1:])**-1
harmonic_mean_k
print("ki")
print(k)
print()
print("1/ki")
print(1/k)
print()
print("Sum of all 1/ki")
print(sum(1/k))
print()
print("size/ (Sum of all 1/ki)")
k.size / np.sum(1.0/k)
Explanation: Harmonic Mean
Let's say I want to calculate the harmonic mean between the 1st and 2nd values, 2nd and 3rd values, and so on. The formula is:
$$H = 2 (\frac{1}{k_i} + \frac{1}{k_{i+1}})^{-1}$$
And for the whole vector of $n$ values it is $H = n / \sum_i \frac{1}{k_i}$, which in code is k.size / np.sum(1.0/k).
End of explanation
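As an optional cross-check (not in the original notebook, and it assumes SciPy is installed), the whole-vector harmonic mean can be compared against scipy.stats.hmean:
# optional sanity check with SciPy's harmonic mean
from scipy.stats import hmean
hmean(k)  # should agree with k.size / np.sum(1.0/k)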
xs = np.linspace(0, 2*np.pi, 100)
print(xs)
ys = np.sin(xs) # np.sin is a universal function
print(ys)
plt.plot(xs, ys);
xs = np.arange(10)
print (xs)
print (-xs)
print (xs+xs)
print (xs*xs)
print (xs**3)
print (xs < 5)
Explanation: Plotting sin(x) vs. x
End of explanation
data = pd.read_csv(r'C:\Users\jenng\Documents\texaspse-blog\media\f16-scientific-python\week2\GASISData.csv')
#Show a preview of the GASIS data
data.head()
# It's not required but let's see how large this file is
row, col = data.shape
# data.shape will result in a (19220, 186).
# We are just recasting these values to equal row and col respectively
print("Number of Rows: {}".format(row))
print("Number of Column: {}".format(col))
Explanation: Let's read the GASISData.csv file
End of explanation
# drop all 0s
#avg_permeability = avg_permeability[avg_permeability !=0] # drop all values that are 0 because it's fluff
#avg_permeability = avg_permeability.sort_values() # sort permeability from smallest to largest
#n = np.linspace(1,avg_permeability.size,avg_permeability.size) # create a new thing that goes from 1, 2, ... size of the column
#frequency = n/avg_permeability.size # calculate the
avg_k = data['AVPERM'] # assign a single dataframe to this variable avg_k. At this point, avg_k will be of type pandas.Series
avg_k = avg_k[avg_k != 0] # drop all values that are 0
cdf_df = avg_k.to_frame() # convert series to a dataframe, assign it to variable cdf_df
cdf_df = cdf_df.sort_values(by='AVPERM') # sort dataset from smallest to largest by column AVPERM
total_num_of_values = avg_k.size # find size of column, assign it to variable total_num_of_values
# create a new column called Count that goes from 1,2..total_num_of_values
cdf_df['Count'] = np.linspace(1,total_num_of_values,total_num_of_values)
# create a new column called Frequency that divides Count by total num of val
cdf_df['Relative Frequency'] = cdf_df.Count/total_num_of_values
print(cdf_df)
# plot
plt.plot(cdf_df.AVPERM, cdf_df['Relative Frequency'],label="CDF")
plt.scatter(cdf_df.AVPERM, cdf_df['Relative Frequency'],label="Data")
plt.xlabel("Average Permeability [miliDarcy]")
plt.ylabel("Relative Frequency [%]")
plt.title("Plot")
plt.legend()
plt.show()
# plot
plt.semilogx(cdf_df.AVPERM, cdf_df['Relative Frequency'],label="CDF",color='purple',marker=".")
plt.xlabel("Avg")
plt.ylabel("Frequency")
plt.title("Plot")
plt.legend()
plt.show()
Explanation: Creating a Cumulative Distribution Function (CDF)
If you do some statistics work, you will know there's a thing called the Cumulative Distribution Function, which gives the chance that a random value you pick from the data set is less than or equal to the current value.
1st way of plotting the CDF for Permeability [k]
Use the ['AVPERM'] from the data set for this example.
Remove all zeros within the column since it's a fluff value.
Sort the column from smallest to largest
Create another column called Frequency [%] that divides the row number by the total size of the column. For example, the first sorted value will be 1/size, the second sorted value will be 2/size, and so on.
Plot AVGPERM vs. the Frequency.
Again, Plot AVGPERM vs. the Frequency but using log scale.
End of explanation
avg_p = data['AVPOR'] # select the AVPOR column and assign it to avg_p. At this point, avg_p will be of type pandas.Series
avg_p = avg_p[avg_p != 0]
p_df = avg_p.to_frame()
p_df.head()
p_df.hist(column='AVPOR',bins=10)
p_df.hist(column='AVPOR',bins=100)
# let's try to see what would happen if we change the y-axis to log scale
# There's really no reason why we are doing this
fig, ax = plt.subplots()
p_df.hist(ax=ax,column='AVPOR',bins=100)
ax.set_yscale('log')
#let's try to see what would happen if we change the x-axis to log scale
fig, ax = plt.subplots()
p_df.hist(ax=ax,column='AVPOR',bins=100)
ax.set_xscale('log')
Explanation: 2nd Way for plotting a Cumulative Distribution Function for Permeability
Use the ['AVPERM'] from the data set for this example.
Remove all zeros within the column since it's a fluff value.
Create a new Count column. In this column, find how many other permeability values in the column are less than the current value.
Create a new Frequency [%] column that is Count/total size of whole column.
Take Home Problem:
Try to create the CDF for Porosity ['AVPOR'] using the first method.
Try to create the CDF for Permeability and Porosity using the second method.
Creating a Histogram
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
End of explanation
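A minimal sketch of the "2nd way" described above (not part of the original notebook; it reuses avg_k from the earlier cell, and the new column names are only illustrative):
# 2nd way: for each value, count how many values in the column are smaller, then normalise
cdf2_df = avg_k.to_frame()
cdf2_df['Count'] = [(avg_k < v).sum() for v in avg_k]   # O(n^2); avg_k.rank(method='min') - 1 is a faster equivalent
cdf2_df['Relative Frequency'] = cdf2_df['Count'] / avg_k.size
cdf2_df.head()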
p_df.plot.density()
Explanation: Take Home Problem:
Instead of using the .hist() function, try to manually re-create it on your own.
+ Define your bins (from 0-10, 10-20, etc.)
+ Count the number of values that fall within those bins
+ Plot using bar charts
Creating a Probability Density Function (PDF)
A probability density function (PDF), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value.
You will probably see this quite a bit. The plots below show the box plot and the probability density function of a normal distribution, or a Gaussian distribution.
As you can see, the big difference between the histogram we saw earlier and this plot is that the histogram is broken up by chunks, while this plot is more continuous.
The histogram is represented by bar charts while PDF is traditionally represented with a smooth line.
Oh! And the area under a PDF curve is equal to 1.
End of explanation |
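One possible sketch of the manual-histogram take-home above (not from the original notebook; the bin edges are arbitrary and it reuses avg_p from the earlier cell):
# define bins, count the values falling in each bin, then plot as bars
bin_edges = np.linspace(avg_p.min(), avg_p.max(), 11)   # 10 equal-width bins
counts = [((avg_p >= lo) & (avg_p < hi)).sum() for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]   # note: half-open bins, so the single maximum value is not counted
bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
plt.bar(bin_centers, counts, width=bin_edges[1] - bin_edges[0])
plt.xlabel("AVPOR")
plt.ylabel("Count")
plt.show()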
10,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Truck Fleet puzzle
This tutorial includes everything you need to set up decision optimization engines, build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.
Step 2
Step2: Step 3
Step3: Create CPO model
Step4: Define the decision variables
Step5: Express the business constraints
Step6: Express the objective
Step7: Solve with Decision Optimization solve service
Step8: Step 4 | Python Code:
import sys
from sys import stdout
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
Explanation: The Truck Fleet puzzle
This tutorial includes everything you need to set up decision optimization engines, build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Download the library
Step 2: Model the Data
Step 3: Set up the prescriptive model
Prepare data for modeling
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization solve service
Step 4: Investigate the solution and run an example analysis
Summary
Describe the business problem
The problem is to deliver some orders to several clients with a single truck.
Each order consists of a given quantity of a product of a certain type.
A product type is an integer in {0, 1, 2}.
Loading the truck with at least one product of a given type requires some specific installations.
The truck can be configured in order to handle one, two or three different types of product.
There are 7 different configurations for the truck, corresponding to the 7 possible combinations of product types:
configuration 0: all products are of type 0,
configuration 1: all products are of type 1,
configuration 2: all products are of type 2,
configuration 3: products are of type 0 or 1,
configuration 4: products are of type 0 or 2,
configuration 5: products are of type 1 or 2,
configuration 6: products are of type 0 or 1 or 2.
The cost for configuring the truck from a configuration A to a configuration B depends on A and B.
The configuration of the truck determines its capacity and its loading cost.
A delivery consists of loading the truck with one or several orders for the same customer.
Both the cost (for configuring and loading the truck) and the number of deliveries needed to deliver all the orders must be minimized, the cost being the most important criterion.
Please refer to documentation for appropriate setup of solving configuration.
How decision optimization can help
Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
For example:
Automate complex decisions and trade-offs to better manage limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
from docplex.cp.model import *
# List of possible truck configurations. Each tuple is (load, cost) with:
# load: max truck load for this configuration,
# cost: cost for loading the truck in this configuration
TRUCK_CONFIGURATIONS = ((11, 2), (11, 2), (11, 2), (11, 3), (10, 3), (10, 3), (10, 4))
# List of customer orders.
# Each tuple is (customer index, volume, product type)
CUSTOMER_ORDERS = ((0, 3, 1), (0, 4, 2), (0, 3, 0), (0, 2, 1), (0, 5, 1), (0, 4, 1), (0, 11, 0),
(1, 4, 0), (1, 5, 0), (1, 2, 0), (1, 4, 2), (1, 7, 2), (1, 3, 2), (1, 5, 0), (1, 2, 2),
(2, 5, 1), (2, 6, 0), (2, 11, 2), (2, 1, 0), (2, 6, 0), (2, 3, 0))
# Transition costs between configurations.
# Tuple (A, B, TCost) means that the cost of modifying the truck from configuration A to configuration B is TCost
CONFIGURATION_TRANSITION_COST = tuple_set(((0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 10), (0, 4, 10),
(0, 5, 10), (0, 6, 15), (1, 0, 0), (1, 1, 0), (1, 2, 0),
(1, 3, 10), (1, 4, 10), (1, 5, 10), (1, 6, 15), (2, 0, 0),
(2, 1, 0), (2, 2, 0), (2, 3, 10), (2, 4, 10), (2, 5, 10),
(2, 6, 15), (3, 0, 3), (3, 1, 3), (3, 2, 3), (3, 3, 0),
(3, 4, 10), (3, 5, 10), (3, 6, 15), (4, 0, 3), (4, 1, 3),
(4, 2, 3), (4, 3, 10), (4, 4, 0), (4, 5, 10), (4, 6, 15),
(5, 0, 3), (5, 1, 3), (5, 2, 3), (5, 3, 10), (5, 4, 10),
(5, 5, 0), (5, 6, 15), (6, 0, 3), (6, 1, 3), (6, 2, 3),
(6, 3, 10), (6, 4, 10), (6, 5, 10), (6, 6, 0)
))
# Compatibility between the product types and the configuration of the truck
# allowedContainerConfigs[i] = the array of all the configurations that accept products of type i
ALLOWED_CONTAINER_CONFIGS = ((0, 3, 4, 6),
(1, 3, 5, 6),
(2, 4, 5, 6))
Explanation: Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.
Step 2: Model the data
Next section defines the data of the problem.
End of explanation
nbTruckConfigs = len(TRUCK_CONFIGURATIONS)
maxTruckConfigLoad = [tc[0] for tc in TRUCK_CONFIGURATIONS]
truckCost = [tc[1] for tc in TRUCK_CONFIGURATIONS]
maxLoad = max(maxTruckConfigLoad)
nbOrders = len(CUSTOMER_ORDERS)
nbCustomers = 1 + max(co[0] for co in CUSTOMER_ORDERS)
volumes = [co[1] for co in CUSTOMER_ORDERS]
productType = [co[2] for co in CUSTOMER_ORDERS]
# Max number of truck deliveries (estimated upper bound, to be increased if no solution)
maxDeliveries = 15
Explanation: Step 3: Set up the prescriptive model
Prepare data for modeling
Next section extracts from problem data the parts that are frequently used in the modeling section.
End of explanation
mdl = CpoModel(name="trucks")
Explanation: Create CPO model
End of explanation
# Configuration of the truck for each delivery
truckConfigs = integer_var_list(maxDeliveries, 0, nbTruckConfigs - 1, "truckConfigs")
# In which delivery is an order
where = integer_var_list(nbOrders, 0, maxDeliveries - 1, "where")
# Load of a truck
load = integer_var_list(maxDeliveries, 0, maxLoad, "load")
# Number of deliveries that are required
nbDeliveries = integer_var(0, maxDeliveries)
# Identification of which customer is assigned to a delivery
customerOfDelivery = integer_var_list(maxDeliveries, 0, nbCustomers, "customerOfTruck")
# Transition cost for each delivery
transitionCost = integer_var_list(maxDeliveries - 1, 0, 1000, "transitionCost")
Explanation: Define the decision variables
End of explanation
# transitionCost[i] = transition cost between configurations i and i+1
for i in range(1, maxDeliveries):
auxVars = (truckConfigs[i - 1], truckConfigs[i], transitionCost[i - 1])
mdl.add(allowed_assignments(auxVars, CONFIGURATION_TRANSITION_COST))
# Constrain the volume of the orders in each truck
mdl.add(pack(load, where, volumes, nbDeliveries))
for i in range(0, maxDeliveries):
mdl.add(load[i] <= element(truckConfigs[i], maxTruckConfigLoad))
# Compatibility between the product type of an order and the configuration of its truck
for j in range(0, nbOrders):
configOfContainer = integer_var(ALLOWED_CONTAINER_CONFIGS[productType[j]])
mdl.add(configOfContainer == element(truckConfigs, where[j]))
# Only one customer per delivery
for j in range(0, nbOrders):
mdl.add(element(customerOfDelivery, where[j]) == CUSTOMER_ORDERS[j][0])
# Non-used deliveries are at the end
for j in range(1, maxDeliveries):
mdl.add((load[j - 1] > 0) | (load[j] == 0))
# Dominance: the non used deliveries keep the last used configuration
mdl.add(load[0] > 0)
for i in range(1, maxDeliveries):
mdl.add((load[i] > 0) | (truckConfigs[i] == truckConfigs[i - 1]))
# Dominance: regroup deliveries with same configuration
for i in range(maxDeliveries - 2, 0, -1):
ct = true()
for p in range(i + 1, maxDeliveries):
ct = (truckConfigs[p] != truckConfigs[i - 1]) & ct
mdl.add((truckConfigs[i] == truckConfigs[i - 1]) | ct)
Explanation: Express the business constraints
End of explanation
# Objective: first criterion for minimizing the cost for configuring and loading trucks
# second criterion for minimizing the number of deliveries
cost = sum(transitionCost) + sum(element(truckConfigs[i], truckCost) * (load[i] != 0) for i in range(maxDeliveries))
mdl.add(minimize_static_lex([cost, nbDeliveries]))
Explanation: Express the objective
End of explanation
# Search strategy: first assign order to truck
mdl.set_search_phases([search_phase(where)])
# Solve model
print("\nSolving model....")
msol = mdl.solve(TimeLimit=20)
Explanation: Solve with Decision Optimization solve service
End of explanation
if msol.is_solution():
print("Solution: ")
ovals = msol.get_objective_values()
print(" Configuration cost: {}, number of deliveries: {}".format(ovals[0], ovals[1]))
for i in range(maxDeliveries):
ld = msol.get_value(load[i])
if ld > 0:
stdout.write(" Delivery {:2d}: config={}".format(i,msol.get_value(truckConfigs[i])))
stdout.write(", items=")
for j in range(nbOrders):
if (msol.get_value(where[j]) == i):
stdout.write(" <{}, {}, {}>".format(j, productType[j], volumes[j]))
stdout.write('\n')
else:
stdout.write("Solve status: {}\n".format(msol.get_solve_status()))
Explanation: Step 4: Investigate the solution and then run an example analysis
End of explanation |
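A possible follow-up analysis (not part of the original notebook; it only reuses msol, load, truckConfigs and maxTruckConfigLoad defined above) showing how full each used truck is:
# hypothetical extra analysis: per-delivery truck utilisation
if msol.is_solution():
    for i in range(maxDeliveries):
        ld = msol.get_value(load[i])
        if ld > 0:
            cfg = msol.get_value(truckConfigs[i])
            cap = maxTruckConfigLoad[cfg]
            print("Delivery {:2d}: config {}, load {}/{} ({:.0f}% full)".format(i, cfg, ld, cap, 100.0 * ld / cap))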
10,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Pipelines
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step13: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step14: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step15: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step18: Define AutoML tabular regression model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML tabular regression Model resource using a Dataset resource.
Step19: Compile the pipeline
Next, compile the pipeline.
Step20: Run the pipeline
Next, run the pipeline.
Step21: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI Pipelines: AutoML tabular regression pipelines using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components to build an AutoML tabular regression workflow on Vertex AI Pipelines.
Dataset
The dataset used for this tutorial is the California Housing dataset from the 1990 Census
The dataset predicts the median house price.
Objective
In this tutorial, you create an AutoML tabular regression model using a pipeline with components from google_cloud_pipeline_components.
The steps performed include:
Create a Dataset resource.
Train an AutoML Model resource.
Create an Endpoint resource.
Deploy the Model resource to the Endpoint resource.
The components are documented here.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex AI SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade
Explanation: Install the latest GA version of google-cloud-pipeline-components library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/cal_housing".format(BUCKET_NAME)
Explanation: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines:
End of explanation
import kfp
Explanation: Additional imports.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
TRAIN_FILE_NAME = "california_housing_train.csv"
! gsutil cp gs://aju-dev-demos-codelabs/sample_data/california_housing_train.csv {PIPELINE_ROOT}/data/
gcs_csv_path = f"{PIPELINE_ROOT}/data/{TRAIN_FILE_NAME}"
@kfp.dsl.pipeline(name="automl-tab-training-v2")
def pipeline(project: str = PROJECT_ID, region: str = REGION):
from google_cloud_pipeline_components import aiplatform as gcc_aip
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
dataset_create_op = gcc_aip.TabularDatasetCreateOp(
project=project, display_name="housing", gcs_source=gcs_csv_path
)
training_op = gcc_aip.AutoMLTabularTrainingJobRunOp(
project=project,
display_name="train-automl-cal_housing",
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
column_transformations=[
{"numeric": {"column_name": "longitude"}},
{"numeric": {"column_name": "latitude"}},
{"numeric": {"column_name": "housing_median_age"}},
{"numeric": {"column_name": "total_rooms"}},
{"numeric": {"column_name": "total_bedrooms"}},
{"numeric": {"column_name": "population"}},
{"numeric": {"column_name": "households"}},
{"numeric": {"column_name": "median_income"}},
{"numeric": {"column_name": "median_house_value"}},
],
dataset=dataset_create_op.outputs["dataset"],
target_column="median_house_value",
)
endpoint_op = EndpointCreateOp(
project=project,
location=region,
display_name="train-automl-flowers",
)
ModelDeployOp(
model=training_op.outputs["model"],
endpoint=endpoint_op.outputs["endpoint"],
dedicated_resources_machine_type="n1-standard-4",
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
)
Explanation: Define AutoML tabular regression model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML tabular regression Model resource using a Dataset resource.
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular regression_pipeline.json".replace(" ", "_"),
)
Explanation: Compile the pipeline
Next, compile the pipeline.
End of explanation
DISPLAY_NAME = "cal_housing_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular regression_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm tabular_regression_pipeline.json
Explanation: Run the pipeline
Next, run the pipeline.
End of explanation
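If the pipeline should run under the service account configured earlier instead of the default compute account, the same call accepts it explicitly (optional variation, shown commented out so the pipeline is not run twice):
# job.run(service_account=SERVICE_ACCOUNT)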
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running:
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> -->
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a>
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
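If the project was created only for this tutorial, deleting the whole project is the simplest cleanup (destructive, so shown commented out on purpose):
# ! gcloud projects delete $PROJECT_ID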
10,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translate dataset
The main language of the project is English
Step1: New names are based on the "Nome do Dado" column of the table available at data/2016-08-08-datasets-format.html, not "Elemento de Dado", their current names.
Step2: When localizing categorical values, I prefer a direct translation over adaptation as much as possible. Not sure what values each attribute will contain, so I give the power of the interpretation to the people analyzing it in the future. | Python Code:
import pandas as pd
data = pd.read_csv('../data/2016-08-08-AnoAtual.csv')
data.shape
data.head()
data.iloc[0]
Explanation: Translate dataset
The main language of the project is English: it mixes well with programming languages like Python and keeps the barrier low for non-Brazilian contributors. Today, the dataset we make available by default for them is a set of XMLs from The Chamber of Deputies, in Portuguese. We need attribute names and categorical values to be translated to English.
This file is intended to serve as a base for the script to translate current and future datasets in the same format.
End of explanation
data.rename(columns={
'ideDocumento': 'document_id',
'txNomeParlamentar': 'congressperson_name',
'ideCadastro': 'congressperson_id',
'nuCarteiraParlamentar': 'congressperson_document',
'nuLegislatura': 'term',
'sgUF': 'state',
'sgPartido': 'party',
'codLegislatura': 'term_id',
'numSubCota': 'subquota_number',
'txtDescricao': 'subquota_description',
'numEspecificacaoSubCota': 'subquota_group_id',
'txtDescricaoEspecificacao': 'subquota_group_description',
'txtFornecedor': 'supplier',
'txtCNPJCPF': 'cnpj_cpf',
'txtNumero': 'document_number',
'indTipoDocumento': 'document_type',
'datEmissao': 'issue_date',
'vlrDocumento': 'document_value',
'vlrGlosa': 'remark_value',
'vlrLiquido': 'net_value',
'numMes': 'month',
'numAno': 'year',
'numParcela': 'installment',
'txtPassageiro': 'passenger',
'txtTrecho': 'leg_of_the_trip',
'numLote': 'batch_number',
'numRessarcimento': 'reimbursement_number',
'vlrRestituicao': 'reimbursement_value',
'nuDeputadoId': 'applicant_id',
}, inplace=True)
data['subquota_description'] = data['subquota_description'].astype('category')
data['subquota_description'].cat.categories
Explanation: New names are based on the "Nome do Dado" column of the table available at data/2016-08-08-datasets-format.html, not "Elemento de Dado", their current names.
End of explanation
data['subquota_description'].cat.rename_categories([
'Publication subscriptions',
'Fuels and lubricants',
'Consultancy, research and technical work',
'Publicity of parliamentary activity',
'Flight ticket issue',
'Congressperson meal',
'Lodging, except for congressperson from Distrito Federal',
'Aircraft renting or charter of aircraft',
'Watercraft renting or charter',
'Automotive vehicle renting or charter',
'Maintenance of office supporting parliamentary activity',
'Participation in course, talk or similar event',
'Flight tickets',
'Terrestrial, maritime and fluvial tickets',
'Security service provided by specialized company',
'Taxi, toll and parking',
'Postal services',
'Telecommunication',
], inplace=True)
data.head()
data.iloc[0]
Explanation: When localizing categorical values, I prefer a direct translation over adaptation as much as possible. Not sure what values each attribute will contain, so I give the power of the interpretation to the people analyzing it in the future.
End of explanation |
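A minimal sketch of the step a reusable translation script would likely add at the end (the output path below is only illustrative; the original notebook stops at inspecting the dataframe):
# hypothetical: persist the translated dataset
data.to_csv('../data/2016-08-08-AnoAtual-english.csv', index=False)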
10,568 | Given the following text description, write Python code to implement the functionality described.
Description:
Finding Quadrant of a Coordinate with respect to a Circle
Python3 program to find the quadrant of a given coordinate w.r.t. the centre of a circle ; This function returns the quadrant number ; Coincides with centre ; Outside the circle ; 1st quadrant ; 2nd quadrant ; 3rd quadrant ; 4th quadrant ; Coordinates of the centre ; Radius of the circle ; Coordinates of the given point
| Python Code:
import math
def getQuadrant(X, Y, R, PX, PY):
    # Coincides with centre
    if PX == X and PY == Y:
        return 0
    # Outside the circle
    val = math.pow(PX - X, 2) + math.pow(PY - Y, 2)
    if val > math.pow(R, 2):
        return -1
    # 1st quadrant
    if PX > X and PY >= Y:
        return 1
    # 2nd quadrant
    if PX <= X and PY > Y:
        return 2
    # 3rd quadrant
    if PX < X and PY <= Y:
        return 3
    # 4th quadrant
    if PX >= X and PY < Y:
        return 4

# Coordinates of the centre
X = 0
Y = 3
# Radius of the circle
R = 2
# Coordinates of the given point
PX = 1
PY = 4
ans = getQuadrant(X, Y, R, PX, PY)
if ans == -1:
    print("Lies Outside the circle")
elif ans == 0:
    print("Coincides with centre")
else:
    print(ans, "Quadrant")
|
10,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSCS530 Winter 2016
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2016)
Course ID
Step1: Initializing a network
In some problems, we can define our own rules to "grow" or "generate" a network. However, for many problems, we may want to re-use an existing methodology. networkx comes with a suite of methods to "sample" random graphs, such as
Step2: Visualizing a network
As humans, we are wired for visuals; for networks, visualizations can provide an important first characterization that isn't apparent from simple statistics.
In the cell below, we show how to calculate a "layout" for a network and visualize it using networkx.
For more examples, see the Drawing Graphs section of the NetworkX tutorial.
Step3: Outbreak in a network
Next, we'll simulate a very simple outbreak in a network. Our disease will start by infecting a random patient, then pass with probability 1 to any connected individuals.
To find the "connected" individuals, we'll need to learn how to find the neighbors of a node. NetworkX makes this easy.
Step4: First model step
In our simple disease outbreak model, each step of the model consists of the following
Step5: Second model step
As mentioned above, we now need to use a for loop over all individuals.
Step6: Running a few more steps
Let's wrap our model step method in yet another for loop, allowing us to simulate multiple steps of the model at once. | Python Code:
%matplotlib inline
# Imports
import networkx as nx
import numpy
import matplotlib.pyplot as plt
import pandas
import seaborn; seaborn.set()
seaborn.set_style("darkgrid")
# Import widget methods
from IPython.html.widgets import *
Explanation: CSCS530 Winter 2016
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2016)
Course ID: CMPLXSYS 530
Course Title: Computer Modeling of Complex Systems
Term: Winter 2016
Schedule: Wednesdays and Friday, 1:00-2:30PM ET
Location: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab)
Teachers: Mike Bommarito and Sarah Cherng
Basic Networks
In this notebook, we'll explore one of the most popular types of environments - a network. In the cells below, we'll be creating a small-world network and simulating a disease outbreak.
### Imports
In the import section below, you'll see an important new addition: networkx. We'll be using networkx as our preferred package for creating and managing network objects; we'll include links to more info and documentation in later cells.
End of explanation
# Create a random graph
nodes = 30
edges = 2
prob_out = 0.2
g = nx.newman_watts_strogatz_graph(nodes, edges, prob_out)
print((g.number_of_nodes(), g.number_of_edges()))
Explanation: Initializing a network
In some problems, we can define our own rules to "grow" or "generate" a network. However, for many problems, we may want to re-use an existing methodology. networkx comes with a suite of methods to "sample" random graphs, such as:
* Trees, e.g., balanced trees (networkx.balanced_tree)
* Erdos-Renyi graphs (networkx.erdos_renyi_graph)
* Watts-Strogatz small-world graphs (networkx.watts_strogatz_graph)
* Bipartite graphs
For a full list, see this page on Graph Generators in the networkx documentation.
In the sample below, we'll create a Newman-Watts-Strogatz small-world graph and print some basic info about the graph.
End of explanation
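For comparison, a quick sketch of two of the other generators listed above (not in the original notebook; the parameter values are arbitrary):
g_er = nx.erdos_renyi_graph(30, 0.1)    # Erdos-Renyi graph with edge probability 0.1
g_tree = nx.balanced_tree(r=2, h=4)     # balanced binary tree of height 4
print((g_er.number_of_nodes(), g_er.number_of_edges()))
print((g_tree.number_of_nodes(), g_tree.number_of_edges()))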
# Draw the random graph
g_layout = nx.spring_layout(g, iterations=1000)
nx.draw_networkx(g, pos=g_layout, node_color='#dddddd')
Explanation: Visualizing a network
As humans, we are wired for visuals; for networks, visualizations can provide an important first characterization that isn't apparent from simple statistics.
In the cell below, we show how to calculate a "layout" for a network and visualize it using networkx.
For more examples, see the Drawing Graphs section of the NetworkX tutorial.
End of explanation
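As a small illustration of the point about layouts (not in the original notebook), the same graph can be drawn with a different layout function:
# alternative layout for the same graph, purely illustrative
nx.draw_networkx(g, pos=nx.circular_layout(g), node_color='#dddddd')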
# Let's pick a node at random to infect
patient_zero = numpy.random.choice(g.nodes())
healthy_nodes = g.nodes()
healthy_nodes.remove(patient_zero)
patient_zero
healthy_nodes
# Now we can visualize the infected node's position
f = plt.figure()
nx.draw_networkx_nodes(g, g_layout,
nodelist=[patient_zero],
node_color='red')
nx.draw_networkx_nodes(g, g_layout,
nodelist=healthy_nodes,
node_color='#dddddd')
nx.draw_networkx_edges(g, g_layout,
width=1.0,
alpha=0.5,
edge_color='#111111')
_ = nx.draw_networkx_labels(g, g_layout,
dict(zip(g.nodes(), g.nodes())),
font_size=10)
Explanation: Outbreak in a network
Next, we'll simulate a very simple outbreak in a network. Our disease will start by infecting a random patient, then pass with probability 1 to any connected individuals.
To find the "connected" individuals, we'll need to learn how to find the neighbors of a node. NetworkX makes this easy.
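The call we will rely on is a one-liner (a sketch; node 0 is just an arbitrary node of this graph):
# All neighbors of node 0; list() makes the result print nicely
print(list(g.neighbors(0)))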
End of explanation
# Find patient zero's neighbors
neighbors = list(g.neighbors(patient_zero))  # list() so the neighbors can be displayed and reused
neighbors
# Let's infect all of his neighbors!
infected_patients = [patient_zero]
infected_patients.extend(neighbors)
infected_patients
# Remove the infected from healthy nodes
healthy_nodes = [node for node in healthy_nodes if node not in infected_patients]
# Now we can visualize the infected node's position
f = plt.figure()
nx.draw_networkx_nodes(g, g_layout,
nodelist=infected_patients,
node_color='red')
nx.draw_networkx_nodes(g, g_layout,
nodelist=healthy_nodes,
node_color='#dddddd')
nx.draw_networkx_edges(g, g_layout,
width=1.0,
alpha=0.5,
edge_color='#111111')
_ = nx.draw_networkx_labels(g, g_layout,
dict(zip(g.nodes(), g.nodes())),
font_size=10)
Explanation: First model step
In our simple disease outbreak model, each step of the model consists of the following:
for each infected individual, find their neighbors
infect all neighbors
For our first step, we start with a single infected person; in subsequent steps, we need to use a for loop to handle all currently infected persons.
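The same logic can be packaged as a helper function (a sketch of one possible refactoring; spread_once is our own name, not part of the original notebook):
def spread_once(graph, infected):
    # One model step: every neighbor of an infected node becomes infected
    newly_infected = []
    for patient in infected:
        for neighbor in graph.neighbors(patient):
            if neighbor not in infected and neighbor not in newly_infected:
                newly_infected.append(neighbor)
    return infected + newly_infected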
End of explanation
# Now let's infect the neighbors of all infected patients!
newly_infected = []
for infected_patient in infected_patients:
# Find patient zero's neighbors
neighbors = [neighbor for neighbor in g.neighbors(infected_patient) if neighbor not in infected_patients]
newly_infected.extend(neighbors)
newly_infected
# Update infected and healthy
infected_patients.extend(newly_infected)
# Remove the infected from healthy nodes
healthy_nodes = [node for node in healthy_nodes if node not in infected_patients]
# Now we can visualize the infected node's position
f = plt.figure()
nx.draw_networkx_nodes(g, g_layout,
nodelist=infected_patients,
node_color='red')
nx.draw_networkx_nodes(g, g_layout,
nodelist=healthy_nodes,
node_color='#dddddd')
nx.draw_networkx_edges(g, g_layout,
width=1.0,
alpha=0.5,
edge_color='#111111')
_ = nx.draw_networkx_labels(g, g_layout,
dict(zip(g.nodes(), g.nodes())),
font_size=10)
Explanation: Second model step
As mentioned above, we now need to use a for loop over all individuals.
End of explanation
# Now let's infect the neighbors of all infected patients!
model_steps = 2
# Iterate over steps
for i in range(model_steps):
# Iterate over infected
newly_infected = []
for infected_patient in infected_patients:
# Find patient neighbors and infect
neighbors = [neighbor for neighbor in g.neighbors(infected_patient) if neighbor not in infected_patients]
newly_infected.extend(neighbors)
# Remove the infected from healthy nodes
infected_patients.extend(newly_infected)
healthy_nodes = [node for node in healthy_nodes if node not in infected_patients]
# Now we can visualize the infected node's position
f = plt.figure()
nx.draw_networkx_nodes(g, g_layout,
nodelist=infected_patients,
node_color='red')
nx.draw_networkx_nodes(g, g_layout,
nodelist=healthy_nodes,
node_color='#dddddd')
nx.draw_networkx_edges(g, g_layout,
width=1.0,
alpha=0.5,
edge_color='#111111')
_ = nx.draw_networkx_labels(g, g_layout,
dict(zip(g.nodes(), g.nodes())),
font_size=10)
Explanation: Running a few more steps
Let's wrap our model step method in yet another for loop, allowing us to simulate multiple steps of the model at once.
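Building on the spread_once sketch suggested earlier, the whole simulation can also be expressed as a short function (again our own naming, offered only as a possible refactoring):
def run_outbreak(graph, patient_zero, steps):
    # Repeatedly apply the single-step infection rule and return the infected list
    infected = [patient_zero]
    for _ in range(steps):
        infected = spread_once(graph, infected)
    return infected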
End of explanation |
10,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Single Station Analysis - Historic
Building on the API Exploration notebook and the Filtering Observed Arrivals notebook, let's explore a different approach for analyzing the data. Note that I modified the API scraper to only retrieve the soonest time from the next subway API. This should (hopefully) help with some of the issues we were previously having. I made a new database and ran the scraper for a few hours on Sunday, polling only St. George station (station_id == 10) at a poll frequency of once every 10 seconds. I will post the data online so that others can try it out.
Created by Rami on May 6/2018
Step1: Retrieving data from the database
Let's start by getting data from our database by joining along requestid. To simplify things, for now we're only going to look at Southbound trains.
Step2: Extracting some useful information
Now we need to process the data to extract some useful information from the raw ntas_data. To do this we're going to go row by row through the table shown above to get arrival times, departure times and wait times.
arrival_times are the times at which a train arrives at St. George station
departure_times are the times at which a train leaves St. George station
all_wait_times are all the reported wait times from every API call (which in this case is every 10 seconds)
expected_wait_times are the expected wait times immediately after a train has departed the station. They represent the worst case wait times.
Step3: We can look at all the reported wait times. While this is somewhat interesting, it doesn't tell us very much
Step4: Headway analysis
By looking at the difference in consecutive arrival times at St. George, we can determine the headway (i.e., the time between trains) as they approach St. George station.
Step5: Analyzing time spent at the station
We can also look at how long trains spend at the station by looking at the difference between the departure and arrival times. St. George station is an interchange station; as such, trains tend to spend longer here than at intermediate stations.
Step6: Expected wait times
The expected wait times represent the worst-case reported wait time immediately after the previous train has left the station.
Step7: Actual wait time
It's instructive to look at the actual worst-case wait time and compare it to the expected worst-case wait time. In this case, we will consider the actual worst-case wait time to be the time between when a train departs and the next train arrives (i.e., the difference between an arrival time and the previous departure time).
Step8: Comparing actual and expected wait times
Now let's put everything together and compare the actual and expected wait times.
Step9: We can also plot all the reported wait times too!
Step10: We can also look at how long trains spend at St. George | Python Code:
import datetime
from psycopg2 import connect
import configparser
import pandas as pd
import pandas.io.sql as pandasql
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
%matplotlib qt
try:
con.close()
except:
print("No existing connection... moving on")
CONFIG = configparser.ConfigParser(interpolation=None)
CONFIG.read('../db.cfg')
dbset = CONFIG['DBSETTINGS']
con = connect(**dbset)
Explanation: Single Station Analysis - Historic
Building on the API Exploration notebook and the Filtering Observed Arrivals notebook, let's explore a different approach for analyzing the data. Note that I modified the API scraper to only retrieve the soonest time from the next subway API. This should (hopefully) help with some of the issues we were previously having. I made a new database and ran the scraper for a few hours on Sunday, polling only St. George station (station_id == 10) at a poll frequency of once every 10 seconds. I will post the data online so that others can try it out.
Created by Rami on May 6/2018
End of explanation
sql = '''SELECT requestid, stationid, lineid, create_date, request_date, station_char, subwayline, system_message_type,
timint, traindirection, trainid, train_message
FROM requests
INNER JOIN ntas_data USING (requestid)
WHERE request_date >= '2017-06-14'::DATE + interval '6 hours 5 minutes'
AND request_date < '2017-06-14'::DATE + interval '29 hours'
AND stationid = 10
AND traindirection = 'South'
ORDER BY request_date
'''
stg_south = pandasql.read_sql(sql, con)
stg_south
stg_south_resamp = stg_south[stg_south.index % 3 == 0]
stg_south_resamp
Explanation: Retrieving data from the database
Let's start by getting data from our database by joining along requestid. To simplify things, for now we're only going to look at Southbound trains.
End of explanation
arrival_times = []
departure_times = []
all_wait_times = []
all_time_stamps = []
expected_wait_times = []
prev_arrival_train_id = -1
for index, row in stg_south_resamp.iterrows():
if index == 0:
prev_departure_train_id = row['trainid']
all_wait_times.append(row['timint'])
all_time_stamps.append(row['create_date'])
if (row['trainid'] != prev_arrival_train_id):
arrival_times.append(row['create_date'])
prev_arrival_train_id = row['trainid']
#elif (row['trainid'] != prev_departure_train_id):
departure_times.append(row['create_date'])
expected_wait_times.append(row['timint'])
#prev_departure_train_id = row['trainid']
Explanation: Extracting some useful information
Now we need to process the data to extract some useful information from the raw ntas_data. To do this we're going to go row by row through the table shown above to get arrival times, departure times and wait times.
arrival_times are the times at which a train arrives at St. George station
departure_times are the times at which a train leaves St. George station
all_wait_times are all the reported wait times from every API call (which in this case is every 10 seconds)
expected_wait_times are the expected wait times immediately after a train has departed the station. They represent the worst case wait times.
End of explanation
plt.plot(all_time_stamps,all_wait_times)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.savefig('all_wait_times.png', dpi=500)
plt.show()
def timeToArrival(all_time_stamps,all_wait_times,arrival_times):
actual_wait_times = []
i = 0
k = 0
arrival_time = arrival_times[i]
for time in all_time_stamps:
if (all_wait_times[k] == 0):
actual_wait_times.append(arrival_times[0]-arrival_times[0])
k+=1
continue
while ((arrival_time - time).total_seconds() < 0):
i+=1
if (i > (len(arrival_times) -1)):
break
arrival_time = arrival_times[i]
actual_wait_times.append(arrival_time - time)
k+=1
return actual_wait_times
print(len(all_time_stamps[0:-1]))
actual_wait_times_all = timeToArrival(all_time_stamps,all_wait_times,arrival_times)
def sliding_window_filter(input_mat,window_size,overlap):
average_time = []
max_time = []
for i in range(0,len(input_mat)-window_size,overlap):
window = input_mat[i:(i+window_size)]
average_time.append(np.mean(window))
        max_time.append(np.max(window))
    return average_time, max_time
window_size = 30
overlap = 25
#average_time, max_time = sliding_window_filter(all_wait_times,window_size, overlap)
#times = all_time_stamps[0:len(all_time_stamps)-window_size:overlap]
#times = all_time_stamps[0:len(actual_wait_times_all)]
times = all_time_stamps[0:len(all_time_stamps)-window_size:overlap]
plt.plot(times, np.floor(sliding_window_filter(convert_timedelta_to_mins(actual_wait_times_all), window_size, overlap)[0]))
#average_time, max_time = sliding_window_filter(convert_timedelta_to_mins(actual_wait_times_all),window_size, overlap)
plt.plot(times, np.ceil(sliding_window_filter(all_wait_times, window_size, overlap)[0]))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
class sliding_figure:
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
def __init__(self,all_time_stamps,all_wait_times):
self.fig, self.ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)
self.t = all_time_stamps;
self.s = all_wait_times;
self.l, = plt.plot(self.t,self.s)
self.y_min = 0.0;
self.y_max = max(self.s)
plt.axis([self.t[0], self.t[100], self.y_min, self.y_max])
        self.x_dt = self.t[100] - self.t[0]  # width of the time window shown by the slider
self.axcolor = 'lightgoldenrodyellow'
        self.axpos = plt.axes([0.2, 0.1, 0.65, 0.03], facecolor=self.axcolor)
self.spos = Slider(self.axpos, 'Pos', matplotlib.dates.date2num(self.t[0]), matplotlib.dates.date2num(self.t[-1]))
#self.showPlot()
# pretty date names
plt.gcf().autofmt_xdate()
self.plt = plt
#self.showPlot()
    def update(self, val):
        pos = self.spos.val
        xmin_time = matplotlib.dates.num2date(pos)
        xmax_time = xmin_time + self.x_dt
        self.ax.axis([xmin_time, xmax_time, self.y_min, self.y_max])
        self.fig.canvas.draw_idle()
def showPlot(self):
self.spos.on_changed(self.update)
self.plt.show()
wait_times_figure = sliding_figure(all_time_stamps, all_wait_times)
wait_times_figure.showPlot()
Explanation: We can look at all the reported wait times. While this is somewhat interesting, it doesn't tell us very much
End of explanation
def time_delta(times):
delta_times = []
for n in range(0,len(times)-1):
time_diff = times[n+1] - times[n]
delta_times.append(time_diff/np.timedelta64(1, 's'))
return delta_times
delta_times = time_delta(arrival_times)
#delta_times
plt.plot(arrival_times[:-1],np.multiply(delta_times,1/60.0))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Headway (mins)')
plt.title('Headway between trains as they approach St. George')
plt.savefig('headway.png', dpi=500)
Explanation: Headway analysis
By looking at the difference in consecutive arrival times at St. George, we can determine the headway (i.e., the time between trains) as they approach St. George station.
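For reference, the same headways can be computed more compactly with pandas (a sketch that assumes the arrival_times list built above):
# Difference between consecutive arrival timestamps, converted to minutes
headways = pd.Series(arrival_times).diff().dt.total_seconds() / 60.0
headways.describe()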
End of explanation
time_at_station = np.subtract(departure_times[:],arrival_times[:])
#time_at_station
def convert_timedelta_to_mins(mat):
result = []
for element in mat:
result.append((element/np.timedelta64(1, 'm')))
return result
time_at_station_mins = convert_timedelta_to_mins(time_at_station)
plt.plot(departure_times,time_at_station_mins)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Duration of time at station (mins)')
plt.title('Duration of time that trains spend at St. George Station')
Explanation: Analyzing time spent at the station
We can also look at how long trains spend at the station by looking at the difference between the departure and arrival times. St. George station is an interchange station; as such, trains tend to spend longer here than at intermediate stations.
End of explanation
#expected_wait_times
plt.plot(arrival_times,expected_wait_times)
plt.ylabel('Expected Wait Time (mins)')
plt.xticks(fontsize=10, rotation=45)
plt.xlabel('Time')
plt.title('Worst-case expected wait times for next train at St. George')
plt.savefig('expected_wait_times.png', dpi=500)
Explanation: Expected wait times
The expected wait times represent the worst-case reported wait time immediately after the previous train has left the station.
End of explanation
actual_wait_times = np.subtract(arrival_times[1:],arrival_times[:-1])
actual_wait_times_mins = convert_timedelta_to_mins(actual_wait_times)
plt.plot(arrival_times[1:],actual_wait_times_mins,color = 'C1')
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Actual wait time (mins)')
plt.title('Worst-case actual wait times for next train at St. George')
plt.savefig('actual_wait_times.png', dpi=500)
plt.show()
print(len(actual_wait_times_mins))
window_size = 15
overlap = 14
average_time, max_time = sliding_window_filter(actual_wait_times_mins ,window_size, overlap)
print(len(average_time))
times = arrival_times[0:len(all_time_stamps)-window_size:overlap]
print(len(times))
plt.plot(times[1:],average_time)
plt.plot(times[1:],max_time)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
Explanation: Actual wait time
It's instructive to look at the actual worst-case wait time and compare it to the expected worst-case wait time. In this case, we will consider the actual worst-case wait time to be the time between when a train departs and the next train arrives (i.e., the difference between an arrival time and the previous departure time).
End of explanation
print(len(expected_wait_times))
print(len(arrival_times))
print(len(arrival_times))
print(len(actual_wait_times_mins))
type(arrival_times[1])
arrival_times_pdt = []
for item in arrival_times:
arrival_times_pdt.append(datetime.time(item.to_pydatetime().hour,item.to_pydatetime().minute))
arrival_times_pdt[2]
plt.plot(arrival_times,expected_wait_times)
plt.plot(arrival_times[1:],np.floor(actual_wait_times_mins[:]))
#plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train'],
# bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('Comparing actual and expected wait times at St. George')
lgd = plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig('actual_and_expected_wait_times.png', bbox_extra_artists=(lgd,), bbox_inches='tight', dpi=700)
window_size = 15
overlap = 12
average_time, _ = sliding_window_filter(actual_wait_times_mins, window_size, overlap)
print(len(average_time))
times = arrival_times[0:len(all_time_stamps)-window_size:overlap]
print(len(times))
plt.plot(times[1:],np.floor(average_time))
average_time, _ = sliding_window_filter(np.ceil(expected_wait_times), window_size, overlap)
plt.plot(times[1:],np.floor(average_time))
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('All reported wait times at St. George')
plt.show()
lgd = plt.legend(['Actual Wait for Next Train','Expected Wait Time for Next Train'])
Explanation: Comparing actual and expected wait times
Now let's put everything together and compare the actual and expected wait times.
End of explanation
plt.plot(departure_times,expected_wait_times)
plt.plot(departure_times[1:],actual_wait_times_mins)
plt.plot(all_time_stamps,all_wait_times)
plt.legend(['Expected Wait for Next Train','Actual Wait Time for Next Train','All Reported Wait Times'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=45)
plt.ylabel('Wait Time (mins)')
plt.title('Comparing actual and expected wait times at St. George')
Explanation: We can also plot all the reported wait times too!
End of explanation
plt.plot(all_time_stamps,all_wait_times)
plt.plot(arrival_times[:],time_at_station_mins)
plt.title('Durtion of time trains spend at St.George')
plt.xlabel('Time')
plt.xticks(fontsize=10, rotation=90)
plt.ylabel('Time (mins)')
plt.legend(['All Reported Wait Times','Time train spends at station (mins)'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: We can also look at how long trains spend at St. George
End of explanation |
10,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gauss integration of Finite Elements
Step1: Predefinition
The constitutive model tensor in Voigt notation (plane strain) is
$$C = \frac{(1 - \nu) E}{(1 - 2\nu) (1 + \nu) }
\begin{pmatrix}
1 & \frac{\nu}{1-\nu} & 0\\
\frac{\nu}{1-\nu} & 1 & 0\\
0 & 0 & \frac{1 - 2\nu}{2(1 - \nu)}
\end{pmatrix}$$
But for simplicity we are going to use
$$\hat{C} = \frac{C (1 - 2\nu) (1 + \nu)}{E} =
\begin{pmatrix}
1-\nu & \nu & 0\\
\nu & 1-\nu & 0\\
0 & 0 & \frac{1 - 2\nu}{2}
\end{pmatrix} \enspace ,$$
since we can always multiply by that factor afterwards to obtain the correct stiffness matrices.
Step2: Interpolation functions
The enumeration that we are using for the elements is shown below
Step3: What leads to the following shape functions
Step4: Thus, the interpolation matrix renders
Step5: And the mass matrix integrand is
$$M_\text{int}=H^TH$$
Step6: Derivatives interpolation matrix
Step7: Being the stiffness matrix integrand
$$K_\text{int} = B^T C B$$
Step8: Analytic integration
The mass matrix is obtained by integrating the product of the interpolator matrix with itself, i.e.
$$\begin{align}
M &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} M_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} H^T H\, dr\, ds \enspace .
\end{align}$$
Step9: The stiffness matrix is obtained by integrating the product of the interpolator-derivatives (displacement-to-strains) matrix with the constitutive tensor and itself, i.e.
$$\begin{align}
K &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} K_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} B^T C\, B\, dr\, ds \enspace .
\end{align}$$
Step10: We can automatically generate code for Fortran, C, or Octave/MATLAB, although it will be useful only for non-distorted elements.
Step11: We can check some numerical values for $E=8/3$ Pa, $\nu=1/3$ and $\rho=1$ kg/m$^3$, where we can multiply by the factor that we took away from the stiffness tensor
Step12: Gauss integration
As stated before, the analytic expressions for the mass and stiffness matrices are useful for non-distorted elements. In the general case, a mapping between distorted elements and these canonical elements is used to simplify the integration domain. When this transformation is done, the functions to be integrated are more convoluted and we should use numerical integration like Gauss-Legendre quadrature.
The Gauss-Legendre quadrature approximates the integral
Step14: And the numerical integral is computed as
Step15: Best approach
A best approach is to use Python built-in functions for computing the Gauss-Legendre nodes and weights
Step16: References
[1] http | Python Code:
from __future__ import division
from sympy.utilities.codegen import codegen
from sympy import *
from sympy import init_printing
from IPython.display import Image
init_printing()
r, s, t, x, y, z = symbols('r s t x y z')
k, m, n = symbols('k m n', integer=True)
rho, nu, E = symbols('rho, nu, E')
Explanation: Gauss integration of Finite Elements
End of explanation
C = Matrix([[1 - nu, nu, 0],
[nu, 1 - nu, 0],
[0, 0, (1 - 2*nu)/2]])
C_factor = E/(1-2*nu)/(1 + nu)
C
Explanation: Predefinition
The constitutive model tensor in Voigt notation (plane strain) is
$$C = \frac{(1 - \nu) E}{(1 - 2\nu) (1 + \nu) }
\begin{pmatrix}
1 & \frac{\nu}{1-\nu} & 0\\
\frac{\nu}{1-\nu} & 1 & 0\\
0 & 0 & \frac{1 - 2\nu}{2(1 - \nu)}
\end{pmatrix}$$
But for simplicity we are going to use
$$\hat{C} = \frac{C (1 - 2\nu) (1 + \nu)}{E} =
\begin{pmatrix}
1-\nu & \nu & 0\\
\nu & 1-\nu & 0\\
0 & 0 & \frac{1 - 2\nu}{2}
\end{pmatrix} \enspace ,$$
since we can always multiply by that factor afterwards to obtain the correct stiffness matrices.
End of explanation
Image(filename='../img_src/4node_element_enumeration.png', width=300)
Explanation: Interpolation functions
The enumeration that we are using for the elements is shown below
End of explanation
N = S(1)/4*Matrix([(1 + r)*(1 + s),
(1 - r)*(1 + s),
(1 - r)*(1 - s),
(1 + r)*(1 - s)])
N
Explanation: What leads to the following shape functions
End of explanation
H = zeros(2,8)
for i in range(4):
H[0, 2*i] = N[i]
H[1, 2*i + 1] = N[i]
H.T # Transpose of the interpolation matrix
Explanation: Thus, the interpolation matrix renders
End of explanation
M_int = H.T*H
Explanation: And the mass matrix integrand is
$$M_\text{int}=H^TH$$
End of explanation
dHdr = zeros(2,4)
for i in range(4):
dHdr[0,i] = diff(N[i],r)
dHdr[1,i] = diff(N[i],s)
jaco = eye(2) # Jacobian matrix, identity for now
dHdx = jaco*dHdr
B = zeros(3,8)
for i in range(4):
B[0, 2*i] = dHdx[0, i]
B[1, 2*i+1] = dHdx[1, i]
B[2, 2*i] = dHdx[1, i]
B[2, 2*i+1] = dHdx[0, i]
B
Explanation: Derivatives interpolation matrix
End of explanation
K_int = B.T*C*B
Explanation: Being the stiffness matrix integrand
$$K_\text{int} = B^T C B$$
End of explanation
M = zeros(8,8)
for i in range(8):
for j in range(8):
M[i,j] = rho*integrate(M_int[i,j],(r,-1,1), (s,-1,1))
M
Explanation: Analytic integration
The mass matrix is obtained by integrating the product of the interpolator matrix with itself, i.e.
$$\begin{align}
M &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} M_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} H^T H\, dr\, ds \enspace .
\end{align}$$
End of explanation
K = zeros(8,8)
for i in range(8):
for j in range(8):
K[i,j] = integrate(K_int[i,j], (r,-1,1), (s,-1,1))
K
Explanation: The stiffness matrix is obtained by integrating the product of the interpolator-derivatives (displacement-to-strains) matrix with the constitutive tensor and itself, i.e.
$$\begin{align}
K &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} K_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} B^T C\, B\, dr\, ds \enspace .
\end{align}$$
End of explanation
K_local = MatrixSymbol('K_local', 8, 8)
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "f95")
print code[0][1]
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "C")
print code[0][1]
code = codegen(("local_stiff", Eq(K_local, simplify(K))), "Octave")
print code[0][1]
Explanation: We can automatically generate code for Fortran, C, or Octave/MATLAB, although it will be useful only for non-distorted elements.
End of explanation
(C_factor*K).subs([(E, S(8)/3), (nu, S(1)/3)])
M.subs(rho, 1)
Explanation: We can check some numerical values for $E=8/3$ Pa, $\nu=1/3$ and $\rho=1$ kg/m$^3$, where we can multiply by the factor that we took away from the stiffness tensor
End of explanation
wts = [[2], [1,1], [S(5)/9, S(8)/9, S(5)/9],
[(18+sqrt(30))/36,(18+sqrt(30))/36, (18-sqrt(30))/36, (18-sqrt(30))/36]
]
pts = [[0], [-sqrt(S(1)/3), sqrt(S(1)/3)],
[-sqrt(S(3)/5), 0, sqrt(S(3)/5)],
[-sqrt(S(3)/7 - S(2)/7*sqrt(S(6)/5)), sqrt(S(3)/7 - S(2)/7*sqrt(S(6)/5)),
-sqrt(S(3)/7 + S(2)/7*sqrt(S(6)/5)), sqrt(S(3)/7 + S(2)/7*sqrt(S(6)/5))]]
Explanation: Gauss integration
As stated before, the analytic expressions for the mass and stiffness matrices are useful for non-distorted elements. In the general case, a mapping between distorted elements and these canonical elements is used to simplify the integration domain. When this transformation is done, the functions to be integrated are more convoluted and we should use numerical integration like Gauss-Legendre quadrature.
The Gauss-Legendre quadrature approximates the integral:
$$ \int_{-1}^1 f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i)$$
The nodes $x_i$ of an order $n$ quadrature rule are the roots of $P_n$
and the weights $w_i$ are given by:
$$w_i = \frac{2}{\left(1-x_i^2\right) \left(P'_n(x_i)\right)^2}$$
For the first four orders, the weights and nodes are
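As a quick sanity check (a small sketch; the cubic test function below is an arbitrary choice), an $n$-point rule integrates polynomials of degree up to $2n-1$ exactly, so the two-point rule must reproduce the exact integral of a cubic:
# Two-point Gauss rule versus the exact integral of an arbitrary cubic over [-1, 1]
f = lambda z: z**3 + z**2 + 1
approx = sum(w*f(x) for x, w in zip(pts[1], wts[1]))
exact = integrate(f(r), (r, -1, 1))
print simplify(approx - exact)  # prints 0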
End of explanation
def stiff_num(n):
Compute the stiffness matrix using Gauss quadrature
Parameters
----------
n : int
Order of the polynomial.
if n>4:
raise Exception("Number of points not valid")
K_num = zeros(8,8)
for x_i, w_i in zip(pts[n-1], wts[n-1]):
for y_j, w_j in zip(pts[n-1], wts[n-1]):
K_num = K_num + w_i*w_j*K_int.subs([(r,x_i), (s,y_j)])
return simplify(K_num)
K_num = stiff_num(3)
K_num - K
Explanation: And the numerical integral is computed as
End of explanation
from sympy.integrals.quadrature import gauss_legendre
x, w = gauss_legendre(5, 15)
print x
print w
Explanation: Best approach
A best approach is to use Python built-in functions for computing the Gauss-Legendre nodes and weights
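If the symbolic values are not needed, NumPy can produce the same nodes and weights numerically (a sketch; numpy.polynomial.legendre.leggauss is not used elsewhere in this notebook):
import numpy as np
nodes, weights = np.polynomial.legendre.leggauss(5)
print nodes
print weights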
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: References
[1] http://en.wikipedia.org/wiki/Gaussian_quadrature
End of explanation |
10,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step5: Authenticate your Google Cloud account
If you are using Vertex AI Workbench notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step6: Import libraries and define constants
Import required libraries.
Step7: Define some constants
Step8: If EXPERIMENT_NAME is not set, set a default one below
Step9: Concepts
To better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts
Step10: Split dataset for training and testing.
Step11: Normalize the features in the dataset for better model performance.
Step12: Define ML model and training function
Step13: Initialize the Vertex AI SDK for Python and create an Experiment
Initialize the client for Vertex AI and create an experiment.
Step14: Start several model training runs
Training parameters and metrics are logged for each run.
Step15: Extract parameters and metrics into a dataframe for analysis
We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
Step16: Visualizing an experiment's parameters and metrics
Step17: Visualizing experiments in Cloud Console
Run the following to get the URL of Vertex AI Experiments for your project. | Python Code:
import sys
if "google.colab" in sys.modules:
USER_FLAG = ""
else:
USER_FLAG = "--user"
! pip3 install -U tensorflow==2.8 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/ml_metadata/sdk-metric-parameter-tracking-for-locally-trained-models.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/ml_metadata/sdk-metric-parameter-tracking-for-locally-trained-models.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Vertex AI: Track parameters and metrics for locally trained models
Overview
This notebook demonstrates how to track metrics and parameters for ML training jobs and analyze this metadata using Vertex SDK for Python.
Dataset
In this notebook, we will train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the auto-mpg dataset.
Objective
In this notebook, you will learn how to use Vertex SDK for Python to:
* Track parameters and metrics for a locally trained model.
* Extract and perform analysis for all parameters and metrics within an Experiment.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Vertex AI Workbench notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Run the following commands to install the Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import matplotlib.pyplot as plt
import pandas as pd
from google.cloud import aiplatform
from tensorflow.python.keras import Sequential, layers
from tensorflow.python.keras.utils import data_utils
Explanation: Import libraries and define constants
Import required libraries.
End of explanation
EXPERIMENT_NAME = "" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Define some constants
End of explanation
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
Explanation: If EXPERIMENT_NAME is not set, set a default one below:
End of explanation
def read_data(uri):
dataset_path = data_utils.get_file("auto-mpg.data", uri)
column_names = [
"MPG",
"Cylinders",
"Displacement",
"Horsepower",
"Weight",
"Acceleration",
"Model Year",
"Origin",
]
raw_dataset = pd.read_csv(
dataset_path,
names=column_names,
na_values="?",
comment="\t",
sep=" ",
skipinitialspace=True,
)
dataset = raw_dataset.dropna()
dataset["Origin"] = dataset["Origin"].map(
lambda x: {1: "USA", 2: "Europe", 3: "Japan"}.get(x)
)
dataset = pd.get_dummies(dataset, prefix="", prefix_sep="")
return dataset
dataset = read_data(
"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
)
Explanation: Concepts
To better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts:
Experiment
Experiments describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment.
Run
A run represents a single path/avenue that you executed while performing an experiment. A run includes artifacts that you used as inputs or outputs, and parameters that you used in this execution. An Experiment can contain multiple runs.
Getting started tracking parameters and metrics
You can use the Vertex SDK for Python to track metrics and parameters for models trained locally.
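The basic pattern, which the rest of this notebook walks through in full, looks roughly like the following sketch (the run name and the logged values here are placeholders, not values used later in the notebook):
# Assumes aiplatform.init(project=..., location=..., experiment=...) has already been called
aiplatform.start_run(run="example-run")           # open a run inside the active experiment
aiplatform.log_params({"learning_rate": 0.01})    # placeholder parameter
aiplatform.log_metrics({"eval_loss": 0.25})       # placeholder metric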
In the following example, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the auto-mpg dataset.
Load and process the training dataset
Download and process the dataset.
End of explanation
def train_test_split(dataset, split_frac=0.8, random_state=0):
train_dataset = dataset.sample(frac=split_frac, random_state=random_state)
test_dataset = dataset.drop(train_dataset.index)
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")
return train_dataset, test_dataset, train_labels, test_labels
train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)
Explanation: Split dataset for training and testing.
End of explanation
def normalize_dataset(train_dataset, test_dataset):
train_stats = train_dataset.describe()
train_stats = train_stats.transpose()
def norm(x):
return (x - train_stats["mean"]) / train_stats["std"]
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
return normed_train_data, normed_test_data
normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)
Explanation: Normalize the features in the dataset for better model performance.
End of explanation
def train(
train_data,
train_labels,
num_units=64,
activation="relu",
dropout_rate=0.0,
validation_split=0.2,
epochs=1000,
):
model = Sequential(
[
layers.Dense(
num_units,
activation=activation,
input_shape=[len(train_dataset.keys())],
),
layers.Dropout(rate=dropout_rate),
layers.Dense(num_units, activation=activation),
layers.Dense(1),
]
)
model.compile(loss="mse", optimizer="adam", metrics=["mae", "mse"])
print(model.summary())
history = model.fit(
train_data, train_labels, epochs=epochs, validation_split=validation_split
)
return model, history
Explanation: Define ML model and training function
End of explanation
aiplatform.init(project=PROJECT_ID, location=REGION, experiment=EXPERIMENT_NAME)
Explanation: Initialize the Vertex AI SDK for Python and create an Experiment
Initialize the client for Vertex AI and create an experiment.
End of explanation
parameters = [
{"num_units": 16, "epochs": 3, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.2},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.2},
]
for i, params in enumerate(parameters):
aiplatform.start_run(run=f"auto-mpg-local-run-{i}")
aiplatform.log_params(params)
model, history = train(
normed_train_data,
train_labels,
num_units=params["num_units"],
activation="relu",
epochs=params["epochs"],
dropout_rate=params["dropout_rate"],
)
aiplatform.log_metrics(
{metric: values[-1] for metric, values in history.history.items()}
)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
aiplatform.log_metrics({"eval_loss": loss, "eval_mae": mae, "eval_mse": mse})
Explanation: Start several model training runs
Training parameters and metrics are logged for each run.
End of explanation
experiment_df = aiplatform.get_experiment_df()
experiment_df
Explanation: Extract parameters and metrics into a dataframe for analysis
We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
End of explanation
plt.rcParams["figure.figsize"] = [15, 5]
ax = pd.plotting.parallel_coordinates(
experiment_df.reset_index(level=0),
"run_name",
cols=[
"param.num_units",
"param.dropout_rate",
"param.epochs",
"metric.loss",
"metric.val_loss",
"metric.eval_loss",
],
color=["blue", "green", "pink", "red"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
Explanation: Visualizing an experiment's parameters and metrics
End of explanation
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
Explanation: Visualizing experiments in Cloud Console
Run the following to get the URL of Vertex AI Experiments for your project.
End of explanation |
10,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Advantages
Similar to a tf.keras.Model, an estimator is a model-level abstraction. The tf.estimator provides some capabilities currently still under development for tf.keras. These are
Step3: The input_fn is executed in a tf.Graph and can also directly return a (features_dict, labels) pair containing graph tensors, but this is error prone outside of simple cases like returning constants.
2. Define the feature columns.
Each tf.feature_column identifies a feature name, its type, and any input pre-processing.
For example, the following snippet creates three feature columns.
The first uses the age feature directly as a floating-point input.
The second uses the class feature as a categorical input.
The third uses the embark_town as a categorical input, but uses the hashing trick to avoid the need to enumerate the options, and to set the number of options.
For further information, check the feature columns tutorial.
Step4: 3. Instantiate the relevant pre-made Estimator.
For example, here's a sample instantiation of a pre-made Estimator named LinearClassifier
Step5: For more information, you can go to the linear classifier tutorial.
4. Call a training, evaluation, or inference method.
All Estimators provide train, evaluate, and predict methods.
Step6: Benefits of pre-made Estimators
Pre-made Estimators encode best practices, providing the following benefits
Step7: Create an Estimator from the compiled Keras model. The initial model state of the Keras model is preserved in the created Estimator
Step8: Treat the derived Estimator as you would with any other Estimator.
Step9: To train, call Estimator's train function
Step10: Similarly, to evaluate, call the Estimator's evaluate function
Step12: For more details, please refer to the documentation for tf.keras.estimator.model_to_estimator.
Saving object-based checkpoints with Estimator
Estimators by default save checkpoints with variable names rather than the object graph described in the Checkpoint guide. tf.train.Checkpoint will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. For forwards compatibility saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
Step13: tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir.
Step14: SavedModels from Estimators
Estimators export SavedModels through tf.Estimator.export_saved_model.
Step15: To save an Estimator you need to create a serving_input_receiver. This function builds a part of a tf.Graph that parses the raw data received by the SavedModel.
The tf.estimator.export module contains functions to help build these receivers.
The following code builds a receiver, based on the feature_columns, that accepts serialized tf.Example protocol buffers, which are often used with tf-serving.
Step16: You can also load and run that model, from python
Step17: tf.estimator.export.build_raw_serving_input_receiver_fn allows you to create input functions which take raw tensors rather than tf.train.Examples.
Using tf.distribute.Strategy with Estimator (Limited support)
tf.estimator is a distributed training TensorFlow API that originally supported the async parameter server approach. tf.estimator now supports tf.distribute.Strategy. If you're using tf.estimator, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. Check out the What's supported now section below for more details.
Using tf.distribute.Strategy with Estimator is slightly different than in the Keras case. Instead of using strategy.scope, now you pass the strategy object into the RunConfig for the Estimator.
You can refer to the distributed training guide for more information.
Here is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy
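The snippet itself is not reproduced in this excerpt; a minimal sketch of what it looks like, based on the public tf.estimator API (the 'feats' feature column name is a placeholder), is:
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
    train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
    feature_columns=[tf.feature_column.numeric_column('feats')],
    optimizer='SGD',
    config=config)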
Step18: Here, you use a premade Estimator, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras where you use the same strategy for both training and eval.
Now you can train and evaluate this Estimator with an input function | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow_datasets
import tempfile
import os
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.
This document introduces tf.estimator—a high-level TensorFlow
API. Estimators encapsulate the following actions:
Training
Evaluation
Prediction
Export for serving
TensorFlow implements several pre-made Estimators. Custom estimators are still suported, but mainly as a backwards compatibility measure. Custom estimators should not be used for new code. All Estimators—pre-made or custom ones—are classes based on the tf.estimator.Estimator class.
For a quick example, try Estimator tutorials. For an overview of the API design, check the white paper.
Setup
End of explanation
def train_input_fn():
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=32,
label_name="survived")
titanic_batches = (
titanic.cache().repeat().shuffle(500)
.prefetch(tf.data.AUTOTUNE))
return titanic_batches
Explanation: Advantages
Similar to a tf.keras.Model, an estimator is a model-level abstraction. The tf.estimator provides some capabilities currently still under development for tf.keras. These are:
Parameter server based training
Full TFX integration
Estimators Capabilities
Estimators provide the following benefits:
You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.
Estimators provide a safe distributed training loop that controls how and when to:
Load data
Handle exceptions
Create checkpoint files and recover from failures
Save summaries for TensorBoard
When writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experiments with different datasets.
Using pre-made Estimators
Pre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. tf.estimator.DNNClassifier, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.
A TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:
1. Write an input functions
For example, you might create one function to import the training set and another function to import the test set. Estimators expect their inputs to be formatted as a pair of objects:
A dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data
A Tensor containing one or more labels
The input_fn should return a tf.data.Dataset that yields pairs in that format.
For example, the following code builds a tf.data.Dataset from the Titanic dataset's train.csv file:
End of explanation
age = tf.feature_column.numeric_column('age')
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
Explanation: The input_fn is executed in a tf.Graph and can also directly return a (features_dict, labels) pair containing graph tensors, but this is error prone outside of simple cases like returning constants.
2. Define the feature columns.
Each tf.feature_column identifies a feature name, its type, and any input pre-processing.
For example, the following snippet creates three feature columns.
The first uses the age feature directly as a floating-point input.
The second uses the class feature as a categorical input.
The third uses the embark_town as a categorical input, but uses the hashing trick to avoid the need to enumerate the options, and to set the number of options.
For further information, check the feature columns tutorial.
End of explanation
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
model_dir=model_dir,
feature_columns=[embark, cls, age],
n_classes=2
)
Explanation: 3. Instantiate the relevant pre-made Estimator.
For example, here's a sample instantiation of a pre-made Estimator named LinearClassifier:
End of explanation
model = model.train(input_fn=train_input_fn, steps=100)
result = model.evaluate(train_input_fn, steps=10)
for key, value in result.items():
print(key, ":", value)
for pred in model.predict(train_input_fn):
for key, value in pred.items():
print(key, ":", value)
break
Explanation: For more information, you can go to the linear classifier tutorial.
4. Call a training, evaluation, or inference method.
All Estimators provide train, evaluate, and predict methods.
End of explanation
keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(
input_shape=(160, 160, 3), include_top=False)
keras_mobilenet_v2.trainable = False
estimator_model = tf.keras.Sequential([
keras_mobilenet_v2,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(1)
])
# Compile the model
estimator_model.compile(
optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Benefits of pre-made Estimators
Pre-made Estimators encode best practices, providing the following benefits:
Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a
cluster.
Best practices for event (summary) writing and universally useful
summaries.
If you don't use pre-made Estimators, you must implement the preceding features yourself.
Custom Estimators
The heart of every Estimator—whether pre-made or custom—is its model function, model_fn, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself.
Note: A custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies. You should plan to migrate away from tf.estimator with custom model_fn. The alternative APIs are tf.keras and tf.distribute. If you still need an Estimator for some part of your training you can use the tf.keras.estimator.model_to_estimator converter to create an Estimator from a keras.Model.
Create an Estimator from a Keras model
You can convert existing Keras models to Estimators with tf.keras.estimator.model_to_estimator. This is helpful if you want to modernize your model code, but your training pipeline still requires Estimators.
Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:
End of explanation
est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)
Explanation: Create an Estimator from the compiled Keras model. The initial model state of the Keras model is preserved in the created Estimator:
End of explanation
IMG_SIZE = 160 # All images will be resized to 160x160
def preprocess(image, label):
image = tf.cast(image, tf.float32)
image = (image/127.5) - 1
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
def train_input_fn(batch_size):
data = tfds.load('cats_vs_dogs', as_supervised=True)
train_data = data['train']
train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)
return train_data
Explanation: Treat the derived Estimator as you would with any other Estimator.
End of explanation
est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=50)
Explanation: To train, call Estimator's train function:
End of explanation
est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)
Explanation: Similarly, to evaluate, call the Estimator's evaluate function:
End of explanation
import tensorflow.compat.v1 as tf_compat
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
class Net(tf.keras.Model):
  """A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
def model_fn(features, labels, mode):
net = Net()
opt = tf.keras.optimizers.Adam(0.1)
ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),
optimizer=opt, net=net)
with tf.GradientTape() as tape:
output = net(features['x'])
loss = tf.reduce_mean(tf.abs(output - features['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
return tf.estimator.EstimatorSpec(
mode,
loss=loss,
train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),
ckpt.step.assign_add(1)),
# Tell the Estimator to save "ckpt" in an object-based format.
scaffold=tf_compat.train.Scaffold(saver=ckpt))
tf.keras.backend.clear_session()
est = tf.estimator.Estimator(model_fn, './tf_estimator_example/')
est.train(toy_dataset, steps=10)
Explanation: For more details, please refer to the documentation for tf.keras.estimator.model_to_estimator.
Saving object-based checkpoints with Estimator
Estimators by default save checkpoints with variable names rather than the object graph described in the Checkpoint guide. tf.train.Checkpoint will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. For forwards compatibility, saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.
End of explanation
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
ckpt = tf.train.Checkpoint(
step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)
ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))
ckpt.step.numpy() # From est.train(..., steps=10)
Explanation: tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir.
End of explanation
input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])
def input_fn():
return tf.data.Dataset.from_tensor_slices(
({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)
Explanation: SavedModels from Estimators
Estimators export SavedModels through tf.Estimator.export_saved_model.
End of explanation
tmpdir = tempfile.mkdtemp()
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
tf.feature_column.make_parse_example_spec([input_column]))
estimator_base_path = os.path.join(tmpdir, 'from_estimator')
estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn)
Explanation: To save an Estimator you need to create a serving_input_receiver. This function builds a part of a tf.Graph that parses the raw data received by the SavedModel.
The tf.estimator.export module contains functions to help build these receivers.
The following code builds a receiver, based on the feature_columns, that accepts serialized tf.Example protocol buffers, which are often used with tf-serving.
End of explanation
imported = tf.saved_model.load(estimator_path)
def predict(x):
example = tf.train.Example()
example.features.feature["x"].float_list.value.extend([x])
return imported.signatures["predict"](
examples=tf.constant([example.SerializeToString()]))
print(predict(1.5))
print(predict(3.5))
Explanation: You can also load and run that model, from python:
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
Explanation: tf.estimator.export.build_raw_serving_input_receiver_fn allows you to create input functions which take raw tensors rather than tf.train.Examples.
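As an illustrative sketch (not part of the original guide), a hand-written receiver for a single raw float feature named "x" could look like the following; it is equivalent in spirit to what build_raw_serving_input_receiver_fn generates:
def raw_serving_input_fn():
    # The placeholder is created when the receiver fn is called during export (graph mode).
    inputs = {"x": tf.compat.v1.placeholder(dtype=tf.float32, shape=[None], name="x")}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)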
Using tf.distribute.Strategy with Estimator (Limited support)
tf.estimator is a distributed training TensorFlow API that originally supported the async parameter server approach. tf.estimator now supports tf.distribute.Strategy. If you're using tf.estimator, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. Check out the What's supported now section below for more details.
Using tf.distribute.Strategy with Estimator is slightly different than in the Keras case. Instead of using strategy.scope, now you pass the strategy object into the RunConfig for the Estimator.
You can refer to the distributed training guide for more information.
Here is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy:
End of explanation
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
Explanation: Here, you use a premade Estimator, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras where you use the same strategy for both training and eval.
Now you can train and evaluate this Estimator with an input function:
End of explanation |
10,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BayesSearchCV
skopt
pip3 install scikit-optimize
BayesSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized by cross-validated search over parameter settings.
In contrast to GridSearchCV, not all parameter values are tried out; instead, a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n_iter.
Step2: XGBoost
Step4: LightGBM
Step5: Scoring에서 쓸 수 있는 값은 아래와 같음 | Python Code:
import pandas as pd
import numpy as np
import xgboost as xgb
import lightgbm as lgb
from skopt import BayesSearchCV
from sklearn.model_selection import StratifiedKFold, KFold
%config InlineBackend.figure_format = 'retina'
ITERATIONS = 10 # 1000
TRAINING_SIZE = 100000 # 20000000
TEST_SIZE = 25000
# Load data
X = pd.read_csv(
'./data/train_sample.csv',
nrows=TRAINING_SIZE,
parse_dates=['click_time']
)
# Split into X and y
y = X['is_attributed']
X = X.drop(['click_time','is_attributed', 'attributed_time'], axis=1)
Explanation: BayesSearchCV
skopt
pip3 install scikit-optimize
BayesSearchCV implements a "fit" and a "score" method. It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized by cross-validated search over parameter settings.
In contrast to GridSearchCV, not all parameter values are tried out; instead, a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n_iter.
End of explanation
# Classifier
bayes_cv_tuner = BayesSearchCV(
estimator = xgb.XGBClassifier(
n_jobs = 1,
objective = 'binary:logistic',
eval_metric = 'auc',
silent=1,
tree_method='approx'
),
search_spaces = {
'learning_rate': (0.01, 1.0, 'log-uniform'),
'min_child_weight': (0, 10),
'max_depth': (0, 50),
'max_delta_step': (0, 20),
'subsample': (0.01, 1.0, 'uniform'),
'colsample_bytree': (0.01, 1.0, 'uniform'),
'colsample_bylevel': (0.01, 1.0, 'uniform'),
'reg_lambda': (1e-9, 1000, 'log-uniform'),
'reg_alpha': (1e-9, 1.0, 'log-uniform'),
'gamma': (1e-9, 0.5, 'log-uniform'),
'min_child_weight': (0, 5),
'n_estimators': (50, 100),
'scale_pos_weight': (1e-6, 500, 'log-uniform')
},
scoring = 'roc_auc',
cv = StratifiedKFold(
n_splits=3,
shuffle=True,
random_state=42
),
n_jobs = 3,
n_iter = ITERATIONS,
verbose = 0,
refit = True,
random_state = 42
)
def status_print(optim_result):
    """Status callback during Bayesian hyperparameter search."""
# Get all the models tested so far in DataFrame format
all_models = pd.DataFrame(bayes_cv_tuner.cv_results_)
# Get current parameters and the best parameters
best_params = pd.Series(bayes_cv_tuner.best_params_)
print('Model #{}\nBest ROC-AUC: {}\nBest params: {}\n'.format(
len(all_models),
np.round(bayes_cv_tuner.best_score_, 4),
bayes_cv_tuner.best_params_
))
# Save all model results
clf_name = bayes_cv_tuner.estimator.__class__.__name__
all_models.to_csv(clf_name+"_cv_results.csv")
xgb_result = bayes_cv_tuner.fit(X.values, y.values, callback=status_print)
xgb_result.best_score_
xgb_result.best_params_
xgb_result.best_estimator_
new_model = xgb_result.best_estimator_
xgb.plot_importance(new_model);
xgb_result.cv_results_
Explanation: XGBoost
End of explanation
bayes_cv_tuner = BayesSearchCV(
estimator = lgb.LGBMRegressor(
objective='binary',
metric='auc',
n_jobs=1,
verbose=0
),
search_spaces = {
'learning_rate': (0.01, 1.0, 'log-uniform'),
'num_leaves': (1, 100),
'max_depth': (0, 50),
'min_child_samples': (0, 50),
'max_bin': (100, 1000),
'subsample': (0.01, 1.0, 'uniform'),
'subsample_freq': (0, 10),
'colsample_bytree': (0.01, 1.0, 'uniform'),
'min_child_weight': (0, 10),
'subsample_for_bin': (100000, 500000),
'reg_lambda': (1e-9, 1000, 'log-uniform'),
'reg_alpha': (1e-9, 1.0, 'log-uniform'),
'scale_pos_weight': (1e-6, 500, 'log-uniform'),
'n_estimators': (50, 100),
},
scoring = 'roc_auc',
cv = StratifiedKFold(
n_splits=3,
shuffle=True,
random_state=42
),
n_jobs = 3,
n_iter = ITERATIONS,
verbose = 0,
refit = True,
random_state = 42
)
# Fit the model
lgbm_result = bayes_cv_tuner.fit(X.values, y.values, callback=status_print)
lgbm_result.best_params_
lgbm_result.estimator
bayes_cv_tuner = BayesSearchCV(
estimator = lgb.LGBMRegressor(objective='regression', boosting_type='gbdt', subsample=0.6143), #colsample_bytree=0.6453, subsample=0.6143
search_spaces = {
'learning_rate': (0.01, 1.0, 'log-uniform'),
'num_leaves': (10, 100),
'max_depth': (0, 50),
'min_child_samples': (0, 50),
'max_bin': (100, 1000),
'subsample_freq': (0, 10),
'min_child_weight': (0, 10),
'reg_lambda': (1e-9, 1000, 'log-uniform'),
'reg_alpha': (1e-9, 1.0, 'log-uniform'),
'scale_pos_weight': (1e-6, 500, 'log-uniform'),
'n_estimators': (50, 150),
},
scoring = 'neg_mean_squared_error', #neg_mean_squared_log_error
cv = KFold(
n_splits=5,
shuffle=True,
random_state=42
),
n_jobs = 1,
n_iter = 100,
verbose = 0,
refit = True,
random_state = 42
)
def status_print(optim_result):
    """Status callback during Bayesian hyperparameter search."""
# Get all the models tested so far in DataFrame format
all_models = pd.DataFrame(bayes_cv_tuner.cv_results_)
# Get current parameters and the best parameters
best_params = pd.Series(bayes_cv_tuner.best_params_)
print('Model #{}\nBest MSE: {}\nBest params: {}\n'.format(
len(all_models),
np.round(bayes_cv_tuner.best_score_, 4),
bayes_cv_tuner.best_params_
))
# Save all model results
clf_name = bayes_cv_tuner.estimator.__class__.__name__
all_models.to_csv(clf_name+"_cv_results.csv")
# Fit the model
result = bayes_cv_tuner.fit(X.values, y.values, callback=status_print)
Explanation: LightGBM
End of explanation
import sklearn
keys = sklearn.metrics.SCORERS.keys()
for key in keys:
print(key)
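# Illustrative addition (not in the original notebook): besides the predefined names above,
# a custom metric can be wrapped with make_scorer and passed to BayesSearchCV via `scoring`.
from sklearn.metrics import make_scorer, mean_squared_error
rmse_scorer = make_scorer(
    lambda y_true, y_pred: np.sqrt(mean_squared_error(y_true, y_pred)),
    greater_is_better=False)
# e.g. BayesSearchCV(..., scoring=rmse_scorer)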
Explanation: The values that can be used for scoring are listed below
End of explanation |
10,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NNabla by Examples
This tutorial demonstrates how you can write a script to train a neural network by using a simple hand digits classification task.
Note
Step1: The tiny_digits module is located under this folder. It provides some utilities for loading a handwritten-digit classification dataset (MNIST) available in scikit-learn.
Logistic Regression
We will first start by defining a computation graph for logistic regression. (For details on logistic regression, see Appendix A.)
The training will be done by gradient descent, where gradients are calculated using the error backpropagation algorithm (backprop).
Preparing a Toy Dataset
This section just prepares a dataset to be used for demonstration of NNabla usage.
Step2: The next block creates a dataset loader which is a generator providing images and labels as minibatches. Note that this dataset is just an example purpose and not a part of NNabla.
Step3: A minibatch is as follows. img and label are in numpy.ndarray.
Step4: Preparing the Computation Graph
NNabla provides two different ways for backprop-based gradient descent optimization. One is with a static graph, and another is with a dynamic graph. We are going to show a static version first.
Step5: This code block shows one of the most important features in graph building in NNabla, the parameter scope. The first line defines an input variable x. The second line creates a parameter scope. The third line then applies PF.affine - an affine transform - to x, and creates a variable y holding that result. Here, the PF (parametric_function) module provides functions that contain learnable parameters, such as affine transforms (which contains weights), convolution (which contains kernels) and batch normalization (which contains transformation factors and coefficients). We will call these functions as parametric functions. The parameters are created and initialized randomly at function call, and registered by a name "affine1" using parameter_scope context.
Step6: The remaining lines shown above define a target variable and attach functions for loss at the end of the graph. Note that the static graph build doesn't execute any computation, but the shapes of output variables are inferred. Therefore, we can inspect the shapes of each variable at this time
Step7: Executing a static graph
You can execute the computation of the graph by calling the forward() method in a sink variable. Inputs can be set via .d accessor. It will borrow CPU array references as numpy.ndarray.
Step8: The output doesn't make sense since the network is just randomly initialized.
Backward propagation through the graph
The parameters registered by parameter_scope management function can be queried by get_parameters() as a dict format.
Step9: Before executing backpropagation, we should initialize gradient buffers of all parameter to zeros.
Step10: Then, you can execute backprop by calling backward() method at the sink variable.
Step11: Gradient is stored in grad field of Variable. .g accessor can be used to access grad data in numpy.ndarray format.
Optimizing parameters (=Training)
To optimize parameters, we provide solver module (aliased as S here). The solver module contains a bunch of optimizer implementations such as SGD, SGD with momentum, Adam etc. The below block creates SGD solver and sets parameters of logistic regression to it.
Step12: In the next block, we demonstrate a single step of optimization loop. solver.zero_grad() line does equivalent to calling .grad.zero() for all parameters as we shown above. After backward computation, we apply weight decay, then applying gradient descent implemented in Sgd solver class as follows
$$
\theta \leftarrow \theta - \eta \nabla_{\theta} L(\theta, X_{\mathrm{minibatch}})
$$
where $\eta$ denotes learning rate.
Step13: Next block iterates optimization steps, and shows the loss decreases.
Step14: Show prediction
The following code displays training results.
Step15: Dynamic graph construction support
This is another way of running computation graph in NNabla. This example doesn't show how useful dynamic graph is, but shows a bit of flavor.
The next block just define computation graph building as functions for later use.
Step16: To run a computation graph dynamically during creation, you use nnabla.auto_forward() context as you see in the below block. By this, computation is fired immediately at functions are called. (You can also use nnabla.set_auto_forward(auto) to set the auto-forward state globally.)
Step17: Backward computation can be done on a dynamically constructed graph.
Step18: Multi-Layer Perceptron (MLP)
In this section, you see an example of MLP graph building and training.
Before starting, we clear all parameters registered in the logistic regression example.
Step19: Here is the function that builds a MLP with an arbitrary depth and width for 10 class classification.
Step20: Convolutional Neural Network with CUDA acceleration
Here we demonstrate a CNN with CUDA GPU acceleration.
Step21: To enable CUDA extension in NNabla, you have to install nnabla-ext-cuda package first. See the install guide.
After installing the CUDA extension, you can easily switch to run on CUDA by specifying a context before building a graph. We strongly recommend using a CUDNN context, which is fast. Although the context class can be instantiated by nn.Context(), specifying a context descriptor might be a bit complicated for users. Therefore, we recommend creating a context by using the helper function get_extension_context() found in the nnabla.ext_utils module. NNabla officially supports cpu and cudnn as a context specifier passed to the first argument (extension name). NOTE
Step22: nn.save_parameters writes parameters registered in parameter_scope system in HDF5 format. We use it a later example.
Step23: Recurrent Neural Network (Elman RNN)
This is an example of recurrent neural network training.
Step24: It is not meaningful, but just a demonstration purpose. We split an image into 2 by 2 grids, and feed them sequentially into RNN.
Step25: Siamese Network
This example show how to embed an image in a categorical dataset into 2D space using deep learning. This also demonstrates how to reuse a pretrained network.
First, we load parameters learned in the CNN example.
Step26: We define embedding function. Note that the network structure and parameter hierarchy is identical to the previous CNN example. That enables you to reuse the saved parameters and finetune from it.
Step27: We build two stream CNNs and compare them with the contrastive loss function defined above. Note that both CNNs have the same parameter hierarchy, which means both parameters are shared.
Step28: We visualize embedded training images as following. You see the images from the same class embedded near each other. | Python Code:
!pip install nnabla-ext-cuda100
!git clone https://github.com/sony/nnabla.git
%cd nnabla/tutorial
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
from nnabla.monitor import tile_images
import numpy as np
import matplotlib.pyplot as plt
import tiny_digits
%matplotlib inline
np.random.seed(0)
imshow_opt = dict(cmap='gray', interpolation='nearest')
Explanation: NNabla by Examples
This tutorial demonstrates how you can write a script to train a neural network by using a simple hand digits classification task.
Note: This tutorial notebook requires scikit-learn and matplotlib installed in your Python environment.
First let us prepare some dependencies.
End of explanation
digits = tiny_digits.load_digits(n_class=10)
tiny_digits.plot_stats(digits)
Explanation: The tiny_digits module is located under this folder. It provides some utilities for loading a handwritten-digit classification dataset (MNIST) available in scikit-learn.
Logistic Regression
We will first start by defining a computation graph for logistic regression. (For details on logistic regression, see Appendix A.)
The training will be done by gradient descent, where gradients are calculated using the error backpropagation algorithm (backprop).
Preparing a Toy Dataset
This section just prepares a dataset to be used for demonstration of NNabla usage.
End of explanation
data = tiny_digits.data_iterator_tiny_digits(digits, batch_size=64, shuffle=True)
Explanation: The next block creates a dataset loader, which is a generator providing images and labels as minibatches. Note that this dataset is just for example purposes and is not a part of NNabla.
End of explanation
img, label = data.next()
plt.imshow(tile_images(img), **imshow_opt)
print("labels:", label.reshape(8, 8))
print("Label shape:", label.shape)
Explanation: A minibatch is as follows. img and label are in numpy.ndarray.
End of explanation
# Forward pass
x = nn.Variable(img.shape) # Define an image variable
with nn.parameter_scope("affine1"):
y = PF.affine(x, 10) # Output is 10 class
Explanation: Preparing the Computation Graph
NNabla provides two different ways for backprop-based gradient descent optimization. One is with a static graph, and another is with a dynamic graph. We are going to show a static version first.
End of explanation
# Building a loss graph
t = nn.Variable(label.shape) # Define an target variable
loss = F.mean(F.softmax_cross_entropy(y, t)) # Softmax Xentropy fits multi-class classification problems
Explanation: This code block shows one of the most important features in graph building in NNabla, the parameter scope. The first line defines an input variable x. The second line creates a parameter scope. The third line then applies PF.affine - an affine transform - to x, and creates a variable y holding that result. Here, the PF (parametric_function) module provides functions that contain learnable parameters, such as affine transforms (which contains weights), convolution (which contains kernels) and batch normalization (which contains transformation factors and coefficients). We will call these functions as parametric functions. The parameters are created and initialized randomly at function call, and registered by a name "affine1" using parameter_scope context.
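As a side note (an illustrative sketch, not from the original tutorial), the same parameter registration can be written without the explicit with-block by passing a name to the parametric function:
y_alt = PF.affine(x, 10, name='affine1')  # reuses the parameters registered under 'affine1'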
End of explanation
print("Printing shapes of variables")
print(x.shape)
print(y.shape)
print(t.shape)
print(loss.shape) # empty tuple means scalar
Explanation: The remaining lines shown above define a target variable and attach functions for loss at the end of the graph. Note that the static graph build doesn't execute any computation, but the shapes of output variables are inferred. Therefore, we can inspect the shapes of each variable at this time:
End of explanation
# Set data
x.d = img
t.d = label
# Execute a forward pass
loss.forward()
# Showing results
print("Prediction score of 0-th image:", y.d[0])
print("Loss:", loss.d)
Explanation: Executing a static graph
You can execute the computation of the graph by calling the forward() method in a sink variable. Inputs can be set via .d accessor. It will borrow CPU array references as numpy.ndarray.
End of explanation
print(nn.get_parameters())
Explanation: The output doesn't make sense since the network is just randomly initialized.
Backward propagation through the graph
The parameters registered by the parameter_scope management function can be queried with get_parameters() as a dict.
End of explanation
for param in nn.get_parameters().values():
param.grad.zero()
Explanation: Before executing backpropagation, we should initialize the gradient buffers of all parameters to zero.
End of explanation
# Compute backward
loss.backward()
# Showing gradients.
for name, param in nn.get_parameters().items():
print(name, param.shape, param.g.flat[:20]) # Showing first 20.
Explanation: Then, you can execute backprop by calling backward() method at the sink variable.
End of explanation
# Create a solver (gradient-based optimizer)
learning_rate = 1e-3
solver = S.Sgd(learning_rate)
solver.set_parameters(nn.get_parameters()) # Set parameter variables to be updated.
Explanation: Gradients are stored in the grad field of Variable. The .g accessor can be used to access the grad data in numpy.ndarray format.
Optimizing parameters (=Training)
To optimize parameters, we provide the solver module (aliased as S here). The solver module contains a number of optimizer implementations such as SGD, SGD with momentum, Adam, etc. The block below creates an SGD solver and sets the parameters of the logistic regression to it.
End of explanation
# One step of training
x.d, t.d = data.next()
loss.forward()
solver.zero_grad() # Initialize gradients of all parameters to zero.
loss.backward()
solver.weight_decay(1e-5) # Applying weight decay as an regularization
solver.update()
print(loss.d)
Explanation: In the next block, we demonstrate a single step of the optimization loop. The solver.zero_grad() line is equivalent to calling .grad.zero() for all parameters, as shown above. After the backward computation, we apply weight decay, and then apply the gradient descent implemented in the Sgd solver class as follows
$$
\theta \leftarrow \theta - \eta \nabla_{\theta} L(\theta, X_{\mathrm{minibatch}})
$$
where $\eta$ denotes learning rate.
End of explanation
for i in range(1000):
x.d, t.d = data.next()
loss.forward()
solver.zero_grad() # Initialize gradients of all parameters to zero.
loss.backward()
solver.weight_decay(1e-5) # Applying weight decay as an regularization
solver.update()
if i % 100 == 0: # Print for each 10 iterations
print(i, loss.d)
Explanation: The next block iterates the optimization steps and shows that the loss decreases.
End of explanation
x.d, t.d = data.next() # Here we predict images from training set although it's useless.
y.forward() # You can execute a sub graph.
plt.imshow(tile_images(x.d), **imshow_opt)
print("prediction:")
print(y.d.argmax(axis=1).reshape(8, 8)) # Taking a class index based on prediction score.
Explanation: Show prediction
The following code displays training results.
End of explanation
def logreg_forward(x):
with nn.parameter_scope("affine1"):
y = PF.affine(x, 10)
return y
def logreg_loss(y, t):
loss = F.mean(F.softmax_cross_entropy(y, t)) # Softmax Xentropy fits multi-class classification problems
return loss
Explanation: Dynamic graph construction support
This is another way of running a computation graph in NNabla. This example doesn't show how useful the dynamic graph is, but gives a bit of its flavor.
The next block just defines the computation-graph building as functions for later use.
End of explanation
x = nn.Variable(img.shape)
t = nn.Variable(label.shape)
x.d, t.d = data.next()
with nn.auto_forward(): # Graph are executed
y = logreg_forward(x)
loss = logreg_loss(y, t)
print("Loss:", loss.d)
plt.imshow(tile_images(x.d), **imshow_opt)
print("prediction:")
print(y.d.argmax(axis=1).reshape(8, 8))
Explanation: To run a computation graph dynamically during creation, you use the nnabla.auto_forward() context as you see in the block below. With this, computation is fired immediately as functions are called. (You can also use nnabla.set_auto_forward(auto) to set the auto-forward state globally.)
End of explanation
solver.zero_grad()
loss.backward()
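# Illustrative extension (not in the original tutorial): the same solver update used in the
# static-graph example also applies to the dynamically constructed graph.
solver.weight_decay(1e-5)
solver.update()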
Explanation: Backward computation can be done on a dynamically constructed graph.
End of explanation
nn.clear_parameters() # Clear all parameters
Explanation: Multi-Layer Perceptron (MLP)
In this section, you see an example of MLP graph building and training.
Before starting, we clear all parameters registered in the logistic regression example.
End of explanation
def mlp(x, hidden=[16, 32, 16]):
hs = []
with nn.parameter_scope("mlp"): # Parameter scope can be nested
h = x
for hid, hsize in enumerate(hidden):
with nn.parameter_scope("affine{}".format(hid + 1)):
h = F.tanh(PF.affine(h, hsize))
hs.append(h)
with nn.parameter_scope("classifier"):
y = PF.affine(h, 10)
return y, hs
# Construct a MLP graph
y, hs = mlp(x)
print("Printing shapes")
print("x:", x.shape)
for i, h in enumerate(hs):
print("h{}:".format(i + 1), h.shape)
print("y:", y.shape)
# Training
loss = logreg_loss(y, t) # Reuse logreg loss function.
# Copied from the above logreg example.
def training(steps, learning_rate):
solver = S.Sgd(learning_rate)
solver.set_parameters(nn.get_parameters()) # Set parameter variables to be updated.
for i in range(steps):
x.d, t.d = data.next()
loss.forward()
solver.zero_grad() # Initialize gradients of all parameters to zero.
loss.backward()
solver.weight_decay(1e-5) # Applying weight decay as an regularization
solver.update()
if i % 100 == 0: # Print for each 10 iterations
print(i, loss.d)
# Training
training(1000, 1e-2)
# Showing responses for each layer
num_plot = len(hs) + 2
gid = 1
def scale01(h):
return (h - h.min()) / (h.max() - h.min())
def imshow(img, title):
global gid
plt.subplot(num_plot, 1, gid)
gid += 1
plt.title(title)
plt.imshow(img, **imshow_opt)
plt.axis('off')
plt.figure(figsize=(2, 5))
imshow(x.d[0, 0], 'x')
for hid, h in enumerate(hs):
imshow(scale01(h.d[0]).reshape(-1, 8), 'h{}'.format(hid + 1))
imshow(scale01(y.d[0]).reshape(2, 5), 'y')
Explanation: Here is the function that builds an MLP with an arbitrary depth and width for 10-class classification.
End of explanation
nn.clear_parameters()
def cnn(x):
with nn.parameter_scope("cnn"): # Parameter scope can be nested
with nn.parameter_scope("conv1"):
c1 = F.tanh(PF.batch_normalization(
PF.convolution(x, 4, (3, 3), pad=(1, 1), stride=(2, 2))))
with nn.parameter_scope("conv2"):
c2 = F.tanh(PF.batch_normalization(
PF.convolution(c1, 8, (3, 3), pad=(1, 1))))
c2 = F.average_pooling(c2, (2, 2))
with nn.parameter_scope("fc3"):
fc3 = F.tanh(PF.affine(c2, 32))
with nn.parameter_scope("classifier"):
y = PF.affine(fc3, 10)
return y, [c1, c2, fc3]
Explanation: Convolutional Neural Network with CUDA acceleration
Here we demonstrate a CNN with CUDA GPU acceleration.
End of explanation
# Run on CUDA
from nnabla.ext_utils import get_extension_context
cuda_device_id = 0
ctx = get_extension_context('cudnn', device_id=cuda_device_id)
print("Context:", ctx)
nn.set_default_context(ctx) # Set CUDA as a default context.
y, hs = cnn(x)
loss = logreg_loss(y, t)
training(1000, 1e-1)
# Showing responses for each layer
num_plot = len(hs) + 2
gid = 1
plt.figure(figsize=(2, 8))
imshow(x.d[0, 0], 'x')
imshow(tile_images(hs[0].d[0][:, None]), 'conv1')
imshow(tile_images(hs[1].d[0][:, None]), 'conv2')
imshow(hs[2].d[0].reshape(-1, 8), 'fc3')
imshow(scale01(y.d[0]).reshape(2, 5), 'y')
Explanation: To enable CUDA extension in NNabla, you have to install nnabla-ext-cuda package first. See the install guide.
After installing the CUDA extension, you can easily switch to run on CUDA by specifying a context before building a graph. We strongly recommend using a CUDNN context, which is fast. Although the context class can be instantiated by nn.Context(), specifying a context descriptor might be a bit complicated for users. Therefore, we recommend creating a context by using the helper function get_extension_context() found in the nnabla.ext_utils module. NNabla officially supports cpu and cudnn as a context specifier passed to the first argument (extension name). NOTE: By setting the cudnn context as a global default context, Functions and Solvers created afterwards are instantiated in CUDNN (preferred) mode. You can also specify a context using with nn.context_scope(). See the API reference for details.
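As a small illustration of the nn.context_scope API mentioned above (a sketch, not part of the original tutorial), a sub-graph can be pinned to the CPU even when CUDNN is the default context:
cpu_ctx = get_extension_context('cpu')
with nn.context_scope(cpu_ctx):
    y_cpu = PF.affine(x, 10, name='affine_cpu')  # this layer's computation runs on the CPU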
End of explanation
path_cnn_params = "tmp.params.cnn.h5"
nn.save_parameters(path_cnn_params)
Explanation: nn.save_parameters writes parameters registered in parameter_scope system in HDF5 format. We use it a later example.
End of explanation
nn.clear_parameters()
def rnn(xs, h0, hidden=32):
hs = []
with nn.parameter_scope("rnn"):
h = h0
# Time step loop
for x in xs:
# Note: Parameter scopes are reused over time
# which means parameters are shared over time.
with nn.parameter_scope("x2h"):
x2h = PF.affine(x, hidden, with_bias=False)
with nn.parameter_scope("h2h"):
h2h = PF.affine(h, hidden)
h = F.tanh(x2h + h2h)
hs.append(h)
with nn.parameter_scope("classifier"):
y = PF.affine(h, 10)
return y, hs
Explanation: Recurrent Neural Network (Elman RNN)
This is an example of recurrent neural network training.
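The recurrence implemented in the rnn function above is the standard Elman update
$$
h_t = \tanh\!\left(W_x x_t + W_h h_{t-1} + b\right),
$$
with the same weights shared across all time steps and a final affine classifier applied to the last hidden state.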
End of explanation
def split_grid4(x):
x0 = x[..., :4, :4]
x1 = x[..., :4, 4:]
x2 = x[..., 4:, :4]
x3 = x[..., 4:, 4:]
return x0, x1, x2, x3
hidden = 32
seq_img = split_grid4(img)
seq_x = [nn.Variable(subimg.shape) for subimg in seq_img]
h0 = nn.Variable((img.shape[0], hidden)) # Initial hidden state.
y, hs = rnn(seq_x, h0, hidden)
loss = logreg_loss(y, t)
# Copied from the above logreg example.
def training_rnn(steps, learning_rate):
solver = S.Sgd(learning_rate)
solver.set_parameters(nn.get_parameters()) # Set parameter variables to be updated.
for i in range(steps):
minibatch = data.next()
img, t.d = minibatch
seq_img = split_grid4(img)
h0.d = 0 # Initialize as 0
for x, subimg in zip(seq_x, seq_img):
x.d = subimg
loss.forward()
solver.zero_grad() # Initialize gradients of all parameters to zero.
loss.backward()
solver.weight_decay(1e-5) # Applying weight decay as an regularization
solver.update()
if i % 100 == 0: # Print for each 10 iterations
print(i, loss.d)
training_rnn(1000, 1e-1)
# Showing responses for each layer
num_plot = len(hs) + 2
gid = 1
plt.figure(figsize=(2, 8))
imshow(x.d[0, 0], 'x')
for hid, h in enumerate(hs):
imshow(scale01(h.d[0]).reshape(-1, 8), 'h{}'.format(hid + 1))
imshow(scale01(y.d[0]).reshape(2, 5), 'y')
Explanation: It is not meaningful, but just for demonstration purposes. We split each image into a 2-by-2 grid, and feed the pieces sequentially into the RNN.
End of explanation
nn.clear_parameters()
# Loading CNN pretrained parameters.
_ = nn.load_parameters(path_cnn_params)
Explanation: Siamese Network
This example shows how to embed images from a categorical dataset into a 2D space using deep learning. It also demonstrates how to reuse a pretrained network.
First, we load parameters learned in the CNN example.
End of explanation
def cnn_embed(x, test=False):
# Note: Identical configuration with the CNN example above.
# Parameters pretrained in the above CNN example are used.
with nn.parameter_scope("cnn"):
with nn.parameter_scope("conv1"):
c1 = F.tanh(PF.batch_normalization(PF.convolution(x, 4, (3, 3), pad=(1, 1), stride=(2, 2)), batch_stat=not test))
with nn.parameter_scope("conv2"):
c2 = F.tanh(PF.batch_normalization(PF.convolution(c1, 8, (3, 3), pad=(1, 1)), batch_stat=not test))
c2 = F.average_pooling(c2, (2, 2))
with nn.parameter_scope("fc3"):
fc3 = PF.affine(c2, 32)
# Additional affine for map into 2D.
with nn.parameter_scope("embed2d"):
embed = PF.affine(c2, 2)
return embed, [c1, c2, fc3]
def siamese_loss(e0, e1, t, margin=1.0, eps=1e-4):
dist = F.sum(F.squared_error(e0, e1), axis=1) # Squared distance
# Contrastive loss
sim_cost = t * dist
dissim_cost = (1 - t) * \
(F.maximum_scalar(margin - (dist + eps) ** (0.5), 0) ** 2)
return F.mean(sim_cost + dissim_cost)
Explanation: We define the embedding function. Note that the network structure and parameter hierarchy are identical to the previous CNN example. That enables you to reuse the saved parameters and finetune from them.
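For reference, the contrastive loss implemented by siamese_loss above can be written as
$$
L = \frac{1}{N}\sum_{i}\left[\, t_i\, d_i + (1 - t_i)\,\max\!\left(m - \sqrt{d_i + \epsilon},\, 0\right)^{2} \right],
$$
where $d_i$ is the squared distance between the two embeddings of pair $i$, $t_i$ is 1 for a same-class pair and 0 otherwise, $m$ is the margin, and $\epsilon$ is a small constant for numerical stability.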
End of explanation
x0 = nn.Variable(img.shape)
x1 = nn.Variable(img.shape)
t = nn.Variable((img.shape[0],)) # Same class or not
e0, hs0 = cnn_embed(x0)
e1, hs1 = cnn_embed(x1) # NOTE: parameters are shared
loss = siamese_loss(e0, e1, t)
def training_siamese(steps):
for i in range(steps):
minibatchs = []
for _ in range(2):
minibatch = data.next()
minibatchs.append((minibatch[0].copy(), minibatch[1].copy()))
x0.d, label0 = minibatchs[0]
x1.d, label1 = minibatchs[1]
t.d = (label0 == label1).astype(np.int).flat
loss.forward()
solver.zero_grad() # Initialize gradients of all parameters to zero.
loss.backward()
solver.weight_decay(1e-5) # Applying weight decay as an regularization
solver.update()
if i % 100 == 0: # Print for each 10 iterations
print(i, loss.d)
learning_rate = 1e-2
solver = S.Sgd(learning_rate)
with nn.parameter_scope("embed2d"):
# Only 2d embedding affine will be updated.
solver.set_parameters(nn.get_parameters())
training_siamese(2000)
# Decay learning rate
solver.set_learning_rate(solver.learning_rate() * 0.1)
training_siamese(2000)
Explanation: We build two stream CNNs and compare them with the contrastive loss function defined above. Note that both CNNs have the same parameter hierarchy, which means both parameters are shared.
End of explanation
all_image = digits.images[:512, None]
all_label = digits.target[:512]
x_all = nn.Variable(all_image.shape)
x_all.d = all_image
with nn.auto_forward():
embed, _ = cnn_embed(x_all, test=True)
plt.figure(figsize=(16, 9))
for i in range(10):
c = plt.cm.Set1(i / 10.) # Maybe it doesn't work in an older version of Matplotlib where color map lies in [0, 256)
plt.plot(embed.d[all_label == i, 0].flatten(), embed.d[
all_label == i, 1].flatten(), '.', c=c)
plt.legend(list(map(str, range(10))))
plt.grid()
Explanation: We visualize embedded training images as following. You see the images from the same class embedded near each other.
End of explanation |
10,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a single Table vs EArray + Table
The PyTables community keeps getting asked what can be considered a FAQ. Namely, should I use a single Table for storing my data, or should I split it into a Table and an Array?
Although there is not a totally general answer, the study below addresses this for the common case where one has 'raw data' and other data that can be considered 'meta'. See for example
Step1: Using Tables to store everything
Step2: So, using no compression leads to best speed, whereas Zlib can compress data by ~32x. Zlib is ~3x slower than using no compression though. On its hand, the Blosc compressor is faster but it can barely compress the dataset.
Using EArrays for storing raw data and Table for other metadata
Step3: We see that by using the Blosc compressor one can achieve around 10x faster output operation wrt Zlib, although the compression ratio can be somewhat smaller (but still pretty good).
Step4: After adding the table we continue to see that a better compression ratio is achieved for EArray + Table with respect to a single Table. Also, Blosc can make writing files significantly faster than not using compression (it has to write less).
Retrieving data from a single Table
Step5: As Blosc could not compress the table, it has a performance that is worse (quite worse actually) to the uncompressed table. On its hand, Zlib can be more than 3x slower for reading than without compression.
Retrieving data from the EArray + Table
Step6: So, the EArray + Table takes a similar time to read than a pure Table approach when no compression is used. And for some reason, when Zlib is used for compressing the data, the EArray + Table scenario degrades read speed significantly. However, when the Blosc compression is used, the EArray + Table works actually faster than for the single Table.
Some plots on speeds and sizes
Step7: Let's have a look at the speeds at which data can be stored and read using the different paradigms
Step8: And now, see the different sizes for the final files | Python Code:
import numpy as np
import tables
tables.print_versions()
LEN_PMT = int(1.2e6)
NPMTS = 12
NEVENTS = 10
!rm PMT*.h5
def gaussian(x, mu, sig):
return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
x = np.linspace(0, 1, int(1e7))
rd = (gaussian(x, 1, 1.) * 1e6).astype(np.int32)
def raw_data(length):
    # Return the data that you think best represents PMT waveforms
    #return np.arange(length, dtype=np.int32)
    return rd[:length]
Explanation: Using a single Table vs EArray + Table
The PyTables community keeps getting asked what can be considered a FAQ. Namely, should I use a single Table for storing my data, or should I split it into a Table and an Array?
Although there is not a totally general answer, the study below addresses this for the common case where one has 'raw data' and other data that can be considered 'meta'. See for example: https://groups.google.com/forum/#!topic/pytables-users/vBEiaRzp3gI
End of explanation
class PMTRD(tables.IsDescription):
# event_id = tables.Int32Col(pos=1, indexed=True)
event_id = tables.Int32Col(pos=1)
npmt = tables.Int8Col(pos=2)
pmtrd = tables.Int32Col(shape=LEN_PMT, pos=3)
def one_table(filename, filters):
with tables.open_file("{}-{}-{}.h5".format(filename, filters.complib, filters.complevel), "w", filters=filters) as h5t:
pmt = h5t.create_table(h5t.root, "pmt", PMTRD, expectedrows=NEVENTS*NPMTS)
pmtr = pmt.row
for i in range(NEVENTS):
for j in range(NPMTS):
pmtr['event_id'] = i
pmtr['npmt'] = j
pmtr['pmtrd'] = raw_data(LEN_PMT)
pmtr.append()
# Using no compression
%time one_table("PMTs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
%time one_table("PMTs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 9) compression
%time one_table("PMTs", tables.Filters(complib="blosc:lz4", complevel=9))
ls -sh *.h5
Explanation: Using Tables to store everything
End of explanation
def rawdata_earray(filename, filters):
with tables.open_file("{}-{}.h5".format(filename, filters.complib), "w", filters=filters) as h5a:
pmtrd = h5a.create_earray(h5a.root, "pmtrd", tables.Int32Atom(), shape=(0, NPMTS, LEN_PMT),
chunkshape=(1,1,LEN_PMT))
for i in range(NEVENTS):
rdata = []
for j in range(NPMTS):
rdata.append(raw_data(LEN_PMT))
pmtrd.append(np.array(rdata).reshape(1, NPMTS, LEN_PMT))
pmtrd.flush()
# Using no compression
%time rawdata_earray("PMTAs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
%time rawdata_earray("PMTAs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 9) compression
%time rawdata_earray("PMTAs", tables.Filters(complib="blosc:lz4", complevel=9))
!ls -sh *.h5
Explanation: So, using no compression leads to the best speed, whereas Zlib can compress the data by ~32x. Zlib is ~3x slower than using no compression, though. For its part, the Blosc compressor is faster, but it can barely compress this dataset.
Using EArrays for storing raw data and Table for other metadata
End of explanation
# Add the event IDs in a separate table in the same file
class PMTRD(tables.IsDescription):
# event_id = tables.Int32Col(pos=1, indexed=True)
event_id = tables.Int32Col(pos=1)
npmt = tables.Int8Col(pos=2)
def add_table(filename, filters):
with tables.open_file("{}-{}.h5".format(filename, filters.complib), "a", filters=filters) as h5a:
pmt = h5a.create_table(h5a.root, "pmt", PMTRD)
pmtr = pmt.row
for i in range(NEVENTS):
for j in range(NPMTS):
pmtr['event_id'] = i
pmtr['npmt'] = j
pmtr.append()
# Using no compression
%time add_table("PMTAs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
%time add_table("PMTAs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 9) compression
%time add_table("PMTAs", tables.Filters(complib="blosc:lz4", complevel=9))
!ls -sh *.h5
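# Illustrative addition (not in the original notebook): the same size comparison done
# programmatically instead of shelling out to `ls`.
import os
for fn in sorted(os.listdir('.')):
    if fn.endswith('.h5'):
        print("{:24s} {:6.1f} MB".format(fn, os.path.getsize(fn) / 2**20))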
Explanation: We see that by using the Blosc compressor one can achieve around 10x faster output operation wrt Zlib, although the compression ratio can be somewhat smaller (but still pretty good).
End of explanation
def read_single_table(complib, complevel):
with tables.open_file("PMTs-{}-{}.h5".format(complib, complevel), "r") as h5t:
pmt = h5t.root.pmt
for i, row in enumerate(pmt):
event_id, npmt, pmtrd = row["event_id"], row["npmt"], row["pmtrd"][:]
if i % 20 == 0:
print(event_id, npmt, pmtrd[0:5])
%time read_single_table("None", 0)
%time read_single_table("zlib", 5)
%time read_single_table("blosc:lz4", 9)
Explanation: After adding the table we continue to see that a better compression ratio is achieved for EArray + Table with respect to a single Table. Also, Blosc can make writing files significantly faster than not using compression (it has to write less).
Retrieving data from a single Table
End of explanation
def read_earray_table(complib, complevel):
with tables.open_file("PMTAs-{}.h5".format(complib, "r")) as h5a:
pmt = h5a.root.pmt
pmtrd_ = h5a.root.pmtrd
for i, row in enumerate(pmt):
event_id, npmt = row["event_id"], row["npmt"]
pmtrd = pmtrd_[event_id, npmt]
if i % 20 == 0:
print(event_id, npmt, pmtrd[0:5])
%time read_earray_table("None", 0)
%time read_earray_table("zlib", 5)
%time read_earray_table("blosc:lz4", 9)
Explanation: As Blosc could not compress the table, its performance is worse (quite a bit worse, actually) than the uncompressed table. For its part, Zlib can be more than 3x slower for reading than no compression.
Retrieving data from the EArray + Table
End of explanation
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: So, the EArray + Table takes a similar time to read as the pure Table approach when no compression is used. And for some reason, when Zlib is used for compressing the data, the EArray + Table scenario degrades read speed significantly. However, when Blosc compression is used, the EArray + Table actually works faster than the single Table.
Some plots on speeds and sizes
End of explanation
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=False, sharey=False)
fig.set_size_inches(w=8, h=7, forward=False)
f = .550 # dataset size in GB; dividing by the elapsed seconds gives GB/s
rects1 = ax1.bar(np.arange(3), f / np.array([.323, 3.17, .599]), 0.25, color='r')
rects2 = ax1.bar(np.arange(3) + 0.25, f / np.array([.250, 2.74, .258]), 0.25, color='y')
_ = ax1.set_ylabel('GB/s')
_ = ax1.set_xticks(np.arange(3) + 0.25)
_ = ax1.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax1.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax1.set_title('Speed to store data')
rects1 = ax2.bar(np.arange(3), f / np.array([.099, .592, .782]), 0.25, color='r')
rects2 = ax2.bar(np.arange(3) + 0.25, f / np.array([.082, 1.09, .171]), 0.25, color='y')
_ = ax2.set_ylabel('GB/s')
_ = ax2.set_xticks(np.arange(3) + 0.25)
_ = ax2.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax2.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax2.set_title('Speed to read data')
Explanation: Let's have a look at the speeds at which data can be stored and read using the different paradigms:
End of explanation
fig, ax1 = plt.subplots()
fig.set_size_inches(w=8, h=5)
rects1 = ax1.bar(np.arange(3), np.array([550, 17, 42]), 0.25, color='r')
rects2 = ax1.bar(np.arange(3) + 0.25, np.array([550, 4.1, 9]), 0.25, color='y')
_ = ax1.set_ylabel('MB')
_ = ax1.set_xticks(np.arange(3) + 0.25)
_ = ax1.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax1.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax1.set_title('Size for stored datasets')
Explanation: And now, see the different sizes for the final files:
End of explanation |
10,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory data analysis
Step1: Changes in fatal accidents following legalization?
One interesting source of exogenous variation is Colorado and Washington's legalization of cannabis in 2014. If cannabis usage increased following legalization and this translated into more impaired driving, then there should be an increase in the number of fatal auto accidents in these states after 2014.
Step2: Visualize accident statistics for Colorado, Washington, New Mexico (similar to Colorado), and Oregon (similar to Washington).
Step3: We'll specify a difference-in-differences design with "Treated" states (who legalized) and "Time" (when they legalized), while controlling for differences in population. The Treated
Step4: Changes in fatal accidents on 4/20?
There's another exogenous source of variation in this car crash data we can exploit
Step5: Inspect the main effect of cases on April 20 are compared to the week before and after.
Step6: Estimate the models. The Fourtwenty and Legal dummy variables (and their interaction) capture whether fatal accidents increased on April 20 compared to the week beforehand and afterwards, while controlling for state legality, year, whether it's a weekend, and state population. We do not observe a statistically signidicant increase in the number of incidents, alcohol-involved incidents, and total fatalities on April 20.
Step7: Case study
Step8: Here are two plots of the pageview dynamics for the seed "Cannabis in California" article.
Step9: Here is a visualization of the local hyperlink network around "Cannabis in California."
Step10: What kinds of causal arguments could you make from these pageview data and the hyperlink networks?
Appendix 1
Step11: The state population estimates for 2010-2017 from the U.S. Census Bureau.
Step12: Groupby-aggregate the data by state, month, day, and year counting the number of cases, alcohol-involved deaths, and total fatalities. Save the data as "accident_counts.csv".
Step15: Appendix 2
Step16: Get all of the links from "Cannabis in California", and add the seed article itself.
Step17: Make a network object of these hyperlinks among articles.
Step18: Get the pageviews for the articles linking from "Cannabis in California".
Step19: Define a function cleanup_pageviews to make a rectangular DataFrame with dates as index, pages as columns, and pageviews as values. | Python Code:
sb.factorplot(x='HOUR',y='ST_CASE',hue='WEEKDAY',data=counts_df,
aspect=2,order=range(24),palette='nipy_spectral',dodge=.5)
sb.factorplot(x='MONTH',y='ST_CASE',hue='WEEKDAY',data=counts_df,
aspect=2,order=range(1,13),palette='nipy_spectral',dodge=.5)
Explanation: Exploratory data analysis
End of explanation
annual_state_counts_df = counts_df.groupby(['STATE','YEAR']).agg({'ST_CASE':np.sum,'DRUNK_DR':np.sum,'FATALS':np.sum}).reset_index()
annual_state_counts_df = pd.merge(annual_state_counts_df,population_estimates,
left_on=['STATE','YEAR'],right_on=['State','Year']
)
annual_state_counts_df = annual_state_counts_df[['STATE','YEAR','ST_CASE','DRUNK_DR','FATALS','Population']]
annual_state_counts_df.head()
Explanation: Changes in fatal accidents following legalization?
One interesting source of exogenous variation is Colorado and Washington's legalization of cannabis in 2014. If cannabis usage increased following legalization and this translated into more impaired driving, then there should be an increase in the number of fatal auto accidents in these states after 2014.
End of explanation
_cols = ['ST_CASE','DRUNK_DR','FATALS']
annual_co_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Colorado") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_wa_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Washington") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_nm_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "New Mexico") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
annual_or_counts = annual_state_counts_df[(annual_state_counts_df['STATE'] == "Oregon") & (annual_state_counts_df['YEAR'] > 2010)].set_index('YEAR')[_cols]
# Make the figures
f,axs = plt.subplots(3,1,figsize=(10,6),sharex=True)
# Plot the cases
annual_co_counts.plot.line(y='ST_CASE',c='blue',ax=axs[0],legend=False,lw=3)
annual_wa_counts.plot.line(y='ST_CASE',c='green',ax=axs[0],legend=False,lw=3)
annual_nm_counts.plot.line(y='ST_CASE',c='red',ls='--',ax=axs[0],legend=False,lw=3)
annual_or_counts.plot.line(y='ST_CASE',c='orange',ls='--',ax=axs[0],legend=False,lw=3)
axs[0].set_ylabel('Fatal Incidents')
# Plot the drunk driving cases
annual_co_counts.plot.line(y='DRUNK_DR',c='blue',ax=axs[1],legend=False,lw=3)
annual_wa_counts.plot.line(y='DRUNK_DR',c='green',ax=axs[1],legend=False,lw=3)
annual_nm_counts.plot.line(y='DRUNK_DR',c='red',ls='--',ax=axs[1],legend=False,lw=3)
annual_or_counts.plot.line(y='DRUNK_DR',c='orange',ls='--',ax=axs[1],legend=False,lw=3)
axs[1].set_ylabel('Drunk Driving')
# Plot the fatalities
annual_co_counts.plot.line(y='FATALS',c='blue',ax=axs[2],legend=False,lw=3)
annual_wa_counts.plot.line(y='FATALS',c='green',ax=axs[2],legend=False,lw=3)
annual_nm_counts.plot.line(y='FATALS',c='red',ls='--',ax=axs[2],legend=False,lw=3)
annual_or_counts.plot.line(y='FATALS',c='orange',ls='--',ax=axs[2],legend=False,lw=3)
axs[2].set_ylabel('Total Fatalities')
# Plot 2014 legalization
for ax in axs:
ax.axvline(x=2014,c='r')
# Stuff for legend
b = mlines.Line2D([],[],color='blue',label='Colorado',linewidth=3)
g = mlines.Line2D([],[],color='green',label='Washington',linewidth=3)
r = mlines.Line2D([],[],color='red',linestyle='--',label='New Mexico',linewidth=3)
o = mlines.Line2D([],[],color='orange',linestyle='--',label='Oregon',linewidth=3)
axs[2].legend(loc='lower center',ncol=4,handles=[b,g,r,o],fontsize=12,bbox_to_anchor=(.5,-.75))
f.tight_layout()
annual_state_counts_df['Treated'] = np.where(annual_state_counts_df['STATE'].isin(['Colorado','Washington']),1,0)
annual_state_counts_df['Time'] = np.where(annual_state_counts_df['YEAR'] >= 2014,1,0)
annual_state_counts_df = annual_state_counts_df[annual_state_counts_df['YEAR'] >= 2010]
annual_state_counts_df.query('STATE == "Colorado"')
Explanation: Visualize accident statistics for Colorado, Washington, New Mexico (similar to Colorado), and Oregon (similar to Washington).
End of explanation
m_cases = smf.ols(formula = 'ST_CASE ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
m_cases = smf.ols(formula = 'FATALS ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
m_cases = smf.ols(formula = 'DRUNK_DR ~ Treated*Time + Population',
data = annual_state_counts_df).fit()
print(m_cases.summary())
Explanation: We'll specify a difference-in-differences design with "Treated" states (who legalized) and "Time" (when they legalized), while controlling for differences in population. The Treated:Time interaction is the Difference-in-Differences estimate, which is not statistically significant. This suggests legalization did not increase the risk of fatal auto accidents in these states.
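In equation form, the specification estimated above is roughly
$$
y_{st} = \beta_0 + \beta_1\,\mathrm{Treated}_s + \beta_2\,\mathrm{Time}_t + \beta_3\,(\mathrm{Treated}_s \times \mathrm{Time}_t) + \beta_4\,\mathrm{Population}_{st} + \varepsilon_{st},
$$
where $\beta_3$ on the Treated:Time interaction is the difference-in-differences estimate.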
End of explanation
counts_df.head()
population_estimates.head()
# Select only data after 2004 in the month of April
april_df = counts_df.query('MONTH == 4 & YEAR > 2004').set_index(['STATE','HOUR','MONTH','DAY','YEAR'])
# Re-index the data to fill in missing dates
ix = pd.MultiIndex.from_product([state_codings.values(),range(0,24),[4],range(1,31),range(2005,2017)],
names = ['STATE','HOUR','MONTH','DAY','YEAR'])
april_df = april_df.reindex(ix).fillna(0)
april_df.reset_index(inplace=True)
# Add in population data
april_df = pd.merge(april_df,population_estimates,
left_on=['STATE','YEAR'],right_on=['State','Year'])
april_df = april_df[[i for i in april_df.columns if i not in ['Year','State']]]
# Inspect
april_df.head()
# Calculate whether day is a Friday, Saturday, or Sunday
april_df['Weekday'] = pd.to_datetime(april_df[['YEAR','MONTH','DAY']]).apply(lambda x:x.weekday())
april_df['Weekend'] = np.where(april_df['Weekday'] >= 4,1,0)
# Treated days are on April 20
april_df['Fourtwenty'] = np.where(april_df['DAY'] == 20,1,0)
april_df['Legal'] = np.where((april_df['STATE'].isin(['Colorado','Washington'])) & (april_df['YEAR'] >= 2014),1,0)
# Examine data for a week before and after April 20
april_df = april_df[april_df['DAY'].isin([13,20,27])]
# Inspect Colorado data
april_df.query('STATE == "Colorado"').sort_values(['YEAR','DAY'])
Explanation: Changes in fatal accidents on 4/20?
There's another exogenous source of variation in this car crash data we can exploit: the unofficial cannabis enthusiast holiday of April 20. If consumption increases on this day compared to a week before or after (April 13 and 27), does this explain changes in fatal auto accidents?
End of explanation
sb.factorplot(x='DAY',y='ST_CASE',data=april_df,kind='bar',palette=['grey','green','grey'])
Explanation: Inspect how the number of cases on April 20 compares to the week before and after.
End of explanation
m_cases_420 = smf.ols(formula = 'ST_CASE ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_cases_420.summary())
m_drunk_420 = smf.ols(formula = 'DRUNK_DR ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_drunk_420.summary())
m_fatal_420 = smf.ols(formula = 'FATALS ~ Fourtwenty*Legal + YEAR + Weekend + Population',
data = april_df).fit()
print(m_fatal_420.summary())
Explanation: Estimate the models. The Fourtwenty and Legal dummy variables (and their interaction) capture whether fatal accidents increased on April 20 compared to the week beforehand and afterwards, while controlling for state legality, year, whether it's a weekend, and state population. We do not observe a statistically significant increase in the number of incidents, alcohol-involved incidents, or total fatalities on April 20.
End of explanation
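For easier comparison, the three interaction estimates above can be collected into one small table; this is just a sketch that reuses the fitted results objects from the previous cell.
# Gather the Fourtwenty:Legal estimates and p-values from the three models above.
did_420 = pd.DataFrame({name: {'estimate': m.params['Fourtwenty:Legal'],
                               'p-value': m.pvalues['Fourtwenty:Legal']}
                        for name, m in [('cases', m_cases_420),
                                        ('drunk', m_drunk_420),
                                        ('fatalities', m_fatal_420)]}).T
print(did_420)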
ca2016 = pd.read_csv('wikipv_ca_2016.csv',encoding='utf8',parse_dates=['timestamp']).set_index('timestamp')
ca2018 = pd.read_csv('wikipv_ca_2018.csv',encoding='utf8',parse_dates=['timestamp']).set_index('timestamp')
ca2016.head()
Explanation: Case study: Wikipedia pageview dynamics
On November 8, 2016, California voters passed Proposition 64 legalizing recreational use of cannabis. On January 1, 2018, recreational sales began. The following two files capture the daily pageview data for the article Cannabis in California as well as the daily pageviews for all the other pages it links to.
End of explanation
f,axs = plt.subplots(2,1,figsize=(10,5))
ca2016['Cannabis in California'].plot(ax=axs[0],color='red',lw=3)
ca2018['Cannabis in California'].plot(ax=axs[1],color='blue',lw=3)
axs[0].axvline(pd.Timestamp('2016-11-08'),c='k',ls='--')
axs[1].axvline(pd.Timestamp('2018-01-01'),c='k',ls='--')
for ax in axs:
ax.set_ylabel('Pageviews')
f.tight_layout()
Explanation: Here are two plots of the pageview dynamics for the seed "Cannabis in California" article.
End of explanation
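A rough before/after comparison of mean daily pageviews around each event date gives a quick numeric summary of the jumps visible in the plots above (a sketch, not a formal interrupted-time-series model).
# Mean daily pageviews before vs. after each event for the seed article.
seed = 'Cannabis in California'
print('2016 vote:  {0:,.0f} -> {1:,.0f}'.format(
    ca2016.loc[:'2016-11-07', seed].mean(),
    ca2016.loc['2016-11-08':, seed].mean()))
print('2018 sales: {0:,.0f} -> {1:,.0f}'.format(
    ca2018.loc[:'2017-12-31', seed].mean(),
    ca2018.loc['2018-01-01':, seed].mean()))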
g = nx.read_gexf('wikilinks_cannabis_in_california.gexf')
f,ax = plt.subplots(1,1,figsize=(10,10))
g_pos = nx.layout.kamada_kawai_layout(g)
nx.draw(G = g,
ax = ax,
pos = g_pos,
with_labels = True,
node_size = [dc*(len(g) - 1)*10 for dc in nx.degree_centrality(g).values()],
font_size = 7.5,
font_weight = 'bold',
node_color = 'tomato',
edge_color = 'grey'
)
Explanation: Here is a visualization of the local hyperlink network around "Cannabis in California."
End of explanation
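To see which articles anchor this local network, the same degree centrality used to size the nodes can be ranked directly (a small sketch using the graph g loaded above).
# Top ten pages by degree centrality in the local hyperlink network.
centrality = nx.degree_centrality(g)
for page, dc in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print('{0:.3f}  {1}'.format(dc, page))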
all_accident_df = pd.read_csv('accidents.csv',encoding='utf8',index_col=0)
all_accident_df.head()
Explanation: What kinds of causal arguments could you make from these pageview data and the hyperlink networks?
Appendix 1: Cleaning NHTSA FARS Data
"accidents.csv" is a ~450MB file after concatenating the raw annual data from NHTSA FARS.
End of explanation
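For reference, the concatenation step itself is straightforward. The sketch below assumes one raw accident file per year has already been downloaded; the per-year filenames and the encoding are placeholders, not the actual FARS file names.
# Hypothetical sketch: stack per-year FARS accident files into accidents.csv.
annual_frames = [pd.read_csv('fars_accident_{0}.csv'.format(year),
                             encoding='latin1', low_memory=False)
                 for year in range(2000, 2017)]
all_accident_df = pd.concat(annual_frames, ignore_index=True)
all_accident_df.to_csv('accidents.csv', encoding='utf8')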
population_estimates = pd.read_csv('census_pop_estimates.csv')
_cols = [i for i in population_estimates.columns if "POPESTIMATE" in i] + ['NAME']
population_estimates_stacked = population_estimates[_cols].set_index('NAME').unstack().reset_index()
population_estimates_stacked.rename(columns={'level_0':'Year','NAME':'State',0:'Population'},inplace=True)
population_estimates_stacked['Year'] = population_estimates_stacked['Year'].str.slice(-4).astype(int)
population_estimates_stacked = population_estimates_stacked[population_estimates_stacked['State'].isin(state_codings.values())]
population_estimates_stacked.dropna(subset=['Population'],inplace=True)
population_estimates_stacked.to_csv('population_estimates.csv',encoding='utf8')
Explanation: The state population estimates for 2010-2017 from the U.S. Census Bureau.
End of explanation
gb_vars = ['STATE','HOUR','DAY','MONTH','YEAR']
agg_dict = {'ST_CASE':len,'DRUNK_DR':np.sum,'FATALS':np.sum}
counts_df = all_accident_df.groupby(gb_vars).agg(agg_dict).reset_index()
counts_df['STATE'] = counts_df['STATE'].map(state_codings)
counts_df = counts_df.query('YEAR > 1999')
counts_df.to_csv('accident_counts.csv',encoding='utf8',index=False)
counts_df.head()
Explanation: Groupby-aggregate the data by state, month, day, and year counting the number of cases, alcohol-involved deaths, and total fatalities. Save the data as "accident_counts.csv".
End of explanation
from datetime import datetime
import requests, json
from bs4 import BeautifulSoup
from urllib.parse import urlparse, quote, unquote
import networkx as nx
def get_page_outlinks(page_title,lang='en',redirects=1):
Takes a page title and returns a list of wiki-links on the page. The
list may contain duplicates and the position in the list is approximately
where the links occurred.
page_title - a string with the title of the page on Wikipedia
lang - a string (typically two letter ISO 639-1 code) for the language
edition, defaults to "en"
redirects - 1 or 0 for whether to follow page redirects, defaults to 1
Returns:
    outlinks_list - a list of the page titles linked from the page, in roughly the order they appear
# Replace spaces with underscores
page_title = page_title.replace(' ','_')
bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:','File:','Digital object identifier','(page does not exist)']
# Get the response from the API for a query
# After passing a page title, the API returns the HTML markup of the current article version within a JSON payload
req = requests.get('https://{2}.wikipedia.org/w/api.php?action=parse&format=json&page={0}&redirects={1}&prop=text&disableeditsection=1&disabletoc=1'.format(page_title,redirects,lang))
# Read the response into JSON to parse and extract the HTML
json_string = json.loads(req.text)
# Initialize an empty list to store the links
outlinks_list = []
if 'parse' in json_string.keys():
page_html = json_string['parse']['text']['*']
# Parse the HTML into Beautiful Soup
soup = BeautifulSoup(page_html,'lxml')
# Remove sections at end
bad_sections = ['See_also','Notes','References','Bibliography','External_links']
sections = soup.find_all('h2')
for section in sections:
if section.span['id'] in bad_sections:
# Clean out the divs
div_siblings = section.find_next_siblings('div')
for sibling in div_siblings:
sibling.clear()
# Clean out the ULs
ul_siblings = section.find_next_siblings('ul')
for sibling in ul_siblings:
sibling.clear()
# Delete tags associated with templates
for tag in soup.find_all('tr'):
tag.replace_with('')
# For each paragraph tag, extract the titles within the links
for para in soup.find_all('p'):
for link in para.find_all('a'):
if link.has_attr('title'):
title = link['title']
# Ignore links that aren't interesting or are redlinks
if all(bad not in title for bad in bad_titles) and 'redlink' not in link['href']:
outlinks_list.append(title)
# For each unordered list, extract the titles within the child links
for unordered_list in soup.find_all('ul'):
for item in unordered_list.find_all('li'):
for link in item.find_all('a'):
if link.has_attr('title'):
title = link['title']
# Ignore links that aren't interesting or are redlinks
if all(bad not in title for bad in bad_titles) and 'redlink' not in link['href']:
outlinks_list.append(title)
return outlinks_list
def get_pageviews(page_title,lang='en',date_from='20150701',date_to=str(datetime.today().date()).replace('-','')):
    Takes a Wikipedia page title and returns all the various pageview records
page_title - a string with the title of the page on Wikipedia
lang - a string (typically two letter ISO 639-1 code) for the language edition,
defaults to "en"
datefrom - a date string in a YYYYMMDD format, defaults to 20150701
dateto - a date string in a YYYYMMDD format, defaults to today
Returns:
    concat_df - a DataFrame indexed by date and multi-columned by agent and access type
quoted_page_title = quote(page_title, safe='')
df_list = []
#for access in ['all-access','desktop','mobile-app','mobile-web']:
#for agent in ['all-agents','user','spider','bot']:
s = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/{1}.wikipedia.org/{2}/{3}/{0}/daily/{4}/{5}".format(quoted_page_title,lang,'all-access','user',date_from,date_to)
json_response = requests.get(s).json()
if 'items' in json_response:
df = pd.DataFrame(json_response['items'])
df_list.append(df)
concat_df = pd.concat(df_list)
concat_df['timestamp'] = pd.to_datetime(concat_df['timestamp'],format='%Y%m%d%H')
concat_df = concat_df.set_index(['timestamp','agent','access'])['views'].unstack([1,2]).sort_index(axis=1)
concat_df[('page','page')] = page_title
return concat_df
else:
print("Error on {0}".format(page_title))
pass
Explanation: Appendix 2: Get page outlinks
Load libraries and define two functions:
get_page_outlinks - Get all of the outlinks from the current version of the page.
get_pageviews - Get all of the pageviews for an article over a time range
End of explanation
ca_links = get_page_outlinks('Cannabis in California') + ['Cannabis in California']
link_d = {}
for l in list(set(ca_links)):
link_d[l] = get_page_outlinks(l)
Explanation: Get all of the links from "Cannabis in California", and add the seed article itself.
End of explanation
g = nx.DiGraph()
seed_edges = [('Cannabis in California',l) for l in link_d['Cannabis in California']]
#g.add_edges_from(seed_edges)
for page,links in link_d.items():
for link in links:
if link in link_d['Cannabis in California']:
g.add_edge(page,link)
print("There are {0:,} nodes and {1:,} edges in the network.".format(g.number_of_nodes(),g.number_of_edges()))
nx.write_gexf(g,'wikilinks_cannabis_in_california.gexf')
Explanation: Make a network object of these hyperlinks among articles.
End of explanation
pvs_2016 = {}
pvs_2018 = {}
pvs_2016['Cannabis in California'] = get_pageviews('Cannabis in California',
date_from='20160801',date_to='20170201')
pvs_2018['Cannabis in California'] = get_pageviews('Cannabis in California',
date_from='20171001',date_to='20180301')
for page in list(set(ca_links)):
pvs_2016[page] = get_pageviews(page,date_from='20160801',date_to='20170201')
pvs_2018[page] = get_pageviews(page,date_from='20171001',date_to='20180301')
Explanation: Get the pageviews for the articles linking from "Cannabis in California".
End of explanation
def cleanup_pageviews(pv_dict):
_df = pd.concat(pv_dict.values())
_df.reset_index(inplace=True)
_df.columns = _df.columns.droplevel(0)
_df.columns = ['timestamp','pageviews','page']
_df = _df.set_index(['timestamp','page']).unstack('page')['pageviews']
return _df
cleanup_pageviews(pvs_2016).to_csv('wikipv_ca_2016.csv',encoding='utf8')
cleanup_pageviews(pvs_2018).to_csv('wikipv_ca_2018.csv',encoding='utf8')
Explanation: Define a function cleanup_pageviews to make a rectangular DataFrame with dates as index, pages as columns, and pageviews as values.
End of explanation |
10,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to traverse to GO parents and ancestors
Traverse immediate parents or all ancestors with or without user-specified optional relationships
Parents and Ancestors described
Code to get Parents and Ancestors
Get parents through is_a relationship
Get parents through is_a relationship and optional relationship, regulates.
Get ancestors through is_a relationship
Get ancestors through is_a relationship and optional relationship, regulates.
Parents and Ancestors
Parents
Parents are terms directly above a GO term
The yellow term, regulation of metabolic process, has one or two parents.
1) If using only the default is_a relationship, the only parent is circled in green
Step1: Get parents through is_a relationship
Parent is circled in green
Step2: Get parents through is_a relationship and optional relationship, regulates
Parents are circled in purple
Step3: Get ancestors through is_a relationship
Not circled, but can be seen in figure
Step4: Get ancestors through is_a relationship and optional relationship, regulates
Circles in blue | Python Code:
import os
from goatools.obo_parser import GODag
# Load a small test GO DAG and all the optional relationships,
# like 'regulates' and 'part_of'
godag = GODag('../tests/data/i126/viral_gene_silence.obo',
optional_attrs={'relationship'})
Explanation: How to traverse to GO parents and ancestors
Traverse immediate parents or all ancestors with or without user-specified optional relationships
Parents and Ancestors described
Code to get Parents and Ancestors
Get parents through is_a relationship
Get parents through is_a relationship and optional relationship, regulates.
Get ancestors through is_a relationship
Get ancestors through is_a relationship and optional relationship, regulates.
Parents and Ancestors
Parents
Parents are terms directly above a GO term
The yellow term, regulation of metabolic process, has one or two parents.
1) If using only the default is_a relationship, the only parent is circled in green:
regulation of biological process
2) If adding the optional relationship, regulates, the two parents are circled in purple:
regulation of biological process
metabolic process
Ancestors
Ancestors are all terms above a GO term, traversing up all of the GO hierarchy.
3) If adding the optional relationship, regulates, the four ancestors are circled in blue:
biological_process
biological regulation
regulation of biological process
metabolic process
If using only the default is_a relationship, there are three ancestors (not circled)
biological_process
biological regulation
regulation of biological process
<img src="images/parents_and_ancestors.png" alt="parents_and_ancestors" width="550">
Code to get Parents and Ancestors
End of explanation
GO_ID = 'GO:0019222' # regulation of metabolic process
from goatools.godag.go_tasks import get_go2parents
optional_relationships = set()
go2parents_isa = get_go2parents(godag, optional_relationships)
print('{GO} parent: {P}'.format(
GO=GO_ID,
P=go2parents_isa[GO_ID]))
Explanation: Get parents through is_a relationship
Parent is circled in green
End of explanation
optional_relationships = {'regulates', 'negatively_regulates', 'positively_regulates'}
go2parents_reg = get_go2parents(godag, optional_relationships)
print('{GO} parents: {P}'.format(
GO=GO_ID,
P=go2parents_reg[GO_ID]))
Explanation: Get parents through is_a relationship and optional relationship, regulates
Parents are circled in purple
End of explanation
from goatools.gosubdag.gosubdag import GoSubDag
gosubdag_r0 = GoSubDag([GO_ID], godag, prt=None)
print('{GO} ancestors: {P}'.format(
GO=GO_ID,
P=gosubdag_r0.rcntobj.go2ancestors[GO_ID]))
Explanation: Get ancestors through is_a relationship
Not circled, but can be seen in figure
End of explanation
gosubdag_r1 = GoSubDag([GO_ID], godag, relationships=optional_relationships, prt=None)
print('{GO} ancestors: {P}'.format(
GO=GO_ID,
P=gosubdag_r1.rcntobj.go2ancestors[GO_ID]))
Explanation: Get ancestors through is_a relationship and optional relationship, regulates
Circled in blue
End of explanation |
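The ID sets printed above can be made easier to read by looking each term up in the loaded godag. This is a small sketch; it assumes the GOTerm objects expose name and depth attributes, as they do in recent goatools releases.
# Print each ancestor with its depth and human-readable name.
for go_id in sorted(gosubdag_r1.rcntobj.go2ancestors[GO_ID]):
    term = godag[go_id]
    print('{GO} depth={D} {NAME}'.format(GO=go_id, D=term.depth, NAME=term.name))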
10,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
solarposition.py tutorial
This tutorial needs your help to make it better!
Table of contents
Step1: SPA output
Step2: Speed tests
Step3: This numba test will only work properly if you have installed numba.
Step4: The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 by default. | Python Code:
import datetime
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# seaborn makes your plots look better
try:
import seaborn as sns
sns.set(rc={"figure.figsize": (12, 6)})
except ImportError:
print('We suggest you install seaborn using conda or pip and rerun this cell')
# finally, we import the pvlib library
import pvlib
import pvlib
from pvlib.location import Location
Explanation: solarposition.py tutorial
This tutorial needs your help to make it better!
Table of contents:
1. Setup
2. SPA output
2. Speed tests
This tutorial has been tested against the following package versions:
* pvlib 0.2.0
* Python 2.7.10
* IPython 3.2
* Pandas 0.16.2
It should work with other Python and Pandas versions. It requires pvlib >= 0.2.0 and IPython > 3.0.
Authors:
* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015.
Setup
End of explanation
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
print(tus)
golden = Location(39.742476, -105.1786, 'America/Denver', 1830, 'Golden')
print(golden)
golden_mst = Location(39.742476, -105.1786, 'MST', 1830, 'Golden MST')
print(golden_mst)
berlin = Location(52.5167, 13.3833, 'Europe/Berlin', 34, 'Berlin')
print(berlin)
times = pd.date_range(start=datetime.datetime(2014,6,23), end=datetime.datetime(2014,6,24), freq='1Min')
times_loc = times.tz_localize(tus.pytz)
times
pyephemout = pvlib.solarposition.pyephem(times, tus)
spaout = pvlib.solarposition.spa_python(times, tus)
reload(pvlib.solarposition)
pyephemout = pvlib.solarposition.pyephem(times_loc, tus)
spaout = pvlib.solarposition.spa_python(times_loc, tus)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
plt.figure()
pyephemout['elevation'].plot(label='pyephem')
spaout['elevation'].plot(label='spa')
(pyephemout['elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
(pyephemout['apparent_elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_zenith'].plot(label='pyephem apparent')
spaout['zenith'].plot(label='spa')
(pyephemout['apparent_zenith'] - spaout['zenith']).plot(label='diff')
plt.legend(ncol=3)
plt.title('zenith')
plt.figure()
pyephemout['apparent_azimuth'].plot(label='pyephem apparent')
spaout['azimuth'].plot(label='spa')
(pyephemout['apparent_azimuth'] - spaout['azimuth']).plot(label='diff')
plt.legend(ncol=3)
plt.title('azimuth')
reload(pvlib.solarposition)
pyephemout = pvlib.solarposition.pyephem(times, tus)
spaout = pvlib.solarposition.spa_python(times, tus)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=3)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
reload(pvlib.solarposition)
pyephemout = pvlib.solarposition.pyephem(times, golden)
spaout = pvlib.solarposition.spa_python(times, golden)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
pyephemout = pvlib.solarposition.pyephem(times, golden)
ephemout = pvlib.solarposition.ephemeris(times, golden)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times, loc)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
plt.xlim(pd.Timestamp('2015-06-28 03:00:00+02:00'), pd.Timestamp('2015-06-28 06:00:00+02:00'))
plt.ylim(-10,10)
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,3,28), end=datetime.date(2015,3,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times, loc)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,3,30), end=datetime.date(2015,3,31), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times, loc)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,6,28), end=datetime.date(2015,6,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times, loc)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
Explanation: SPA output
End of explanation
%%timeit
pyephemout = pvlib.solarposition.pyephem(times, loc)
#ephemout = pvlib.solarposition.ephemeris(times, loc)
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times, loc)
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times, loc, method='nrel_numpy')
Explanation: Speed tests
End of explanation
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times, loc, method='nrel_numba')
Explanation: This numba test will only work properly if you have installed numba.
End of explanation
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times, loc, method='nrel_numba', numthreads=16)
%%timeit
ephemout = pvlib.solarposition.spa_python(times, loc, how='numba', numthreads=16)
Explanation: The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 by default.
End of explanation |
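Since the numba path is optional, one way to keep the notebook runnable on machines without it is a simple import guard, reusing the same get_solarposition call as above (a sketch).
# Fall back to the pure-NumPy SPA implementation when numba is unavailable.
try:
    import numba  # noqa: F401
    spa_method = 'nrel_numba'
except ImportError:
    spa_method = 'nrel_numpy'
ephemout = pvlib.solarposition.get_solarposition(times, loc, method=spa_method)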
10,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
    Compute the derivatives for the Lorenz system at yvec(t).
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
"
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0,max_time,250*max_time)
soln = odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return t,soln
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
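A quick sanity check of the function above: with 250 points per time unit, a two-time-unit integration should return 500 time points and a 500 x 3 solution array.
# Check the shapes returned by solve_lorentz for one initial condition.
t, soln = solve_lorentz((5.0, 5.0, 5.0), max_time=2)
print(t.shape, soln.shape)  # expect (500,) and (500, 3)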
colors = plt.cm.hot(1)
colors
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
colors = plt.cm.hot(np.linspace(0,1,N))
np.random.seed(1)
for i in range(N):
ic = (np.random.random(3)-.5)*30
t,soln = solve_lorentz(ic, max_time, sigma, rho, beta)
x = [e[0] for e in soln]
y = [e[1] for e in soln]
z = [e[2] for e in soln]
plt.plot(x,z,color=colors[i])
plt.title('Lorenz System for Multiple Trajectories')
plt.xlabel('X Position')
    plt.ylabel('Z Position')
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
interact(plot_lorentz,max_time=(1,11,1),N=(1,51,1),sigma=(0.0,50.0),rho=(0.0,50.0),beta = fixed(8/3));
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
10,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistics for Hackers
An Exploration of Statistics Through Computational Simulation
A talk by Jake VanDerPlas for PyCon 2016
Slides available on speakerdeck
Motivation
There's no shortage of absolutely magnificent material out there on the topics of data science and machine learning for an autodidact, such as myself, to learn from. In fact, so many great resources exist that an individual can be forgiven for not knowing where to begin their studies, or for getting distracted once they're off the starting block. I honestly can't count the number of times that I've started working through many of these online courses and tutorials only to have my attention stolen by one of the multitudes of amazing articles on data analysis with Python, or some great new MOOC on Deep Learning. But this year is different! This year, for one of my new year's resolutions, I've decided to create a personalized data science curriculum and stick to it. This year, I promise not to just casually sign up for another course, or start reading yet another textbook to be distracted part way through. This year, I'm sticking to the plan.
As part of my personalized program of study, I've chosen to start with Harvard's Data Science course. I'm currently on week 3 and one of the suggested readings for this week is Jake VanderPlas' talk from PyCon 2016 titled "Statistics for Hackers". As I was watching the video and following along with the slides, I wanted to try out some of the examples and create a set of notes that I could refer to later, so I figured why not create a Jupyter notebook. Once I'd finished, I realized I'd created a decently-sized resource that could be of use to others working their way through the talk. The result is the article you're reading right now, the remainder of which contains my notes and code examples for Jake's excellent talk.
So, enjoy the article, I hope you find this resource useful, and if you have any problems or suggestions of any kind, the full notebook can be found on github, so please send me a pull request, or submit an issue, or just message me directly on Twitter.
Preliminaries
Step4: Warm-up
The talk starts off with a motivating example that asks the question "If you toss a coin 30 times and see 22 heads, is it a fair coin?"
We all know that a fair coin should come up heads roughly 15 out of 30 tosses, give or take, so it does seem unlikely to see so many heads. However, the skeptic might argue that even a fair coin could show 22 heads in 30 tosses from time-to-time. This could just be a chance event. So, the question would then be "how can you determine if you're tossing a fair coin?"
The Classic Method
The classic method would assume that the skeptic is correct and would then test the hypothesis (i.e., the Null Hypothesis) that the observation of 22 heads in 30 tosses could happen simply by chance. Let's start by first considering the probability of a single coin flip coming up heads and work our way up to 22 out of 30.
$$
P(H) = \frac{1}{2}
$$
As our equation shows, the probability of a single coin toss turning up heads is exactly 50% since there is an equal chance of either heads or tails turning up. Taking this one step further, to determine the probability of getting 2 heads in a row with 2 coin tosses, we would need to multiply the probability of getting heads by the probability of getting heads again since the two events are independent of one another.
$$
P(HH) = P(H) \cdot P(H) = P(H)^2 = \left(\frac{1}{2}\right)^2 = \frac{1}{4}
$$
From the equation above, we can see that the probability of getting 2 heads in a row from a total of 2 coin tosses is 25%. Let's now take a look at a slightly different scenario and calculate the probability of getting 2 heads and 1 tails with 3 coin tosses.
$$
P(HHT) = P(H)^2 \cdot P(T) = \left(\frac{1}{2}\right)^2 \cdot \frac{1}{2} = \left(\frac{1}{2}\right)^3 = \frac{1}{8}
$$
The equation above tells us that the probability of getting 2 heads and 1 tails in 3 tosses is 12.5%. This is actually the exact same probability as getting heads in all three tosses, which doesn't sound quite right. The problem is that we've only calculated the probability for a single permutation of 2 heads and 1 tails; specifically for the scenario where we only see tails on the third toss. To get the actual probability of tossing 2 heads and 1 tails we will have to add the probabilities for all of the possible permutations, of which there are exactly three
Step5: Now that we have a method that will calculate the probability for a specific event happening (e.g., 22 heads in 30 coin tosses), we can calculate the probability for every possible outcome of flipping a coin 30 times, and if we plot these values we'll get a visual representation of our coin's probability distribution.
Step6: The visualization above shows the probability distribution for flipping a fair coin 30 times. Using this visualization we can now determine the probability of getting, say for example, 12 heads in 30 flips, which looks to be about 8%. Notice that we've labeled our example of 22 heads as 0.8%. If we look at the probability of flipping exactly 22 heads, it looks to be a little less than 0.8%, in fact if we calculate it using the binom_prob function from above, we get 0.5%
Step8: So, then why do we have 0.8% labeled in our probability distribution above? Well, that's because we are showing the probability of getting at least 22 heads, which is also known as the p-value.
What's a p-value?
In statistical hypothesis testing we have an idea that we want to test, but considering that it's very hard to prove something to be true beyond doubt, rather than test our hypothesis directly, we formulate a competing hypothesis, called a null hypothesis, and then try to disprove it instead. The null hypothesis essentially assumes that the effect we're seeing in the data could just be due to chance.
In our example, the null hypothesis assumes we have a fair coin, and the way we determine if this hypothesis is true or not is by calculating how often flipping this fair coin 30 times would result in 22 or more heads. If we then take the number of times that we got 22 or more heads and divide that number by the total of all possible permutations of 30 coin tosses, we get the probability of tossing 22 or more heads with a fair coin. This probability is what we call the p-value.
The p-value is used to check the validity of the null hypothesis. The way this is done is by agreeing upon some predetermined upper limit for our p-value, below which we will assume that our null hypothesis is false. In other words, if our null hypothesis were true, and 22 heads in 30 flips could happen often enough by chance, we would expect to see it happen more often than the given threshold percentage of times. So, for example, if we chose 10% as our threshold, then we would expect to see 22 or more heads show up at least 10% of the time to determine that this is a chance occurrence and not due to some bias in the coin. Historically, the generally accepted threshold has been 5%, and so if our p-value is less than 5%, we can then make the assumption that our coin may not be fair.
The binom_prob function from above calculates the probability of a single event happening, so now all we need for calculating our p-value is a function that adds up the probabilities of a given event, or a more extreme event happening. So, as an example, we would need a function to add up the probabilities of getting 22 heads, 23 heads, 24 heads, and so on. The next bit of code creates that function and uses it to calculate our p-value.
Step9: Running the code above gives us a p-value of roughly 0.8%, which matches the value in our probability distribution above and is also less than the 5% threshold needed to reject our null hypothesis, so it does look like we may have a biased coin.
The Easier Method
That's an example of using the classic method for testing if our coin is fair or not. However, if you don't happen to have at least some background in statistics, it can be a little hard to follow at times, but luckily for us, there's an easier method...
Simulation!
The code below seeks to answer the same question of whether or not our coin is fair by running a large number of simulated coin flips and calculating the proportion of these experiments that resulted in at least 22 heads or more.
Step10: The result of our simulations is 0.8%, the exact same result we got earlier when we calculated the p-value using the classical method above. So, it definitely looks like it's possible that we have a biased coin since the chances of seeing 22 or more heads in 30 tosses of a fair coin is less than 1%.
Four Recipes for Hacking Statistics
We've just seen one example of how our hacking skills can make it easy for us to answer questions that typically only a statistician would be able to answer using the classical methods of statistical analysis. This is just one possible method for answering statistical questions using our coding skills, but Jake's talk describes four recipes in total for "hacking statistics", each of which is listed below. The rest of this article will go into each of the remaining techniques in some detail.
Direct Simulation
Shuffling
Bootstrapping
Cross Validation
In the Warm-up section above, we saw an example of direct simulation, the first recipe in our tour of statistical hacks. The next example uses the Shuffling method to figure out if there's a statistically significant difference between two different sample populations.
Shuffling
In this example, we look at the Dr. Seuss story about the Star-belly Sneetches. In this Seussian world, a group of creatures called the Sneetches are divided into two groups
Step11: If we then take a look at the average scores for each group of sneetches, we will see that there's a difference in scores of 6.6 between the two groups. So, on average, the star-bellied sneetches performed better on their tests than the plain-bellied sneetches. But, the real question is, is this a significant difference?
Step12: To determine if this is a significant difference, we could perform a t-test on our data to compute a p-value, and then just make sure that the p-value is less than the target 0.05. Alternatively, we could use simulation instead.
Unlike our first example, however, we don't have a generative function that we can use to create our probability distribution. So, how can we then use simulation to solve our problem?
Well, we can run a bunch of simulations where we randomly shuffle the labels (i.e., star-bellied or plain-bellied) of each sneetch, recompute the difference between the means, and then determine if the proportion of simulations in which the difference was at least as extreme as 6.6 was less than the target 5%. If so, we can conclude that the difference we see is, in fact, one that doesn't occur strictly by chance very often and so the difference is a significant one. In other words, if the proportion of simulations that have a difference of 6.6 or greater is less than 5%, we can conclude that the labels really do matter, and so we can conclude that star-bellied sneetches are "better" than their plain-bellied counterparts.
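Sketched below is one way that label-shuffling simulation could look, assuming the df of star/score values built in the step above (8 star-bellied and 12 plain-bellied sneetches).
num_simulations = 10000
actual_diff = df[df['star'] == 1]['score'].mean() - df[df['star'] == 0]['score'].mean()
extreme = 0
for _ in range(num_simulations):
    # Shuffling the scores and splitting them 8/12 is equivalent to shuffling the labels.
    shuffled = np.random.permutation(df['score'].values)
    if shuffled[:8].mean() - shuffled[8:].mean() >= actual_diff:
        extreme += 1
print('Simulated p-value: {0:.2f}'.format(extreme / num_simulations))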
Step13: Now that we've run our simulations, we can calculate our p-value, which is simply the proportion of simulations that resulted in a difference greater than or equal to 6.6.
$$
p = \frac{N_{>6.6}}{N_{total}} = \frac{1512}{10000} = 0.15
$$
Step14: <!--
Since our p-value is greater than 0.05, we can conclude that the difference in test scores between the two groups is not a significant one. In other words, if having a star on your belly actually mattered, we wouldn't expect to see so many simulations result in the same, or greater, difference as the one in the real sample population.
-->
The following code plots the distribution of the differences we found by running the simulations above. We've also added an annotation that marks where the difference of 6.6 falls in the distribution along with its corresponding p-value.
Step15: We can see from the histogram above---and from our simulated p-value, which was greater than 5%---that the difference that we are seeing between the populations can be explained by random chance, so we can effectively dismiss the difference as not statistically significant. In short, star-bellied sneetches are no better than the plain-bellied ones, at least not from a statistical point of view.
For further discussion on this method of simulation, check out John Rauser's keynote talk "Statistics Without the Agonizing Pain" from Strata + Hadoop 2014. Jake mentions that he drew inspiration from it in his talk, and it is a really excellent talk as well; I wholeheartedly recommend it.
Bootstrapping
In this example, we'll be using the story of Yertle the Turtle to explore the bootstrapping recipe. As the story goes, in the land of Sala-ma-Sond, Yertle the Turtle was the king of the pond and he wanted to be the most powerful, highest turtle in the land. To achieve this goal, he would stack turtles as high as he could in order to stand upon their backs. As observers of this curious behavior, we've recorded the heights of 20 turtle towers and we've placed them in a dataframe in the following bit of code.
Step16: The questions we want to answer in this example are
Step17: More than likely the mean and standard error from our freshly drawn sample above didn't exactly match the one that we calculated using the classic method beforehand. But, if we continue to resample several thousand times and take a look at the average (mean) of all those sample means and their standard deviation, we should have something that very closely approximates the mean and standard error derived from using the classic method above.
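A minimal sketch of that resampling loop is shown below; here heights is a placeholder name standing in for the 20 recorded tower heights from the dataframe above.
n_bootstraps = 10000
boot_means = np.empty(n_bootstraps)
for i in range(n_bootstraps):
    # Resample the observed heights with replacement and record the mean.
    resample = np.random.choice(heights, size=len(heights), replace=True)
    boot_means[i] = resample.mean()
print('Bootstrap estimate of the mean: {0:.1f}'.format(boot_means.mean()))
print('Bootstrap estimate of the standard error: {0:.1f}'.format(boot_means.std()))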
Step18: Cross Validation
For the final example, we dive into the world of the Lorax. In the story of the Lorax, a faceless creature sells an item that (presumably) all creatures need, called a Thneed. Our job as consultants to Onceler Industries is to project Thneed sales. But, before we can get started forecasting the sales of Thneeds, we'll first need some data.
Lucky for you, I've already done the hard work of assembling that data in the code below by "eyeballing" the data in the scatter plot from the slides of the talk. So, it may not be exactly the same, but it should be close enough for our example analysis.
Step19: Now that we have our sales data in a pandas dataframe, we can take a look to see if any trends show up. Plotting the data in a scatterplot, like the one below, reveals that a relationship does seem to exist between temperature and Thneed sales.
Step20: We can see what looks like a relationship between the two variables temperature and sales, but how can we best model that relationship so we can accurately predict sales based on temperature?
Well, one measure of a model's accuracy is the Root-Mean-Square Error (RMSE). This metric represents the sample standard deviation between a set of predicted values (from our model) and the actual observed values.
Step21: We can now use our rmse function to measure how well our models accurately represent the Thneed sales dataset. And, in the next cell, we'll give it a try by creating two different models and seeing which one does a better job of fitting our sales data.
Step22: In the figure above, we plotted our sales data along with the two models we created in the previous step. The first model (in blue) is a simple linear model, i.e., a first-degree polynomial. The second model (in red) is a second-degree polynomial, so rather than a straight line, we end up with a slight curve.
We can see from the RMSE values in the figure above that the second-degree polynomial performed better than the simple linear model. Of course, the question you should now be asking is, is this the best possible model that we can find?
To find out, let's take a look at the RMSE of a few more models to see if we can do any better.
Step23: We can see, from the plot above, that as we increase the number of terms (i.e., the degrees of freedom) in our model we decrease the RMSE, and this behavior can continue indefinitely, or until we have as many terms as we do data points, at which point we would be fitting the data perfectly.
The problem with this approach though, is that as we increase the number of terms in our equation, we simply match the given dataset closer and closer, but what if our model were to see a data point that's not in our training dataset?
As you can see in the plot below, the model that we've created, though it has a very low RMSE, it has so many terms that it matches our current dataset too closely.
Step24: The problem with fitting the data too closely, is that our model is so finely tuned to our specific dataset, that if we were to use it to predict future sales, it would most likely fail to get very close to the actual value. This phenomenon of too closely modeling the training dataset is well known amongst machine learning practitioners as overfitting and one way that we can avoid it is to use cross-validation.
Cross-validation avoids overfitting by splitting the training dataset into several subsets and using each one to train and test multiple models. Then, the RMSE's of each of those models are averaged to give a more likely estimate of how a model of that type would perform on unseen data.
So, let's give it a try by splitting our data into two groups and randomly assigning data points into each one.
Step25: We can get a look at the data points assigned to each subset by plotting each one as a different color.
Step26: Then, we'll find the best model for each subset of data. In this particular example, we'll fit a second-degree polynomial to each subset and plot both below.
Step27: Finally, we'll compare models across subsets by calculating the RMSE for each model using the training set for the other model. This will give us two RMSE scores which we'll then average to get a more accurate estimate of how well a second-degree polynomial will perform on any unseen data.
Step28: Then, we simply repeat this process for as long as we so desire.
The following code repeats the process described above for polynomials up to 14 degrees and plots the average RMSE for each one against the non-cross-validated RMSE's that we calculated earlier. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Suppress all warnings just to keep the notebook nice and clean.
# This must happen after all imports since numpy actually adds its
# RankWarning class back in.
import warnings
warnings.filterwarnings("ignore")
# Setup the look and feel of the notebook
sns.set_context("notebook",
font_scale=1.5,
rc={"lines.linewidth": 2.5})
sns.set_style('whitegrid')
sns.set_palette('deep')
# Create a couple of colors to use throughout the notebook
red = sns.xkcd_rgb['vermillion']
blue = sns.xkcd_rgb['dark sky blue']
from IPython.display import display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: Statistics for Hackers
An Exploration of Statistics Through Computational Simulation
A talk by Jake VanDerPlas for PyCon 2016
Slides available on speakerdeck
Motivation
There's no shortage of absolutely magnificent material out there on the topics of data science and machine learning for an autodidact, such as myself, to learn from. In fact, so many great resources exist that an individual can be forgiven for not knowing where to begin their studies, or for getting distracted once they're off the starting block. I honestly can't count the number of times that I've started working through many of these online courses and tutorials only to have my attention stolen by one of the multitudes of amazing articles on data analysis with Python, or some great new MOOC on Deep Learning. But this year is different! This year, for one of my new year's resolutions, I've decided to create a personalized data science curriculum and stick to it. This year, I promise not to just casually sign up for another course, or start reading yet another textbook to be distracted part way through. This year, I'm sticking to the plan.
As part of my personalized program of study, I've chosen to start with Harvard's Data Science course. I'm currently on week 3 and one of the suggested readings for this week is Jake VanderPlas' talk from PyCon 2016 titled "Statistics for Hackers". As I was watching the video and following along with the slides, I wanted to try out some of the examples and create a set of notes that I could refer to later, so I figured why not create a Jupyter notebook. Once I'd finished, I realized I'd created a decently-sized resource that could be of use to others working their way through the talk. The result is the article you're reading right now, the remainder of which contains my notes and code examples for Jake's excellent talk.
So, enjoy the article, I hope you find this resource useful, and if you have any problems or suggestions of any kind, the full notebook can be found on github, so please send me a pull request, or submit an issue, or just message me directly on Twitter.
Preliminaries
End of explanation
def factorial(n):
Calculates the factorial of `n`
vals = list(range(1, n + 1))
if len(vals) <= 0:
return 1
prod = 1
for val in vals:
prod *= val
return prod
def n_choose_k(n, k):
Calculates the binomial coefficient
return factorial(n) / (factorial(k) * factorial(n - k))
def binom_prob(n, k, p):
    Returns the probability of seeing `k` heads in `n` coin tosses
Arguments:
n - number of trials
k - number of trials in which an event took place
p - probability of an event happening
return n_choose_k(n, k) * p**k * (1 - p)**(n - k)
Explanation: Warm-up
The talk starts off with a motivating example that asks the question "If you toss a coin 30 times and see 22 heads, is it a fair coin?"
We all know that a fair coin should come up heads roughly 15 out of 30 tosses, give or take, so it does seem unlikely to see so many heads. However, the skeptic might argue that even a fair coin could show 22 heads in 30 tosses from time-to-time. This could just be a chance event. So, the question would then be "how can you determine if you're tossing a fair coin?"
The Classic Method
The classic method would assume that the skeptic is correct and would then test the hypothesis (i.e., the Null Hypothesis) that the observation of 22 heads in 30 tosses could happen simply by chance. Let's start by first considering the probability of a single coin flip coming up heads and work our way up to 22 out of 30.
$$
P(H) = \frac{1}{2}
$$
As our equation shows, the probability of a single coin toss turning up heads is exactly 50% since there is an equal chance of either heads or tails turning up. Taking this one step further, to determine the probability of getting 2 heads in a row with 2 coin tosses, we would need to multiply the probability of getting heads by the probability of getting heads again since the two events are independent of one another.
$$
P(HH) = P(H) \cdot P(H) = P(H)^2 = \left(\frac{1}{2}\right)^2 = \frac{1}{4}
$$
From the equation above, we can see that the probability of getting 2 heads in a row from a total of 2 coin tosses is 25%. Let's now take a look at a slightly different scenario and calculate the probability of getting 2 heads and 1 tails with 3 coin tosses.
$$
P(HHT) = P(H)^2 \cdot P(T) = \left(\frac{1}{2}\right)^2 \cdot \frac{1}{2} = \left(\frac{1}{2}\right)^3 = \frac{1}{8}
$$
The equation above tells us that the probability of getting 2 heads and 1 tails in 3 tosses is 12.5%. This is actually the exact same probability as getting heads in all three tosses, which doesn't sound quite right. The problem is that we've only calculated the probability for a single permutation of 2 heads and 1 tails; specifically for the scenario where we only see tails on the third toss. To get the actual probability of tossing 2 heads and 1 tails we will have to add the probabilities for all of the possible permutations, of which there are exactly three: HHT, HTH, and THH.
$$
P(2H,1T) = P(HHT) + P(HTH) + P(THH) = \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = \frac{3}{8}
$$
Another way we could do this is to calculate the total number of permutations and simply multiply that by the probability of each event happening. To get the total number of permutations we can use the binomial coefficient. Then, we can simply calculate the probability above using the following equation.
$$
P(2H,1T) = \binom{3}{2} \left(\frac{1}{2}\right)^{3} = 3 \left(\frac{1}{8}\right) = \frac{3}{8}
$$
While the equation above works in our particular case, where each event has an equal probability of happening, it will run into trouble with events that have an unequal chance of taking place. To deal with those situations, you'll want to extend the last equation to take into account the differing probabilities. The result would be the following equation, where $N$ is number of coin flips, $N_H$ is the number of expected heads, $N_T$ is the number of expected tails, and $P_H$ is the probability of getting heads on each flip.
$$
P(N_H,N_T) = \binom{N}{N_H} \left(P_H\right)^{N_H} \left(1 - P_H\right)^{N_T}
$$
Now that we understand the classic method, let's use it to test our null hypothesis that we are actually tossing a fair coin, and that this is just a chance occurrence. The following code implements the equations we've just discussed above.
<!--
The following code sets up a handful of helper functions that we'll use below to create the probability distribution for a fair coin being flipped 30 times using the equations we just discussed above.
-->
End of explanation
# Calculate the probability for every possible outcome of tossing
# a fair coin 30 times.
probabilities = [binom_prob(30, k, 0.5) for k in range(1, 31)]
# Plot the probability distribution using the probabilities list
# we created above.
plt.step(range(1, 31), probabilities, where='mid', color=blue)
plt.xlabel('number of heads')
plt.ylabel('probability')
plt.plot((22, 22), (0, 0.1599), color=red);
plt.annotate('0.8%',
xytext=(25, 0.08),
xy=(22, 0.08),
multialignment='right',
va='center',
color=red,
size='large',
arrowprops={'arrowstyle': '<|-',
'lw': 2,
'color': red,
'shrinkA': 10});
Explanation: Now that we have a method that will calculate the probability for a specific event happening (e.g., 22 heads in 30 coin tosses), we can calculate the probability for every possible outcome of flipping a coin 30 times, and if we plot these values we'll get a visual representation of our coin's probability distribution.
End of explanation
print("Probability of flipping 22 heads: %0.1f%%" % (binom_prob(30, 22, 0.5) * 100))
Explanation: The visualization above shows the probability distribution for flipping a fair coin 30 times. Using this visualization we can now determine the probability of getting, say for example, 12 heads in 30 flips, which looks to be about 8%. Notice that we've labeled our example of 22 heads as 0.8%. If we look at the probability of flipping exactly 22 heads, it looks to be a little less than 0.8%, in fact if we calculate it using the binom_prob function from above, we get 0.5%
End of explanation
def p_value(n, k, p):
    Returns the probability of seeing at least `k` heads in `n` coin tosses
return sum(binom_prob(n, i, p) for i in range(k, n+1))
print("P-value: %0.1f%%" % (p_value(30, 22, 0.5) * 100))
Explanation: So, then why do we have 0.8% labeled in our probability distribution above? Well, that's because we are showing the probability of getting at least 22 heads, which is also known as the p-value.
What's a p-value?
In statistical hypothesis testing we have an idea that we want to test, but considering that it's very hard to prove something to be true beyond doubt, rather than test our hypothesis directly, we formulate a competing hypothesis, called a null hypothesis, and then try to disprove it instead. The null hypothesis essentially assumes that the effect we're seeing in the data could just be due to chance.
In our example, the null hypothesis assumes we have a fair coin, and the way we determine if this hypothesis is true or not is by calculating how often flipping this fair coin 30 times would result in 22 or more heads. If we then take the number of times that we got 22 or more heads and divide that number by the total of all possible permutations of 30 coin tosses, we get the probability of tossing 22 or more heads with a fair coin. This probability is what we call the p-value.
The p-value is used to check the validity of the null hypothesis. The way this is done is by agreeing upon some predetermined upper limit for our p-value, below which we will assume that our null hypothesis is false. In other words, if our null hypothesis were true, and 22 heads in 30 flips could happen often enough by chance, we would expect to see it happen more often than the given threshold percentage of times. So, for example, if we chose 10% as our threshold, then we would expect to see 22 or more heads show up at least 10% of the time to determine that this is a chance occurrence and not due to some bias in the coin. Historically, the generally accepted threshold has been 5%, and so if our p-value is less than 5%, we can then make the assumption that our coin may not be fair.
The binom_prob function from above calculates the probability of a single event happening, so now all we need for calculating our p-value is a function that adds up the probabilities of a given event, or a more extreme event happening. So, as an example, we would need a function to add up the probabilities of getting 22 heads, 23 heads, 24 heads, and so on. The next bit of code creates that function and uses it to calculate our p-value.
End of explanation
M = 0
n = 50000
for i in range(n):
trials = np.random.randint(2, size=30)
if (trials.sum() >= 22):
M += 1
p = M / n
print("Simulated P-value: %0.1f%%" % (p * 100))
Explanation: Running the code above gives us a p-value of roughly 0.8%, which matches the value in our probability distribution above and is also less than the 5% threshold needed to reject our null hypothesis, so it does look like we may have a biased coin.
The Easier Method
That's an example of using the classic method for testing if our coin is fair or not. However, if you don't happen to have at least some background in statistics, it can be a little hard to follow at times, but luckily for us, there's an easier method...
Simulation!
The code below seeks to answer the same question of whether or not our coin is fair by running a large number of simulated coin flips and calculating the proportion of these experiments that resulted in at least 22 heads.
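An equivalent vectorized sketch of the same experiment (an assumed alternative shown only for illustration, not the notebook's code):
flips = np.random.randint(2, size=(50000, 30))
print("Simulated P-value: %0.1f%%" % ((flips.sum(axis=1) >= 22).mean() * 100))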
End of explanation
import pandas as pd
df = pd.DataFrame({'star': [1, 1, 1, 1, 1, 1, 1, 1] +
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'score': [84, 72, 57, 46, 63, 76, 99, 91] +
[81, 69, 74, 61, 56, 87, 69, 65, 66, 44, 62, 69]})
df
Explanation: The result of our simulations is 0.8%, the exact same result we got earlier when we calculated the p-value using the classical method above. So, it definitely looks like it's possible that we have a biased coin since the chances of seeing 22 or more heads in 30 tosses of a fair coin is less than 1%.
Four Recipes for Hacking Statistics
We've just seen one example of how our hacking skills can make it easy for us to answer questions that typically only a statistician would be able to answer using the classical methods of statistical analysis. This is just one possible method for answering statistical questions using our coding skills, but Jake's talk describes four recipes in total for "hacking statistics", each of which is listed below. The rest of this article will go into each of the remaining techniques in some detail.
Direct Simulation
Shuffling
Bootstrapping
Cross Validation
In the Warm-up section above, we saw an example of direct simulation, the first recipe in our tour of statistical hacks. The next example uses the Shuffling method to figure out if there's a statistically significant difference between two different sample populations.
Shuffling
In this example, we look at the Dr. Seuss story about the Star-belly Sneetches. In this Seussian world, a group of creatures called the Sneetches are divided into two groups: those with stars on their bellies, and those with no "stars upon thars". Over time, the star-bellied sneetches have come to think of themselves as better than the plain-bellied sneetches. As researchers of sneetches, it's our job to uncover whether or not star-bellied sneetches really are better than their plain-bellied cousins.
The first step in answering this question will be to create our experimental data. In the following code snippet we create a dataframe object that contains a set of test scores for both star-bellied and plain-bellied sneetches.
End of explanation
star_bellied_mean = df[df.star == 1].score.mean()
plain_bellied_mean = df[df.star == 0].score.mean()
print("Star-bellied Sneetches Mean: %2.1f" % star_bellied_mean)
print("Plain-bellied Sneetches Mean: %2.1f" % plain_bellied_mean)
print("Difference: %2.1f" % (star_bellied_mean - plain_bellied_mean))
Explanation: If we then take a look at the average scores for each group of sneetches, we will see that there's a difference in scores of 6.6 between the two groups. So, on average, the star-bellied sneetches performed better on their tests than the plain-bellied sneetches. But, the real question is, is this a significant difference?
End of explanation
df['label'] = df['star']
num_simulations = 10000
differences = []
for i in range(num_simulations):
np.random.shuffle(df['label'])
star_bellied_mean = df[df.label == 1].score.mean()
plain_bellied_mean = df[df.label == 0].score.mean()
differences.append(star_bellied_mean - plain_bellied_mean)
Explanation: To determine if this is a significant difference, we could perform a t-test on our data to compute a p-value, and then just make sure that the p-value is less than the target 0.05. Alternatively, we could use simulation instead.
Unlike our first example, however, we don't have a generative function that we can use to create our probability distribution. So, how can we then use simulation to solve our problem?
Well, we can run a bunch of simulations where we randomly shuffle the labels (i.e., star-bellied or plain-bellied) of each sneetch, recompute the difference between the means, and then determine if the proportion of simulations in which the difference was at least as extreme as 6.6 was less than the target 5%. If so, we can conclude that the difference we see is, in fact, one that doesn't occur strictly by chance very often and so the difference is a significant one. In other words, if the proportion of simulations that have a difference of 6.6 or greater is less than 5%, we can conclude that the labels really do matter, and so we can conclude that star-bellied sneetches are "better" than their plain-bellied counterparts.
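For comparison, the classical t-test route mentioned above might look roughly like this (a sketch assuming scipy.stats is available; it is not part of the original talk):
from scipy import stats
t_stat, t_p = stats.ttest_ind(df[df.star == 1].score, df[df.star == 0].score, equal_var=False)
print("t-test p-value: %0.3f" % t_p)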
End of explanation
p_value = sum(diff >= 6.6 for diff in differences) / num_simulations
print("p-value: %2.2f" % p_value)
Explanation: Now that we've run our simulations, we can calculate our p-value, which is simply the proportion of simulations that resulted in a difference greater than or equal to 6.6.
$$
p = \frac{N_{>6.6}}{N_{total}} = \frac{1512}{10000} = 0.15
$$
End of explanation
plt.hist(differences, bins=50, color=blue)
plt.xlabel('score difference')
plt.ylabel('number')
plt.plot((6.6, 6.6), (0, 700), color=red);
plt.annotate('%2.f%%' % (p_value * 100),
xytext=(15, 350),
xy=(6.6, 350),
multialignment='right',
va='center',
color=red,
size='large',
arrowprops={'arrowstyle': '<|-',
'lw': 2,
'color': red,
'shrinkA': 10});
Explanation: The following code plots the distribution of the differences we found by running the simulations above. We've also added an annotation that marks where the difference of 6.6 falls in the distribution along with its corresponding p-value.
End of explanation
df = pd.DataFrame({'heights': [48, 24, 51, 12, 21,
41, 25, 23, 32, 61,
19, 24, 29, 21, 23,
13, 32, 18, 42, 18]})
Explanation: We can see from the histogram above---and from our simulated p-value, which was greater than 5%---that the difference that we are seeing between the populations can be explained by random chance, so we can effectively dismiss the difference as not statistically significant. In short, star-bellied sneetches are no better than the plain-bellied ones, at least not from a statistical point of view.
For further discussion on this method of simulation, check out John Rauser's keynote talk "Statistics Without the Agonizing Pain" from Strata + Hadoop 2014. Jake mentions that he drew inspiration from it in his talk, and it is a really excellent talk as well; I wholeheartedly recommend it.
Bootstrapping
In this example, we'll be using the story of Yertle the Turtle to explore the bootstrapping recipe. As the story goes, in the land of Sala-ma-Sond, Yertle the Turtle was the king of the pond and he wanted to be the most powerful, highest turtle in the land. To achieve this goal, he would stack turtles as high as he could in order to stand upon their backs. As observers of this curious behavior, we've recorded the heights of 20 turtle towers and we've placed them in a dataframe in the following bit of code.
End of explanation
sample = df.sample(20, replace=True)
display(sample)
print("Mean: %2.2f" % sample.heights.mean())
print("Standard Error: %2.2f" % (sample.heights.std() / np.sqrt(len(sample))))
Explanation: The questions we want to answer in this example are: what is the mean height of Yertle's turtle stacks, and what is the uncertainty of this estimate?
The Classic Method
The classic method is simply to calculate the sample mean...
$$
\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i = 28.9
$$
...and the standard error of the mean.
$$
\sigma_{\bar{x}} = \frac{1}{
\sqrt{N}}\sqrt{\frac{1}{N - 1}
\sum_{i=1}^{N} (x_i - \bar{x})^2
} = 3.0
$$
But, being hackers, we'll be using simulation instead.
Just like in our last example, we are once again faced with the problem of not having a generative model, but unlike the last example, we're not comparing two groups, so we can't just shuffle labels around here. Instead, we'll use something called bootstrap resampling.
Bootstrap resampling is a method that simulates several random sample distributions by drawing samples from the current distribution with replacement, i.e., we can draw the same data point more than once. Luckily, pandas makes this super easy with its sample function. We simply need to make sure that we pass in True for the replace argument to sample from our dataset with replacement.
End of explanation
xbar = []
for i in range(10000):
sample = df.sample(20, replace=True)
xbar.append(sample.heights.mean())
print("Mean: %2.1f" % np.mean(xbar))
print("Standard Error: %2.1f" % np.std(xbar))
Explanation: More than likely the mean and standard error from our freshly drawn sample above didn't exactly match the ones we calculated using the classic method beforehand. But, if we continue to resample several thousand times and take a look at the average (mean) of all those sample means and their standard deviation, we should have something that very closely approximates the mean and standard error derived from using the classic method above.
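As a side note, the same list of resampled means also gives a bootstrap confidence interval essentially for free (an assumed extension, not part of the talk):
ci_low, ci_high = np.percentile(xbar, [2.5, 97.5])
print("95%% bootstrap CI for the mean height: %2.1f to %2.1f" % (ci_low, ci_high))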
End of explanation
df = pd.DataFrame({
'temp': [22, 36, 36, 38, 44, 45, 47,
43, 44, 45, 47, 49,
52, 53, 53, 53, 54, 55, 55, 55, 56, 57, 58, 59,
60, 61, 61.5, 61.7, 61.7, 61.7, 61.8, 62, 62, 63.4, 64.6,
65, 65.6, 65.6, 66.4, 66.9, 67, 67, 67.4, 67.5, 68, 69,
70, 71, 71, 71.5, 72, 72, 72, 72.7, 73, 73, 73, 73.3, 74, 75, 75,
77, 77, 77, 77.4, 77.9, 78, 78, 79,
80, 82, 83, 84, 85, 85, 86, 87, 88,
90, 90, 91, 93, 95, 97,
102, 104],
'sales': [660, 433, 475, 492, 302, 345, 337,
479, 456, 440, 423, 269,
331, 197, 283, 351, 470, 252, 278, 350, 253, 253, 343, 280,
200, 194, 188, 171, 204, 266, 275, 171, 282, 218, 226,
187, 184, 192, 167, 136, 149, 168, 218, 298, 199, 268,
235, 157, 196, 203, 148, 157, 213, 173, 145, 184, 226, 204, 250, 102, 176,
97, 138, 226, 35, 190, 221, 95, 211,
110, 150, 152, 37, 76, 56, 51, 27, 82,
100, 123, 145, 51, 156, 99,
147, 54]
})
Explanation: Cross Validation
For the final example, we dive into the world of the Lorax. In the story of the Lorax, a faceless creature sells an item that (presumably) all creatures need, called a Thneed. Our job as consultants to Onceler Industries is to project Thneed sales. But, before we can get started forecasting the sales of Thneeds, we'll first need some data.
Lucky for you, I've already done the hard work of assembling that data in the code below by "eyeballing" the data in the scatter plot from the slides of the talk. So, it may not be exactly the same, but it should be close enough for our example analysis.
End of explanation
# Grab a reference to fig and axes object so we can reuse them
fig, ax = plt.subplots()
# Plot the Thneed sales data
ax.scatter(df.temp, df.sales)
ax.set_xlim(xmin=20, xmax=110)
ax.set_ylim(ymin=0, ymax=700)
ax.set_xlabel('temperature (F)')
ax.set_ylabel('thneed sales (daily)');
Explanation: Now that we have our sales data in a pandas dataframe, we can take a look to see if any trends show up. Plotting the data in a scatterplot, like the one below, reveals that a relationship does seem to exist between temperature and Thneed sales.
End of explanation
def rmse(predictions, targets):
return np.sqrt(((predictions - targets)**2).mean())
Explanation: We can see what looks like a relationship between the two variables temperature and sales, but how can we best model that relationship so we can accurately predict sales based on temperature?
Well, one measure of a model's accuracy is the Root-Mean-Square Error (RMSE). This metric is essentially the sample standard deviation of the differences between a set of predicted values (from our model) and the actual observed values.
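In symbols, the quantity computed by the rmse function above is
$$
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2}
$$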
End of explanation
# 1D Polynomial Fit
d1_model = np.poly1d(np.polyfit(df.temp, df.sales, 1))
d1_predictions = d1_model(range(111))
ax.plot(range(111), d1_predictions,
color=blue, alpha=0.7)
# 2D Polynomial Fit
d2_model = np.poly1d(np.polyfit(df.temp, df.sales, 2))
d2_predictions = d2_model(range(111))
ax.plot(range(111), d2_predictions,
color=red, alpha=0.5)
ax.annotate('RMS error = %2.1f' % rmse(d1_model(df.temp), df.sales),
xy=(75, 650),
fontsize=20,
color=blue,
backgroundcolor='w')
ax.annotate('RMS error = %2.1f' % rmse(d2_model(df.temp), df.sales),
xy=(75, 580),
fontsize=20,
color=red,
backgroundcolor='w')
display(fig);
Explanation: We can now use our rmse function to measure how accurately our models represent the Thneed sales dataset. And, in the next cell, we'll give it a try by creating two different models and seeing which one does a better job of fitting our sales data.
End of explanation
rmses = []
for deg in range(15):
model = np.poly1d(np.polyfit(df.temp, df.sales, deg))
predictions = model(df.temp)
rmses.append(rmse(predictions, df.sales))
plt.plot(range(15), rmses)
plt.ylim(45, 70)
plt.xlabel('number of terms in fit')
plt.ylabel('rms error')
plt.annotate('$y = a + bx$',
xytext=(14.2, 70),
xy=(1, rmses[1]),
multialignment='right',
va='center',
arrowprops={'arrowstyle': '-|>',
'lw': 1,
'shrinkA': 10,
'shrinkB': 3})
plt.annotate('$y = a + bx + cx^2$',
xytext=(14.2, 64),
xy=(2, rmses[2]),
multialignment='right',
va='top',
arrowprops={'arrowstyle': '-|>',
'lw': 1,
'shrinkA': 35,
'shrinkB': 3})
plt.annotate('$y = a + bx + cx^2 + dx^3$',
xytext=(14.2, 58),
xy=(3, rmses[3]),
multialignment='right',
va='top',
arrowprops={'arrowstyle': '-|>',
'lw': 1,
'shrinkA': 12,
'shrinkB': 3});
Explanation: In the figure above, we plotted our sales data along with the two models we created in the previous step. The first model (in blue) is a simple linear model, i.e., a first-degree polynomial. The second model (in red) is a second-degree polynomial, so rather than a straight line, we end up with a slight curve.
We can see from the RMSE values in the figure above that the second-degree polynomial performed better than the simple linear model. Of course, the question you should now be asking is, is this the best possible model that we can find?
To find out, let's take a look at the RMSE of a few more models to see if we can do any better.
End of explanation
# Remove everything but the datapoints
ax.lines.clear()
ax.texts.clear()
# Changing the y-axis limits to match the figure in the slides
ax.set_ylim(0, 1000)
# 14 Dimensional Model
model = np.poly1d(np.polyfit(df.temp, df.sales, 14))
ax.plot(range(20, 110), model(range(20, 110)),
color=sns.xkcd_rgb['sky blue'])
display(fig)
Explanation: We can see, from the plot above, that as we increase the number of terms (i.e., the degrees of freedom) in our model we decrease the RMSE, and this behavior can continue indefinitely, or until we have as many terms as we do data points, at which point we would be fitting the data perfectly.
The problem with this approach though, is that as we increase the number of terms in our equation, we simply match the given dataset closer and closer, but what if our model were to see a data point that's not in our training dataset?
As you can see in the plot below, the model that we've created, though it has a very low RMSE, has so many terms that it matches our current dataset far too closely.
End of explanation
df_a = df.sample(n=len(df)//2)
df_b = df.drop(df_a.index)
Explanation: The problem with fitting the data too closely is that our model is so finely tuned to our specific dataset that if we were to use it to predict future sales, it would most likely fail to get very close to the actual value. This phenomenon of too closely modeling the training dataset is well known amongst machine learning practitioners as overfitting, and one way that we can avoid it is to use cross-validation.
Cross-validation avoids overfitting by splitting the training dataset into several subsets and using each one to train and test multiple models. Then, the RMSE's of each of those models are averaged to give a more likely estimate of how a model of that type would perform on unseen data.
So, let's give it a try by splitting our data into two groups and randomly assigning data points into each one.
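As an aside, the manual two-way split used in this example is what scikit-learn's KFold helper automates; a rough sketch of the same idea (an assumed alternative, not used in this notebook) would be:
from sklearn.model_selection import KFold

for train_idx, test_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(df):
    fold_model = np.poly1d(np.polyfit(df.temp.values[train_idx], df.sales.values[train_idx], 2))
    print(rmse(fold_model(df.temp.values[test_idx]), df.sales.values[test_idx]))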
End of explanation
plt.scatter(df_a.temp, df_a.sales, color='red')
plt.scatter(df_b.temp, df_b.sales, color='blue')
plt.xlim(0, 110)
plt.ylim(0, 700)
plt.xlabel('temperature (F)')
plt.ylabel('thneed sales (daily)');
Explanation: We can get a look at the data points assigned to each subset by plotting each one as a different color.
End of explanation
# Create a 2-degree model for each subset of data
m1 = np.poly1d(np.polyfit(df_a.temp, df_a.sales, 2))
m2 = np.poly1d(np.polyfit(df_b.temp, df_b.sales, 2))
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2,
sharex=False, sharey=True,
figsize=(12, 5))
x_min, x_max = 20, 110
y_min, y_max = 0, 700
x = range(x_min, x_max + 1)
# Plot the df_a group
ax1.scatter(df_a.temp, df_a.sales, color='red')
ax1.set_xlim(xmin=x_min, xmax=x_max)
ax1.set_ylim(ymin=y_min, ymax=y_max)
ax1.set_xlabel('temperature (F)')
ax1.set_ylabel('thneed sales (daily)')
ax1.plot(x, m1(x),
color=sns.xkcd_rgb['sky blue'],
alpha=0.7)
# Plot the df_b group
ax2.scatter(df_b.temp, df_b.sales, color='blue')
ax2.set_xlim(xmin=x_min, xmax=x_max)
ax2.set_ylim(ymin=y_min, ymax=y_max)
ax2.set_xlabel('temperature (F)')
ax2.plot(x, m2(x),
color=sns.xkcd_rgb['rose'],
alpha=0.5);
Explanation: Then, we'll find the best model for each subset of data. In this particular example, we'll fit a second-degree polynomial to each subset and plot both below.
End of explanation
print("RMS = %2.1f" % rmse(m1(df_b.temp), df_b.sales))
print("RMS = %2.1f" % rmse(m2(df_a.temp), df_a.sales))
print("RMS estimate = %2.1f" % np.mean([rmse(m1(df_b.temp), df_b.sales),
rmse(m2(df_a.temp), df_a.sales)]))
Explanation: Finally, we'll compare the models across subsets by calculating the RMSE for each model on the subset it was not trained on (i.e., the other model's training set). This will give us two RMSE scores, which we'll then average to get a more accurate estimate of how well a second-degree polynomial will perform on any unseen data.
End of explanation
rmses = []
cross_validated_rmses = []
for deg in range(15):
    # Fit the model on the whole dataset and calculate its
    # RMSE on the same set of data
model = np.poly1d(np.polyfit(df.temp, df.sales, deg))
predictions = model(df.temp)
rmses.append(rmse(predictions, df.sales))
    # Use cross-validation: fit a model on each half and evaluate it on the other half
m1 = np.poly1d(np.polyfit(df_a.temp, df_a.sales, deg))
m2 = np.poly1d(np.polyfit(df_b.temp, df_b.sales, deg))
p1 = m1(df_b.temp)
p2 = m2(df_a.temp)
cross_validated_rmses.append(np.mean([rmse(p1, df_b.sales),
rmse(p2, df_a.sales)]))
plt.plot(range(15), rmses, color=blue,
label='RMS')
plt.plot(range(15), cross_validated_rmses, color=red,
label='cross validated RMS')
plt.ylim(45, 70)
plt.xlabel('number of terms in fit')
plt.ylabel('rms error')
plt.legend(frameon=True)
plt.annotate('Best model minimizes the\ncross-validated error.',
xytext=(7, 60),
xy=(2, cross_validated_rmses[2]),
multialignment='center',
va='top',
color='blue',
size=25,
backgroundcolor='w',
arrowprops={'arrowstyle': '-|>',
'lw': 3,
'shrinkA': 12,
'shrinkB': 3,
'color': 'blue'});
Explanation: Then, we simply repeat this process for as long as we so desire.
The following code repeats the process described above for polynomials up to 14 degrees and plots the average RMSE for each one against the non-cross-validated RMSEs that we calculated earlier.
End of explanation |
10,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TFL 레이어로 Keras 모델 만들기
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Importing required packages
Step3: Downloading the UCI Statlog (Heart) dataset
Step4: Setting the default values used for training in this guide
Step5: Sequential Keras Model
This example creates a Sequential Keras model and only uses TFL layers.
Lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the calibration layers.
Step6: We use a tfl.layers.ParallelCombination layer to group together calibration layers which have to be executed in parallel in order to be able to create a Sequential model.
Step7: We create a calibration layer for each feature and add it to the parallel combination layer. For numeric features we use tfl.layers.PWLCalibration, and for categorical features we use tfl.layers.CategoricalCalibration.
Step8: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the monotonicity of the lattice to be increasing for the required dimensions. Composing this with the direction of the monotonicity in the calibration gives the correct end-to-end direction of monotonicity, including the partial monotonicity of the CategoricalCalibration layer.
Step9: We can then create a Sequential model using the combined calibrators and lattice layers.
Step10: Training works the same as for any other Keras model.
Step11: Functional Keras Model
This example uses the functional API for Keras model construction.
As mentioned in the previous section, lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the calibration layers.
Step12: We need to create an input layer and a calibration layer for each feature. For numeric features we use tfl.layers.PWLCalibration, and for categorical features we use tfl.layers.CategoricalCalibration.
Step13: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the monotonicity of the lattice to be increasing for the required dimensions. Composing this with the direction of the monotonicity in the calibration gives the correct end-to-end direction of monotonicity, including the partial monotonicity of the tfl.layers.CategoricalCalibration layer.
Step14: To add more flexibility to the model, we add an output calibration layer.
Step15: We can now create the model using the inputs and outputs.
Step16: 훈련은 다른 Keras 모델과 동일하게 동작합니다. 설정에서 입력 특성은 별도의 텐서로 전달됩니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice pydot
Explanation: Creating Keras Models with TFL Layers
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/keras_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/keras_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
You can use TFL Keras layers to construct Keras models with monotonicity and other shape constraints. This example builds and trains a calibrated lattice model for the UCI heart dataset using TFL layers.
In a calibrated lattice model, each feature is transformed by a tfl.layers.PWLCalibration or a tfl.layers.CategoricalCalibration layer and the results are nonlinearly fused using a tfl.layers.Lattice.
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
logging.disable(sys.maxsize)
Explanation: Importing required packages
End of explanation
# UCI Statlog (Heart) dataset.
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
training_data_df = pd.read_csv(csv_file).sample(
frac=1.0, random_state=41).reset_index(drop=True)
training_data_df.head()
Explanation: Downloading the UCI Statlog (Heart) dataset
End of explanation
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 100
Explanation: Setting the default values used for training in this guide
End of explanation
# Lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so
lattice_sizes = [3, 2, 2, 2, 2, 2, 2]
Explanation: Sequential Keras Model
This example creates a Sequential Keras model and only uses TFL layers.
Lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the calibration layers.
End of explanation
combined_calibrators = tfl.layers.ParallelCombination()
Explanation: We use a tfl.layers.ParallelCombination layer to group together calibration layers which have to be executed in parallel in order to be able to create a Sequential model.
End of explanation
# ############### age ###############
calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
)
combined_calibrators.append(calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant')
combined_calibrators.append(calibrator)
# ############### cp ###############
calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
# You can specify TFL regularizers as a tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4))
combined_calibrators.append(calibrator)
# ############### trestbps ###############
calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing')
combined_calibrators.append(calibrator)
# ############### chol ###############
calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
dtype=tf.float32,
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
# You can specify list of regularizers. You are not limited to TFL
# regularizrs. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)])
combined_calibrators.append(calibrator)
# ############### fbs ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
# This initializer is identical to default one('uniform'), but has fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1))
combined_calibrators.append(calibrator)
# ############### restecg ###############
calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant')
combined_calibrators.append(calibrator)
Explanation: We create a calibration layer for each feature and add it to the parallel combination layer. For numeric features we use tfl.layers.PWLCalibration, and for categorical features we use tfl.layers.CategoricalCalibration.
End of explanation
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0)
Explanation: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the monotonicity of the lattice to be increasing for the required dimensions. Composing this with the direction of the monotonicity in the calibration gives the correct end-to-end direction of monotonicity, including the partial monotonicity of the CategoricalCalibration layer.
End of explanation
model = tf.keras.models.Sequential()
model.add(combined_calibrators)
model.add(lattice)
Explanation: We can then create a Sequential model using the combined calibrators and lattice layers.
End of explanation
features = training_data_df[[
'age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg'
]].values.astype(np.float32)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
Explanation: Training works the same as for any other Keras model.
End of explanation
# We are going to have 2-d embedding as one of lattice inputs.
lattice_sizes = [3, 2, 2, 3, 3, 2, 2]
Explanation: Functional Keras Model
This example uses the functional API for Keras model construction.
As mentioned in the previous section, lattice layers expect input[i] to be within [0, lattice_sizes[i] - 1.0], so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the calibration layers.
End of explanation
model_inputs = []
lattice_inputs = []
# ############### age ###############
age_input = tf.keras.layers.Input(shape=[1], name='age')
model_inputs.append(age_input)
age_calibrator = tfl.layers.PWLCalibration(
# Every PWLCalibration layer must have keypoints of piecewise linear
# function specified. Easiest way to specify them is to uniformly cover
# entire input range by using numpy.linspace().
input_keypoints=np.linspace(
training_data_df['age'].min(), training_data_df['age'].max(), num=5),
# You need to ensure that input keypoints have same dtype as layer input.
# You can do it by setting dtype here or by providing keypoints in such
# format which will be converted to desired tf.dtype by default.
dtype=tf.float32,
# Output range must correspond to expected lattice input range.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
name='age_calib',
)(
age_input)
lattice_inputs.append(age_calibrator)
# ############### sex ###############
# For boolean features simply specify CategoricalCalibration layer with 2
# buckets.
sex_input = tf.keras.layers.Input(shape=[1], name='sex')
model_inputs.append(sex_input)
sex_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
# Initializes all outputs to (output_min + output_max) / 2.0.
kernel_initializer='constant',
name='sex_calib',
)(
sex_input)
lattice_inputs.append(sex_calibrator)
# ############### cp ###############
cp_input = tf.keras.layers.Input(shape=[1], name='cp')
model_inputs.append(cp_input)
cp_calibrator = tfl.layers.PWLCalibration(
# Here instead of specifying dtype of layer we convert keypoints into
# np.float32.
input_keypoints=np.linspace(1, 4, num=4, dtype=np.float32),
output_min=0.0,
output_max=lattice_sizes[2] - 1.0,
monotonicity='increasing',
# You can specify TFL regularizers as tuple ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
name='cp_calib',
)(
cp_input)
lattice_inputs.append(cp_calibrator)
# ############### trestbps ###############
trestbps_input = tf.keras.layers.Input(shape=[1], name='trestbps')
model_inputs.append(trestbps_input)
trestbps_calibrator = tfl.layers.PWLCalibration(
# Alternatively, you might want to use quantiles as keypoints instead of
# uniform keypoints
input_keypoints=np.quantile(training_data_df['trestbps'],
np.linspace(0.0, 1.0, num=5)),
dtype=tf.float32,
# Together with quantile keypoints you might want to initialize piecewise
# linear function to have 'equal_slopes' in order for output of layer
# after initialization to preserve original distribution.
kernel_initializer='equal_slopes',
output_min=0.0,
output_max=lattice_sizes[3] - 1.0,
# You might consider clamping extreme inputs of the calibrator to output
# bounds.
clamp_min=True,
clamp_max=True,
monotonicity='increasing',
name='trestbps_calib',
)(
trestbps_input)
lattice_inputs.append(trestbps_calibrator)
# ############### chol ###############
chol_input = tf.keras.layers.Input(shape=[1], name='chol')
model_inputs.append(chol_input)
chol_calibrator = tfl.layers.PWLCalibration(
# Explicit input keypoint initialization.
input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
output_min=0.0,
output_max=lattice_sizes[4] - 1.0,
# Monotonicity of calibrator can be decreasing. Note that corresponding
# lattice dimension must have INCREASING monotonicity regardless of
# monotonicity direction of calibrator.
monotonicity='decreasing',
# Convexity together with decreasing monotonicity result in diminishing
# return constraint.
convexity='convex',
# You can specify list of regularizers. You are not limited to TFL
# regularizrs. Feel free to use any :)
kernel_regularizer=[('laplacian', 0.0, 1e-4),
tf.keras.regularizers.l1_l2(l1=0.001)],
name='chol_calib',
)(
chol_input)
lattice_inputs.append(chol_calibrator)
# ############### fbs ###############
fbs_input = tf.keras.layers.Input(shape=[1], name='fbs')
model_inputs.append(fbs_input)
fbs_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[5] - 1.0,
# For categorical calibration layer monotonicity is specified for pairs
# of indices of categories. Output for first category in pair will be
# smaller than output for second category.
#
# Don't forget to set monotonicity of corresponding dimension of Lattice
# layer to '1'.
monotonicities=[(0, 1)],
# This initializer is identical to default one ('uniform'), but has fixed
# seed in order to simplify experimentation.
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=0.0, maxval=lattice_sizes[5] - 1.0, seed=1),
name='fbs_calib',
)(
fbs_input)
lattice_inputs.append(fbs_calibrator)
# ############### restecg ###############
restecg_input = tf.keras.layers.Input(shape=[1], name='restecg')
model_inputs.append(restecg_input)
restecg_calibrator = tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
output_max=lattice_sizes[6] - 1.0,
# Categorical monotonicity can be partial order.
monotonicities=[(0, 1), (0, 2)],
# Categorical calibration layer supports standard Keras regularizers.
kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.001),
kernel_initializer='constant',
name='restecg_calib',
)(
restecg_input)
lattice_inputs.append(restecg_calibrator)
Explanation: We need to create an input layer and a calibration layer for each feature. For numeric features we use tfl.layers.PWLCalibration, and for categorical features we use tfl.layers.CategoricalCalibration.
End of explanation
lattice = tfl.layers.Lattice(
lattice_sizes=lattice_sizes,
monotonicities=[
'increasing', 'none', 'increasing', 'increasing', 'increasing',
'increasing', 'increasing'
],
output_min=0.0,
output_max=1.0,
name='lattice',
)(
lattice_inputs)
Explanation: We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the monotonicity of the lattice to be increasing for the required dimensions. Composing this with the direction of the monotonicity in the calibration gives the correct end-to-end direction of monotonicity, including the partial monotonicity of the tfl.layers.CategoricalCalibration layer.
End of explanation
model_output = tfl.layers.PWLCalibration(
input_keypoints=np.linspace(0.0, 1.0, 5),
name='output_calib',
)(
lattice)
Explanation: To add more flexibility to the model, we add an output calibration layer.
End of explanation
model = tf.keras.models.Model(
inputs=model_inputs,
outputs=model_output)
tf.keras.utils.plot_model(model, rankdir='LR')
Explanation: We can now create the model using the inputs and outputs.
End of explanation
feature_names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg']
features = np.split(
training_data_df[feature_names].values.astype(np.float32),
indices_or_sections=len(feature_names),
axis=1)
target = training_data_df[['target']].values.astype(np.float32)
model.compile(
loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE))
model.fit(
features,
target,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=0.2,
shuffle=False,
verbose=0)
model.evaluate(features, target)
Explanation: Training works the same as for any other Keras model. Note that, with our setup, the input features are passed as separate tensors.
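After training, a quick sanity check might look like this (an assumed example, not part of the original tutorial):
model.predict([f[:3] for f in features])  # predictions for the first three rows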
End of explanation |
10,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Bilayer Topology with Stitch
OpenPNM includes numerous tools for manipulating and altering the topology. Most of these are found in the topotools submodule. This example will illustrate who to join or 'stitch' two distinct networks, even if they have different lattice spacing. In this example we'll create a coarse and a fine network then stitch them together to make a network with two distinct layers.
Start by creating a network with a large lattice spacing
Step1: The coarse_net network has 1000 pores in a cubic lattice with a spacing of 50 um for a total size of 500 um per size. Next, we'll make another network with smaller spacing between pores, but with the same total size.
Step2: These two networks are totally independent of each other, and actually both spatially overlap each other since the network generator places the pores at the [0, 0, 0] origin. Combining these networks into a single network is possible using the stitch function, but first we must make some adjustments. For starters, let's shift the fine_net along the z-axis so it is beside the coarse_net to give the layered effect
Step3: Before proceeding, let's quickly check that the two networks are indeed spatially separated now
Step4: As can be seen below, fine_net (orange) has been repositioned above the coarse_net (blue).
Now it's time stitch the networks together by adding throats between the pores on the top of the coarse network and those on the bottom of the fine network. The stitch function uses Euclidean distance to determine which pore is each face is nearest each other, and connects them.
Step5: And we can quickly visualize the result using OpenPNM's plotting tools | Python Code:
import scipy as sp
import matplotlib.pyplot as plt
%matplotlib inline
# Import OpenPNM and the topotools submodule and initialize the workspace manager
import openpnm as op
from openpnm import topotools as tt
wrk = op.Workspace()
wrk.clear() # Clear the existing workspace (mostly for testing purposes)
wrk.loglevel=50
coarse_net = op.network.Cubic(shape=[10, 10, 10], spacing=5e-1)
Explanation: Create Bilayer Topology with Stitch
OpenPNM includes numerous tools for manipulating and altering the topology. Most of these are found in the topotools submodule. This example will illustrate how to join or 'stitch' two distinct networks, even if they have different lattice spacing. In this example we'll create a coarse and a fine network, then stitch them together to make a network with two distinct layers.
Start by creating a network with a large lattice spacing:
End of explanation
fine_net = op.network.Cubic(shape=[25, 25, 5], spacing=2e-1)
Explanation: The coarse_net network has 1000 pores in a cubic lattice with a spacing of 50 um for a total size of 500 um per side. Next, we'll make another network with smaller spacing between pores, but with the same total size.
End of explanation
fine_net['pore.coords'] += sp.array([0, 0, 5])
Explanation: These two networks are totally independent of each other, and actually both spatially overlap each other since the network generator places the pores at the [0, 0, 0] origin. Combining these networks into a single network is possible using the stitch function, but first we must make some adjustments. For starters, let's shift the fine_net along the z-axis so it is beside the coarse_net to give the layered effect:
End of explanation
fig = tt.plot_connections(coarse_net)
fig = tt.plot_connections(fine_net, fig=fig)
Explanation: Before proceeding, let's quickly check that the two networks are indeed spatially separated now:
End of explanation
tt.stitch(network=fine_net, donor=coarse_net,
P_network=fine_net.pores('bottom'),
P_donor=coarse_net.pores('top'),
len_max=4e-1)
Explanation: As can be seen below, fine_net (orange) has been repositioned above the coarse_net (blue).
Now it's time to stitch the networks together by adding throats between the pores on the top of the coarse network and those on the bottom of the fine network. The stitch function uses Euclidean distance to determine which pores on each face are nearest to each other, and connects them.
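A quick way to confirm that the stitch added the new connections is to count throats afterwards (an assumed check, not part of the original example):
print(fine_net.num_throats())  # total throat count after stitching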
End of explanation
fig = tt.plot_connections(fine_net)
Explanation: And we can quickly visualize the result using OpenPNM's plotting tools:
End of explanation |
10,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create enriched movie dataset from The Movie Database API
MovieLens latest data can be downloaded at http
Step1: Generate random names for each unique user and save to ES
Step2: Enrich movie data with TMDB metadata
Step3: Write enriched movie data to Elasticsearch | Python Code:
from pyspark.sql.functions import udf, col
from pyspark.sql.types import *
ms_ts = udf(lambda x: int(x) * 1000, LongType())
import csv
from pyspark.sql.types import *
with open("data/ml-latest-small/ratings.csv") as f:
reader = csv.reader(f)
cols = reader.next()
ratings = [l for l in reader]
ratings_df = sqlContext.createDataFrame(ratings, cols) \
.select("userId", "movieId", col("rating").cast(DoubleType()), ms_ts("timestamp").alias("timestamp"))
ratings_df.write.format("org.elasticsearch.spark.sql").save("demo/ratings")
Explanation: Create enriched movie dataset from The Movie Database API
MovieLens latest data can be downloaded at http://grouplens.org/datasets/movielens/
This demo uses the ml-latest-small dataset of 100k ratings, 9k movies and 700 users
Data enrichment requires access to The Movie Database API
Note: set up index mappings before loading data
Using Spark 1.6.1
Load ratings data into Elasticsearch
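Once the write completes, a quick sanity check might be to read the index back through the same connector (an assumed step, not part of the original demo):
sqlContext.read.format("org.elasticsearch.spark.sql").load("demo/ratings").count()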
End of explanation
import names
# define UDF to create random user names
random_name = udf(lambda x: names.get_full_name(), StringType())
users = ratings_df.select("userId").distinct().select("userId", random_name("userId").alias("name"))
users.write.format("org.elasticsearch.spark.sql").option("es.mapping.id", "userId").save("demo/users")
Explanation: Generate random names for each unique user and save to ES
End of explanation
with open("data/ml-latest-small/movies.csv") as f:
reader = csv.reader(f)
cols = reader.next()
raw_movies = sqlContext.createDataFrame([l for l in reader], cols)
with open("data/ml-latest-small/links.csv") as f:
reader = csv.reader(f)
cols = reader.next()
link_data = sqlContext.createDataFrame([l for l in reader], cols)
movie_data = raw_movies.join(link_data, raw_movies.movieId == link_data.movieId)\
.select(raw_movies.movieId, raw_movies.title, raw_movies.genres, link_data.tmdbId)
num_movies = movie_data.count()
movie_data.show(5)
data = movie_data.collect()
import tmdbsimple as tmdb
tmdb.API_KEY = 'YOUR_KEY'
# base URL for TMDB poster images
IMAGE_URL = 'https://image.tmdb.org/t/p/w500'
import csv
from requests import HTTPError
enriched = []
i = 0
for row in data:
try:
m = tmdb.Movies(row.tmdbId).info()
poster_url = IMAGE_URL + m['poster_path'] if 'poster_path' in m and m['poster_path'] is not None else ""
movie = {
"movieId": row.movieId,
"title": m['title'],
"originalTitle": row.title,
"genres": row.genres,
"overview": m['overview'],
"release_date": m['release_date'],
"popularity": m['popularity'],
"original_language": m['original_language'],
"image_url": poster_url
}
enriched.append(movie)
except HTTPError as e:
print "Encountered error: %s for movieId=%d title=%s" % (e, row.movieId, row.title)
movie = {
"movieId": row.movieId,
"title": row.title,
"originalTitle": row.title,
"genres": row.genres,
"overview": "",
"release_date": "",
"popularity": 0,
"original_language": "",
"image_url": ""
}
enriched.append(movie)
i += 1
if i % 1 == 0: print "Enriched movie %s of %s" % (i, num_movies)
Explanation: Enrich movie data with TMDB metadata
End of explanation
from elasticsearch import Elasticsearch
es = Elasticsearch()
for m in enriched:
if 'release_date' in m and m['release_date'] == "": m.pop('release_date')
es.index("demo", "movies", id=m['movieId'], body=m)
Explanation: Write enriched movie data to Elasticsearch
End of explanation |
10,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SymPy
Step1: <h2>Elementary operations</h2>
Step2: <h2>Algebra<h2>
Step3: <h2>Calculus</h2>
Step5: Illustrating Taylor series
We will define a function to compute the Taylor series expansions of a symbolically defined expression at
various orders and visualize all the approximations together with the original function
Step6: With this function defined, we can now use it for any sympy function or expression
Step7: This shows easily how a Taylor series is useless beyond its convergence radius, illustrated by
a simple function that has singularities on the real axis | Python Code:
from IPython.display import display
from sympy.interactive import printing
printing.init_printing(use_latex='mathjax')
from __future__ import division
import sympy as sym
from sympy import *
x, y, z = symbols("x y z")
k, m, n = symbols("k m n", integer=True)
f, g, h = map(Function, 'fgh')
Explanation: SymPy: Open Source Symbolic Mathematics
This notebook uses the SymPy package to perform symbolic manipulations,
and combined with numpy and matplotlib, also displays numerical visualizations of symbolically
constructed expressions.
We first load sympy printing extensions, as well as all of sympy:
End of explanation
Rational(3,2)*pi + exp(I*x) / (x**2 + y)
exp(I*x).subs(x,pi).evalf()
e = x + 2*y
srepr(e)
exp(pi * sqrt(163)).evalf(50)
Explanation: <h2>Elementary operations</h2>
End of explanation
eq = ((x+y)**2 * (x+1))
eq
expand(eq)
a = 1/x + (x*sin(x) - 1)/x
a
simplify(a)
eq = Eq(x**3 + 2*x**2 + 4*x + 8, 0)
eq
solve(eq, x)
a, b = symbols('a b')
Sum(6*n**2 + 2**n, (n, a, b))
Explanation: <h2>Algebra</h2>
End of explanation
limit((sin(x)-x)/x**3, x, 0)
(1/cos(x)).series(x, 0, 6)
diff(cos(x**2)**2 / (1+x), x)
integrate(x**2 * cos(x), (x, 0, pi/2))
eqn = Eq(Derivative(f(x),x,x) + 9*f(x), 1)
display(eqn)
dsolve(eqn, f(x))
Explanation: <h2>Calculus</h2>
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# You can change the default figure size to be a bit larger if you want,
# uncomment the next line for that:
#plt.rc('figure', figsize=(10, 6))
def plot_taylor_approximations(func, x0=None, orders=(2, 4), xrange=(0,1), yrange=None, npts=200):
    """Plot the Taylor series approximations to a function at various orders.
Parameters
----------
func : a sympy function
x0 : float
Origin of the Taylor series expansion. If not given, x0=xrange[0].
orders : list
List of integers with the orders of Taylor series to show. Default is (2, 4).
xrange : 2-tuple or array.
Either an (xmin, xmax) tuple indicating the x range for the plot (default is (0, 1)),
or the actual array of values to use.
yrange : 2-tuple
(ymin, ymax) tuple indicating the y range for the plot. If not given,
the full range of values will be automatically used.
npts : int
        Number of points to sample the x range with. Default is 200.
    """
if not callable(func):
raise ValueError('func must be callable')
if isinstance(xrange, (list, tuple)):
x = np.linspace(float(xrange[0]), float(xrange[1]), npts)
else:
x = xrange
if x0 is None: x0 = x[0]
xs = sym.Symbol('x')
# Make a numpy-callable form of the original function for plotting
fx = func(xs)
f = sym.lambdify(xs, fx, modules=['numpy'])
# We could use latex(fx) instead of str(), but matploblib gets confused
# with some of the (valid) latex constructs sympy emits. So we play it safe.
plt.plot(x, f(x), label=str(fx), lw=2)
# Build the Taylor approximations, plotting as we go
apps = {}
for order in orders:
app = fx.series(xs, x0, n=order).removeO()
apps[order] = app
# Must be careful here: if the approximation is a constant, we can't
# blindly use lambdify as it won't do the right thing. In that case,
# evaluate the number as a float and fill the y array with that value.
if isinstance(app, sym.numbers.Number):
y = np.zeros_like(x)
y.fill(app.evalf())
else:
fa = sym.lambdify(xs, app, modules=['numpy'])
y = fa(x)
tex = sym.latex(app).replace('$', '')
plt.plot(x, y, label=r'$n=%s:\, %s$' % (order, tex) )
# Plot refinements
if yrange is not None:
plt.ylim(*yrange)
plt.grid()
plt.legend(loc='best').get_frame().set_alpha(0.8)
Explanation: Illustrating Taylor series
We will define a function to compute the Taylor series expansions of a symbolically defined expression at
various orders and visualize all the approximations together with the original function
End of explanation
plot_taylor_approximations(sin, 0, [2, 4, 6], (0, 2*pi), (-2,2))
plot_taylor_approximations(cos, 0, [2, 4, 6], (0, 2*pi), (-2,2))
Explanation: With this function defined, we can now use it for any sympy function or expression
End of explanation
# For an expression made from elementary functions, we must first make it into
# a callable function, the simplest way is to use the Python lambda construct.
plot_taylor_approximations(lambda x: 1/cos(x), 0, [2,4,6], (0, 2*pi), (-5,5))
Explanation: This shows easily how a Taylor series is useless beyond its convergence radius, illustrated by
a simple function that has singularities on the real axis:
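The location of the nearest singularity can be checked directly (a small assumed illustration, not in the original notebook); the poles of 1/cos(x) at odd multiples of pi/2 are what limit the radius of convergence of the expansion around 0:
solve(cos(x), x)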
End of explanation |
10,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Now we create a 5 by 5 grid with a spacing (dx and dy) of 1.
We also create an elevation field with value of 1. everywhere, except at the outlet, where the elevation is 0. In this case the outlet is in the middle of the bottom row, at location (0,2), and has a node id of 2.
Step2: The set_watershed_boundary_condition in RasterModelGrid will find the outlet of the watershed.
This method takes the node data, in this case z, and, optionally the no_data value.
This method sets all nodes that have no_data values to closed boundaries.
This example does not have any no_data values, which is fine.
In this case, the code will set all of the perimeter nodes as BC_NODE_IS_CLOSED(boundary status 4) in order to create this boundary around the core nodes.
The exception on the perimeter is node 2 (with elevation of 0). Although it is on the perimeter, it has a value and it has the lowest value. So in this case node 2 will be set as BC_NODE_IS_FIXED_VALUE (boundary status 1).
The rest of the nodes are set as a CORE_NODE (boundary status 0)
Step3: Check to see that node status were set correctly.
imshow will default to not plot the value of BC_NODE_IS_CLOSED nodes, which is why we override this below with the option color_for_closed
Step4: The second example uses set_watershed_boundary_condition_outlet_coords
In this case the user knows the coordinates of the outlet node.
First instantiate a new grid, with new data values.
Step5: Note that the node with zero elevation, which will be the outlet, is now at location (0,1).
Note that even though this grid has a dx & dy of 10., the outlet coords are still (0,1).
Set the boundary conditions.
Step6: Plot grid of boundary status information
Step7: The third example uses set_watershed_boundary_condition_outlet_id
In this case the user knows the node id value of the outlet node.
First instantiate a new grid, with new data values.
Step8: Set boundary conditions with the outlet id.
Note that here we know the id of the node that has a value of zero and choose this as the outlet. But the code will not complain if you give it an id value of a node that does not have the smallest data value.
Step9: Another plot to illustrate the results.
Step10: The final example uses set_watershed_boundary_condition on a watershed that was exported from Arc.
First import read_esri_ascii and then import the DEM data.
An optional value of halo=1 is used with read_esri_ascii. This puts a perimeter of nodata values around the DEM.
This is done just in case there are data values on the edge of the raster. These would have to become closed to set watershed boundary conditions, but in order to avoid that, we add a perimeter to the data.
Step11: Let's plot the data to see what the topography looks like.
Step12: In this case the nodata value is zero. This skews the colorbar, but we can at least see the shape of the watershed.
Let's set the boundary condition. Remember we don't know the outlet id.
Step13: Now we can look at the boundary status of the nodes to see where the found outlet was.
Step14: This looks sensible.
Now that the boundary conditions ae set, we can also look at the topography.
imshow will default to show boundaries as black, as illustrated below. But that can be overwridden as we have been doing all along. | Python Code:
from landlab import RasterModelGrid
import numpy as np
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Setting watershed boundary conditions on a raster grid
This tutorial illustrates how to set watershed boundary conditions on a raster grid.
Note that a watershed is assumed to have a ring of nodes around the core nodes that are closed boundaries (i.e. no flux can cross these nodes, or more correctly, no flux can cross the faces around the nodes).
This means that the nodes on the outer perimeter of the grid will automatically be set to closed boundaries.
By definition a watershed also has one outlet through which fluxes can pass. Here the outlet is set as the node that has the lowest value, is not a nodata_value node, and is adjacent to at least one closed boundary node.
This means that an outlet can be on the outer perimeter of the raster. However, the outlet does not need to be on the outer perimeter of the raster.
The first example uses set_watershed_boundary_condition, which finds the outlet for the user.
First import what we need.
End of explanation
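For reference, the boundary-status codes used throughout this tutorial can be listed directly. This is an illustrative sketch, assuming a Landlab 2.x installation where the NodeStatus enum is importable from the top-level package:
from landlab import NodeStatus  # assumed available in Landlab >= 2.0
# Print each status name next to its integer code (e.g. CORE = 0, FIXED_VALUE = 1, CLOSED = 4).
for status in NodeStatus:
    print(int(status), status.name)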
mg1 = RasterModelGrid((5, 5), 1.0)
z1 = mg1.add_ones("topographic__elevation", at="node")
mg1.at_node["topographic__elevation"][2] = 0.0
mg1.at_node["topographic__elevation"]
Explanation: Now we create a 5 by 5 grid with a spacing (dx and dy) of 1.
We also create an elevation field with value of 1. everywhere, except at the outlet, where the elevation is 0. In this case the outlet is in the middle of the bottom row, at location (0,2), and has a node id of 2.
End of explanation
mg1.set_watershed_boundary_condition(z1)
Explanation: The set_watershed_boundary_condition in RasterModelGrid will find the outlet of the watershed.
This method takes the node data, in this case z1, and, optionally, the no_data value.
This method sets all nodes that have no_data values to closed boundaries.
This example does not have any no_data values, which is fine.
In this case, the code will set all of the perimeter nodes as BC_NODE_IS_CLOSED (boundary status 4) in order to create this boundary around the core nodes.
The exception on the perimeter is node 2 (with elevation of 0). Although it is on the perimeter, it has a value and it has the lowest value. So in this case node 2 will be set as BC_NODE_IS_FIXED_VALUE (boundary status 1).
The rest of the nodes are set as a CORE_NODE (boundary status 0)
End of explanation
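As a quick sanity check of the statement above, we can count how many nodes ended up in each category. This is a sketch that assumes the BC_NODE_IS_* constants are exposed on the grid object, as in recent Landlab releases:
# Count nodes per boundary status: expect 15 closed perimeter nodes,
# 1 fixed-value outlet (node 2) and 9 core nodes on this 5 x 5 grid.
for name in ("BC_NODE_IS_CORE", "BC_NODE_IS_FIXED_VALUE", "BC_NODE_IS_CLOSED"):
    code = getattr(mg1, name)
    print(name, int(np.count_nonzero(mg1.status_at_node == code)))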
mg1.imshow(mg1.status_at_node, color_for_closed="blue")
Explanation: Check to see that the node statuses were set correctly.
imshow will default to not plot the value of BC_NODE_IS_CLOSED nodes, which is why we override this below with the option color_for_closed
End of explanation
mg2 = RasterModelGrid((5, 5), 10.0)
z2 = mg2.add_ones("topographic__elevation", at="node")
mg2.at_node["topographic__elevation"][1] = 0.0
mg2.at_node["topographic__elevation"]
Explanation: The second example uses set_watershed_boundary_condition_outlet_coords
In this case the user knows the coordinates of the outlet node.
First instantiate a new grid, with new data values.
End of explanation
mg2.set_watershed_boundary_condition_outlet_coords((0, 1), z2)
Explanation: Note that the node with zero elevation, which will be the outlet, is now at location (0,1).
Note that even though this grid has a dx & dy of 10., the outlet coords are still (0,1).
Set the boundary conditions.
End of explanation
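To make the point about grid coordinates explicit, the outlet id can be recovered from its (row, col) position by hand; for a raster grid the node id is simply row * number_of_node_columns + col. A small sketch:
# (row, col) = (0, 1) refers to grid indices, not metres, even though dx = dy = 10.
outlet_id = 0 * mg2.number_of_node_columns + 1
print(outlet_id, mg2.status_at_node[outlet_id])  # expect node 1 with fixed-value status (1)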
mg2.imshow(mg2.status_at_node, color_for_closed="blue")
Explanation: Plot grid of boundary status information
End of explanation
mg3 = RasterModelGrid((5, 5), 5.0)
z3 = mg3.add_ones("topographic__elevation", at="node")
mg3.at_node["topographic__elevation"][5] = 0.0
mg3.at_node["topographic__elevation"]
Explanation: The third example uses set_watershed_boundary_condition_outlet_id
In this case the user knows the node id value of the outlet node.
First instantiate a new grid, with new data values.
End of explanation
mg3.set_watershed_boundary_condition_outlet_id(5, z3)
Explanation: Set boundary conditions with the outlet id.
Note that here we know the id of the node that has a value of zero and choose this as the outlet. But the code will not complain if you give it an id value of a node that does not have the smallest data value.
End of explanation
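Since the method trusts whatever id it is given, it can be worth checking by hand that the chosen outlet really is the lowest node. A minimal sketch:
# Compare the elevation at the chosen outlet with the minimum elevation in the grid.
print(z3[5], z3.min(), int(np.argmin(z3)))  # here node 5 holds the minimum value of 0.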
mg3.imshow(mg3.status_at_node, color_for_closed="blue")
Explanation: Another plot to illustrate the results.
End of explanation
from landlab.io import read_esri_ascii
(grid_bijou, z_bijou) = read_esri_ascii("west_bijou_gully.asc", halo=1)
Explanation: The final example uses set_watershed_boundary_condition on a watershed that was exported from Arc.
First import read_esri_ascii and then import the DEM data.
An optional value of halo=1 is used with read_esri_ascii. This puts a perimeter of nodata values around the DEM.
This is done just in case there are data values on the edge of the raster. These would have to become closed to set watershed boundary conditions, but in order to avoid that, we add a perimeter to the data.
End of explanation
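A quick way to see what the halo did is to look at the grid dimensions and how much of the field carries the nodata value (here 0). A sketch:
# With halo=1 the grid is two rows and two columns larger than the original raster,
# and the added perimeter nodes are filled with the nodata value.
print(grid_bijou.shape, z_bijou.size)
print(int(np.count_nonzero(z_bijou == 0)), "nodes carry the nodata value of 0")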
grid_bijou.imshow(z_bijou)
Explanation: Let's plot the data to see what the topography looks like.
End of explanation
grid_bijou.set_watershed_boundary_condition(z_bijou, 0)
Explanation: In this case the nodata value is zero. This skews the colorbar, but we can at least see the shape of the watershed.
Let's set the boundary condition. Remember we don't know the outlet id.
End of explanation
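Because the outlet id was not specified, we can ask the grid afterwards which node was picked by looking for the fixed-value node. A sketch, assuming the BC_NODE_IS_FIXED_VALUE constant on the grid:
# The single fixed-value node is the outlet that set_watershed_boundary_condition chose.
outlet_nodes = np.where(grid_bijou.status_at_node == grid_bijou.BC_NODE_IS_FIXED_VALUE)[0]
print("outlet node id(s):", outlet_nodes, "elevation:", z_bijou[outlet_nodes])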
grid_bijou.imshow(grid_bijou.status_at_node, color_for_closed="blue")
Explanation: Now we can look at the boundary status of the nodes to see where the outlet was found.
End of explanation
grid_bijou.imshow(z_bijou)
Explanation: This looks sensible.
Now that the boundary conditions are set, we can also look at the topography.
imshow will default to show boundaries as black, as illustrated below. But that can be overridden as we have been doing all along.
End of explanation |
10,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the input data
What is typical customer/fraudster behavior?
Which type of aggregated information could be useful for the simulator?
Where are structural differences between fraud/non-fraud?
Step1: Read in dataset and split into fraud/non-fraud
Step2: Print some basic info about the dataset
Step3: Percentage of fraudulent cards also in genuine transactions
Step4: 1. TIME of TRANSACTION
Step5: Analysis
Step6: Analysis
Step7: Analysis
Step8: Analysis
Step9: Analysis
Step10: 1.6 TEST
Step11: 2. COUNTRY
2.1 Country per transaction
Step12: 3. CURRENCY
3.1 Currency per Transaction
Step13: 3.1 Currency per country
Check how many cards make purchases in several currencies
Step14: CONCLUSION
Step15: 4. Merchants
4.1
Step16: We conclude from this that most merchants only sell things in one currenyc; thus, we will let each customer select the merchant given the currency that the customer has (which is unique).
Estimate the probability of selection a merchat, given the currency
Step17: 4.2 Number transactions per merchant
Step18: 5. Transaction Amount
5.1 Amount over time
Step19: 5.2 Amount distribution
Step20: For each merchant, we will have a probability distribution over the amount spent
Step21: We conclude that the normal customers and fraudsters follow roughly the same distribution, so we will only have one per merchant; irrespective of whether a genuine or fraudulent customer is making the transaction.
Step22: Customers
Here we want to find out how long customers/fraudsters return, i.e., how often the same credit card is used over time.
Step23: At a given transaction, estimate the probability of doing another transaction with the same card.
Step24: Fraud behaviour
Step25: when a fraudster uses an existing card, are country and currency always the same? | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from datetime import datetime, timedelta
import utils_data
from os.path import join
from IPython.display import display
dates_2016 = [datetime(2016, 1, 1) + timedelta(days=i) for i in range(366)]
Explanation: Analysis of the input data
What is typical customer/fraudster behavior?
Which type of aggregated information could be useful for the simulator?
Where are structural differences between fraud/non-fraud?
End of explanation
dataset01, dataset0, dataset1 = utils_data.get_real_dataset()
datasets = [dataset0, dataset1]
out_folder = utils_data.FOLDER_REAL_DATA_ANALYSIS
Explanation: Read in dataset and split into fraud/non-fraud
End of explanation
print(dataset01.head())
data_stats = utils_data.get_real_data_stats()
data_stats.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv'))
display(data_stats)
Explanation: Print some basic info about the dataset
End of explanation
most_used_card = dataset0['CardID'].value_counts().index[0]
print("Card (ID) with most transactions: ", most_used_card)
Explanation: Percentage of fraudulent cards also in genuine transactions:
End of explanation
plt.figure(figsize=(15, 5))
plt_idx = 1
for d in datasets:
plt.subplot(1, 2, plt_idx)
trans_dates = d["Global_Date"].apply(lambda date: date.date())
all_trans = trans_dates.value_counts().sort_index()
date_num = matplotlib.dates.date2num(all_trans.index)
plt.plot(date_num, all_trans.values, 'k.', label='num trans.')
plt.plot(date_num, np.zeros(len(date_num))+np.sum(all_trans)/366, 'g--',label='average')
plt_idx += 1
plt.title(d.name, size=20)
plt.xlabel('days (1.1.16 - 31.12.16)', size=15)
plt.xticks([])
plt.xlim(matplotlib.dates.date2num([datetime(2016,1,1), datetime(2016,12,31)]))
if plt_idx == 2:
plt.ylabel('num transactions', size=15)
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-year'))
plt.show()
Explanation: 1. TIME of TRANSACTION:
Here we analyse number of transactions regarding time.
1.1 Activity per day:
End of explanation
monthdays_2016 = np.unique([dates_2016[i].day for i in range(366)], return_counts=True)
monthdays_2016 = monthdays_2016[1][monthdays_2016[0]-1]
plt.figure(figsize=(12, 5))
plt_idx = 1
monthday_frac = np.zeros((31, 2))
idx = 0
for d in datasets:
# get the average number of transactions per day in a month
monthday = d["Local_Date"].apply(lambda date: date.day).value_counts().sort_index()
monthday /= monthdays_2016
if idx > -1:
monthday_frac[:, idx] = monthday.values / np.sum(monthday.values, axis=0)
idx += 1
plt.subplot(1, 2, plt_idx)
plt.plot(monthday.index, monthday.values, 'ko')
plt.plot(monthday.index, monthday.values, 'k-', markersize=0.1)
plt.plot(monthday.index, np.zeros(31)+np.sum(monthday)/31, 'g--', label='average')
plt.title(d.name, size=20)
plt.xlabel('day in month', size=15)
if plt_idx == 1:
plt.ylabel('avg. num transactions', size=15)
plt_idx += 1
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-month'))
plt.show()
# save the resulting data
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'monthday_frac'), monthday_frac)
Explanation: Analysis:
- It's interesting that there seems to be some kind of structure in the fraudster behavior. I.e., there are many days on which the number of frauds is exactly the same. This must either be due to some peculiarity in the data (are these days where fraud was investigated more?) or because the fraudsters do coordinated attacks
1.2 Activity per day in a month:
End of explanation
weekdays_2016 = np.unique([dates_2016[i].weekday() for i in range(366)], return_counts=True)
weekdays_2016 = weekdays_2016[1][weekdays_2016[0]]
plt.figure(figsize=(12, 5))
plt_idx = 1
weekday_frac = np.zeros((7, 2))
idx = 0
for d in datasets:
weekday = d["Local_Date"].apply(lambda date: date.weekday()).value_counts().sort_index()
weekday /= weekdays_2016
if idx > -1:
weekday_frac[:, idx] = weekday.values / np.sum(weekday.values, axis=0)
idx += 1
plt.subplot(1, 2, plt_idx)
plt.plot(weekday.index, weekday.values, 'ko')
plt.plot(weekday.index, weekday.values, 'k-', markersize=0.1)
plt.plot(weekday.index, np.zeros(7)+np.sum(weekday)/7, 'g--', label='average')
plt.title(d.name, size=20)
plt.xlabel('weekday', size=15)
plt.xticks(range(7), ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa', 'Su'])
if plt_idx == 1:
plt.ylabel('avg. num transactions', size=15)
plt_idx += 1
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_day-in-week'))
plt.show()
# save the resulting data
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'weekday_frac'), weekday_frac)
Explanation: Analysis:
- the amount of transactions does not depend on the day in a month in a utilisable way
1.3 Activity per weekday:
End of explanation
monthdays = np.array([31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
plt.figure(figsize=(12, 5))
plt_idx = 1
month_frac = np.zeros((12, 2))
idx = 0
for d in datasets:
month = d["Local_Date"].apply(lambda date: date.month).value_counts().sort_index()
# correct for different number of days in a month
month = month / monthdays[month.index.values-1] * np.mean(monthdays[month.index.values-1])
if idx > -1:
month_frac[month.index-1, idx] = month.values / np.sum(month.values, axis=0)
idx += 1
plt.subplot(1, 2, plt_idx)
plt.plot(month.index, month.values, 'ko')
plt.plot(month.index, month.values, 'k-', markersize=0.1)
plt.plot(range(1,13), np.zeros(12)+np.sum(month)/12, 'g--', label='average')
plt.title(d.name, size=20)
plt.xlabel('month', size=15)
plt.xticks(range(1, 13), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
if plt_idx == 1:
plt.ylabel('num transactions', size=15)
plt_idx += 1
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_month-in-year'))
plt.show()
# save the resulting data
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'month_frac'), month_frac)
Explanation: Analysis:
- the amount of transactions does not depend on the day in a week in a utilisable way
1.4 Activity per month in a year:
End of explanation
plt.figure(figsize=(12, 5))
plt_idx = 1
hour_frac = np.zeros((24, 2))
idx = 0
for d in datasets:
hours = d["Local_Date"].apply(lambda date: date.hour).value_counts().sort_index()
hours /= 366
if idx > -1:
hour_frac[hours.index.values, idx] = hours.values / np.sum(hours.values, axis=0)
idx += 1
plt.subplot(1, 2, plt_idx)
plt.plot(hours.index, hours.values, 'ko')
plt.plot(hours.index, hours.values, 'k-', markersize=0.1, label='transactions')
plt.plot(range(24), np.zeros(24)+np.sum(hours)/24, 'g--', label='average')
plt.title(d.name, size=20)
plt.xlabel('hour', size=15)
# plt.xticks([])
if plt_idx == 1:
plt.ylabel('avg. num transactions', size=15)
plt_idx += 1
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'time_hour-in-day'))
plt.show()
# save the resulting data
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'hour_frac'), hour_frac)
Explanation: Analysis:
- people buy more in summer than in winter
1.5 Activity per hour of day:
End of explanation
# extract only hours
date_hour_counts = dataset0["Local_Date"].apply(lambda d: d.replace(minute=0, second=0)).value_counts(sort=False)
hours = np.array(list(map(lambda d: d.hour, list(date_hour_counts.index))))
counts = date_hour_counts.values
hour_mean = np.zeros(24)
hour_min = np.zeros(24)
hour_max = np.zeros(24)
hour_std = np.zeros(24)
for h in range(24):
hour_mean[h] = np.mean(counts[hours==h])
hour_min[h] = np.min(counts[hours==h])
hour_max[h] = np.max(counts[hours==h])
hour_std[h] = np.std(counts[hours==h])
print(np.vstack((range(24), hour_min, hour_max, hour_mean, hour_std)).T)
Explanation: Analysis:
- the hour of day is very important: people spend most in the evening and least during the night; fraud is usually committed in the night
End of explanation
# total number of transactions we want in one year
aggregated_data = pd.read_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'aggregated_data.csv'), index_col=0)
trans_per_year = np.array(aggregated_data.loc['transactions'].values, dtype=np.float)[1:]
# transactions per day in a month
frac_monthday = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'monthday_frac.npy'))
# transactions per day in a week
frac_weekday = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'weekday_frac.npy'))
# transactions per month in a year
frac_month = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'month_frac.npy'))
# transactions hour in a day
frac_hour = np.load(join(utils_data.FOLDER_SIMULATOR_INPUT, 'hour_frac.npy'))
cust_idx = 0
std_transactions = 1000
num_customers = 200
# get the probability of a transaction in a given hour
curr_date = datetime(2016, 1, 1)
num_trans = 0
for i in range(366*24):
new_trans = float(trans_per_year[cust_idx])
new_trans *= frac_month[curr_date.month-1, cust_idx]
new_trans *= frac_monthday[curr_date.day-1, cust_idx]
new_trans *= 7 * frac_weekday[curr_date.weekday(), cust_idx]
new_trans *= frac_hour[curr_date.hour, cust_idx]
num_trans += new_trans
curr_date += timedelta(hours=1)
print(curr_date)
print(trans_per_year[cust_idx])
print(num_trans)
print("")
# the difference happens because some months have longer/shorter days.
# We did not want to scale up the transactions on day 31 because that's unrealistic.
curr_date = datetime(2016, 1, 1)
num_trans = 0
for i in range(366*24):
for c in range(num_customers):
# num_trans is the number of transactions the customer will make in this hour
# we assume that we have enough customers to model that each customer can make max 1 transaction per hour
cust_trans = float(trans_per_year[cust_idx])
cust_trans += np.random.normal(0, std_transactions, 1)[0]
cust_trans /= num_customers
cust_trans *= frac_month[curr_date.month-1, cust_idx]
cust_trans *= frac_monthday[curr_date.day-1, cust_idx]
cust_trans *= 7 * frac_weekday[curr_date.weekday(), cust_idx]
cust_trans *= frac_hour[curr_date.hour, cust_idx]
cust_trans += np.random.normal(0, 0.01, 1)[0]
if cust_trans > np.random.uniform(0, 1, 1)[0]:
num_trans += 1
curr_date += timedelta(hours=1)
print(curr_date)
print(trans_per_year[cust_idx])
print(num_trans)
print("")
Explanation: 1.6 TEST: Do the above calculated fractions lead to the correct amount of transactions?
End of explanation
country_counts = pd.concat([d['Country'].value_counts() for d in datasets], axis=1)
country_counts.fillna(0, inplace=True)
country_counts.columns = ['non-fraud', 'fraud']
country_counts[['non-fraud', 'fraud']] /= country_counts.sum(axis=0)
# save the resulting data
country_counts.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'country_frac.csv'))
countries_large = []
for c in ['non-fraud', 'fraud']:
countries_large.extend(country_counts.loc[country_counts[c] > 0.05].index)
countries_large = np.unique(countries_large)
countries_large_counts = []
for c in countries_large:
countries_large_counts.append(country_counts.loc[c, 'non-fraud'])
countries_large = [countries_large[np.argsort(countries_large_counts)[::-1][i]] for i in range(len(countries_large))]
plt.figure(figsize=(10,5))
bottoms = np.zeros(3)
for i in range(len(countries_large)):
c = countries_large[i]
plt.bar((0, 1, 2), np.concatenate((country_counts.loc[c], [0])), label=c, bottom=bottoms)
bottoms += np.concatenate((country_counts.loc[c], [0]))
# fill up the rest
plt.bar((0, 1), 1-bottoms[:-1], bottom=bottoms[:-1], label='rest')
plt.legend(fontsize=20)
plt.xticks([0, 1], ['non-fraud', 'fraud'], size=15)
plt.ylabel('fraction transactions made', size=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'country_distribution'))
plt.show()
Explanation: 2. COUNTRY
2.1 Country per transaction:
End of explanation
currency_counts = pd.concat([d['Currency'].value_counts() for d in datasets], axis=1)
currency_counts.fillna(0, inplace=True)
currency_counts.columns = ['non-fraud', 'fraud']
currency_counts[['non-fraud', 'fraud']] /= currency_counts.sum(axis=0)
currencies_large = []
for c in ['non-fraud', 'fraud']:
currencies_large.extend(currency_counts.loc[currency_counts[c] > 0].index)
currencies_large = np.unique(currencies_large)
currencies_large_counts = []
for c in currencies_large:
currencies_large_counts.append(currency_counts.loc[c, 'non-fraud'])
currencies_large = [currencies_large[np.argsort(currencies_large_counts)[::-1][i]] for i in range(len(currencies_large))]
plt.figure(figsize=(10,5))
bottoms = np.zeros(3)
for i in range(len(currencies_large)):
c = currencies_large[i]
plt.bar((0, 1, 2), np.concatenate((currency_counts.loc[c], [0])), label=c, bottom=bottoms)
bottoms += np.concatenate((currency_counts.loc[c], [0]))
plt.legend(fontsize=20)
plt.xticks([0, 1], ['non-fraud', 'fraud'], size=15)
plt.ylabel('fraction of total transactions made', size=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'currency_distribution'))
plt.show()
Explanation: 3. CURRENCY
3.1 Currency per Transaction
End of explanation
curr_per_cust = dataset0[['CardID', 'Currency']].groupby('CardID')['Currency'].value_counts().index.get_level_values(0)
print(len(curr_per_cust))
print(len(curr_per_cust.unique()))
print(len(curr_per_cust) - len(curr_per_cust.unique()))
Explanation: 3.1 Currency per country
Check how many cards make purchases in several currencies:
End of explanation
curr_per_country0 = dataset0.groupby(['Country'])['Currency'].value_counts(normalize=True)
curr_per_country1 = dataset1.groupby(['Country'])['Currency'].value_counts(normalize=True)
curr_per_country0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'currency_per_country0.csv'))
curr_per_country1.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'currency_per_country1.csv'))
Explanation: CONCLUSION: Only 243 cards out of 54,000 puchased things in several currencies.
Estimate the probability of selection a currency, given a country:
End of explanation
plt.figure(figsize=(7,5))
currencies = dataset01['Currency'].unique()
merchants = dataset01['MerchantID'].unique()
for curr_idx in range(len(currencies)):
for merch_idx in range(len(merchants)):
plt.plot(range(len(currencies)), np.zeros(len(currencies))+merch_idx, 'r-', linewidth=0.2)
if currencies[curr_idx] in dataset01.loc[dataset01['MerchantID'] == merch_idx, 'Currency'].values:
plt.plot(curr_idx, merch_idx, 'ko')
plt.xticks(range(len(currencies)), currencies)
plt.ylabel('Merchant ID', size=15)
plt.xlabel('Currency', size=15)
plt.tight_layout()
plt.show()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'currency_per_merchant'))
Explanation: 4. Merchants
4.1: Merchants per Currency
End of explanation
merch_per_curr0 = dataset0.groupby(['Currency'])['MerchantID'].value_counts(normalize=True)
merch_per_curr1 = dataset1.groupby(['Currency'])['MerchantID'].value_counts(normalize=True)
merch_per_curr0.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_per_currency0.csv'))
merch_per_curr1.to_csv(join(utils_data.FOLDER_SIMULATOR_INPUT, 'merchant_per_currency1.csv'))
Explanation: We conclude from this that most merchants only sell things in one currenyc; thus, we will let each customer select the merchant given the currency that the customer has (which is unique).
Estimate the probability of selection a merchat, given the currency:
End of explanation
merchant_count0 = dataset0['MerchantID'].value_counts().sort_index()
merchant_count1 = dataset1['MerchantID'].value_counts().sort_index()
plt.figure(figsize=(15,10))
ax = plt.subplot(2, 1, 1)
ax.bar(merchant_count0.index.values, merchant_count0.values)
rects = ax.patches
for rect, label in zip(rects, merchant_count0.values):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height, label, ha='center', va='bottom')
plt.ylabel('num transactions')
plt.xticks([])
plt.xlim([-0.5, data_stats.loc['num merchants', 'all']+0.5])
ax = plt.subplot(2, 1, 2)
ax.bar(merchant_count1.index.values, merchant_count1.values)
rects = ax.patches
for rect, label in zip(rects, merchant_count1.values):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height, label, ha='center', va='bottom')
plt.ylabel('num transactions')
plt.xlabel('Merchant ID')
plt.xlim([-0.5, data_stats.loc['num merchants', 'all']+0.5])
plt.tight_layout()
plt.show()
Explanation: 4.2 Number transactions per merchant
End of explanation
plt.figure(figsize=(12, 10))
plt_idx = 1
for d in datasets:
plt.subplot(2, 1, plt_idx)
plt.plot(range(d.shape[0]), d['Amount'], 'k.')
# plt.plot(date_num, amount, 'k.', label='num trans.')
# plt.plot(date_num, np.zeros(len(date_num))+np.mean(all_trans), 'g',label='average')
plt_idx += 1
# plt.title(d.name, size=20)
plt.xlabel('transactions', size=15)
plt.xticks([])
if plt_idx == 2:
plt.ylabel('amount', size=15)
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'amount_day-in-year'))
plt.show()
print(dataset0.loc[dataset0['Amount'] == 5472.53,['Local_Date', 'CardID', 'MerchantID', 'Amount', 'Currency', 'Country']])
Explanation: 5. Transaction Amount
5.1 Amount over time
End of explanation
plt.figure(figsize=(10,5))
bins = [0, 5, 25, 50, 100, 1000, 11000]
plt_idx = 1
for d in datasets:
amount_counts, loc = np.histogram(d["Amount"], bins=bins)
amount_counts = np.array(amount_counts, dtype=np.float)
amount_counts /= np.sum(amount_counts)
plt.subplot(1, 2, plt_idx)
am_bot = 0
for i in range(len(amount_counts)):
plt.bar(plt_idx, amount_counts[i], bottom=am_bot, label='{}-{}'.format(bins[i], bins[i+1]))
am_bot += amount_counts[i]
plt_idx += 1
plt.ylim([0, 1.01])
plt.legend()
# plt.title("Amount distribution")
plt_idx += 1
plt.show()
plt.figure(figsize=(12, 10))
plt_idx = 1
for d in datasets:
plt.subplot(2, 1, plt_idx)
min_amount = min(d['Amount'])
max_amount = max(d['Amount'])
plt.plot(range(d.shape[0]), np.sort(d['Amount']), 'k.', label='transaction')
# plt.plot(date_num, amount, 'k.', label='num trans.')
plt.plot(np.linspace(0, d.shape[0], 100), np.zeros(100)+np.mean(d['Amount']), 'g--',label='average')
plt_idx += 1
plt.title(d.name, size=20)
plt.ylabel('amount', size=15)
if plt_idx == 3:
plt.xlabel('transactions', size=15)
else:
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'amount_day-in-year'))
plt.show()
Explanation: 5.2 Amount distribution
End of explanation
from scipy.optimize import curve_fit
def sigmoid(x, x0, k):
y = 1 / (1 + np.exp(-k * (x - x0)))
return y
num_merchants = data_stats.loc['num merchants', 'all']
num_bins = 20
merchant_amount_distr = np.zeros((2, num_merchants, 2*num_bins+1))
plt.figure(figsize=(15, 5))
plt_idx = 1
for dataset in [dataset0, dataset1]:
for m in dataset0['MerchantID'].unique():
# get all transactions from this merchant
trans_merch = dataset.loc[dataset['MerchantID']==m]
num_transactions = trans_merch.shape[0]
if num_transactions > 0:
# get the amounts paid for the transactions with this merchant
amounts = trans_merch['Amount']
bins_height, bins_edges = np.histogram(amounts, bins=num_bins)
bins_height = np.array(bins_height, dtype=np.float)
bins_height /= np.sum(bins_height)
merchant_amount_distr[int(plt_idx > 7), (plt_idx-1)%7, :] = np.concatenate((bins_height, bins_edges))
plt.subplot(2, num_merchants, plt_idx)
plt.hist(amounts, bins=num_bins)
plt_idx += 1
plt.tight_layout()
plt.show()
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT,'merchant_amount_distr'), merchant_amount_distr)
from scipy.optimize import curve_fit
def sigmoid(x, x0, k):
y = 1 / (1 + np.exp(-k * (x - x0)))
return y
num_merchants = data_stats.loc['num merchants', 'all']
merchant_amount_parameters = np.zeros((2, num_merchants, 4))
plt.figure(figsize=(15, 5))
plt_idx = 1
for dataset in [dataset0, dataset1]:
for m in dataset0['MerchantID'].unique():
# get all transactions from this merchant
trans_merch = dataset.loc[dataset['MerchantID']==m]
num_transactions = trans_merch.shape[0]
if num_transactions > 0:
# get the amounts paid for the transactions with this merchant
amounts = np.sort(trans_merch['Amount'])
min_amount = min(amounts)
max_amount = max(amounts)
amounts_normalised = (amounts - min_amount) / (max_amount - min_amount)
plt.subplot(2, num_merchants, plt_idx)
plt.plot(np.linspace(0, 1, num_transactions), amounts, '.')
# fit sigmoid
x_vals = np.linspace(0, 1, 100)
try:
p_sigmoid, _ = curve_fit(sigmoid, np.linspace(0, 1, num_transactions), amounts_normalised)
amounts_predict = sigmoid(x_vals, *p_sigmoid)
amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
plt.plot(x_vals, amounts_predict_denormalised)
except:
# fit polynomial
p_poly = np.polyfit(np.linspace(0, 1, num_transactions), amounts_normalised, 2)
amounts_predict = np.polyval(p_poly, x_vals)
p_sigmoid, _ = curve_fit(sigmoid, x_vals, amounts_predict)
amounts_predict = sigmoid(x_vals, *p_sigmoid)
amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
plt.plot(x_vals, amounts_predict_denormalised)
merchant_amount_parameters[int(plt_idx > 7), (plt_idx-1)%7] = [min_amount, max_amount, p_sigmoid[0], p_sigmoid[1]]
plt_idx += 1
plt.tight_layout()
plt.show()
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT,'merchant_amount_parameters'), merchant_amount_parameters)
print(merchant_amount_parameters)
Explanation: For each merchant, we will have a probability distribution over the amount spent
End of explanation
from scipy.optimize import curve_fit
def sigmoid(x, x0, k):
y = 1 / (1 + np.exp(-k * (x - x0)))
return y
num_merchants = data_stats.loc['num merchants', 'all']
merchant_amount_parameters = np.zeros((2, num_merchants, 4))
plt.figure(figsize=(6, 3))
plt_idx = 1
dataset = dataset0
m = dataset0['MerchantID'].unique()[0]
# get all transactions from this merchant
trans_merch = dataset.loc[dataset['MerchantID']==m]
num_transactions = trans_merch.shape[0]
# get the amounts paid for the transactions with this merchant
amounts = np.sort(trans_merch['Amount'])
min_amount = min(amounts)
max_amount = max(amounts)
amounts_normalised = (amounts - min_amount) / (max_amount - min_amount)
plt.plot(range(num_transactions), amounts, 'k-', linewidth=2, label='real')
# fit sigmoid
x_vals = np.linspace(0, 1, 100)
x = np.linspace(0, 1, num_transactions)
p_sigmoid, _ = curve_fit(sigmoid, np.linspace(0, 1, num_transactions), amounts_normalised)
amounts_predict = sigmoid(x_vals, *p_sigmoid)
amounts_predict_denormalised = amounts_predict * (max_amount - min_amount) + min_amount
plt.plot(np.linspace(0, num_transactions, 100), amounts_predict_denormalised, 'm--', linewidth=3, label='approx')
merchant_amount_parameters[int(plt_idx > 7), (plt_idx-1)%7] = [min_amount, max_amount, p_sigmoid[0], p_sigmoid[1]]
plt.xlabel('transaction count', fontsize=20)
plt.ylabel('price', fontsize=20)
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig(join(utils_data.FOLDER_REAL_DATA_ANALYSIS, 'merchant_price_sigmoid_fit'))
plt.show()
Explanation: We conclude that the normal customers and fraudsters follow roughly the same distribution, so we will only have one per merchant; irrespective of whether a genuine or fraudulent customer is making the transaction.
End of explanation
plt.figure(figsize=(15, 30))
plt_idx = 1
dist_transactions = [[], []]
for d in datasets:
# d = d.loc[d['Date'].apply(lambda date: date.month) < 7]
# d = d.loc[d['Date'].apply(lambda date: date.month) > 3]
plt.subplot(1, 2, plt_idx)
trans_idx = 0
for card in dataset01['CardID'].unique():
card_times = d.loc[d['CardID'] == card, 'Global_Date']
dist_transactions[plt_idx-1].extend([(card_times.iloc[i+1] - card_times.iloc[i]).days for i in range(len(card_times)-1)])
if plt_idx == 2:
num_c = 2
else:
num_c = 10
if len(card_times) > num_c:
card_times = card_times.apply(lambda date: date.date())
card_times = matplotlib.dates.date2num(card_times)
plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k.', markersize=1)
plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k-', linewidth=0.2)
trans_idx += 1
min_date = matplotlib.dates.date2num(min(dataset01['Global_Date']).date())
max_date = matplotlib.dates.date2num(max(dataset01['Global_Date']).date())
# plt.xlim([min_date, max_date])
plt.xticks([])
for m in range(1,13):
datenum = matplotlib.dates.date2num(datetime(2016, m, 1))
plt.plot(np.zeros(2)+datenum, [-1, 1000], 'r-', linewidth=0.5)
if plt_idx == 1:
plt.ylim([0,300])
else:
plt.ylim([0, 50])
plt_idx += 1
plt.show()
# average distance between two transactions with the same card
print(np.mean(dist_transactions[0]))
print(np.mean(dist_transactions[1]))
Explanation: Customers
Here we want to find out how long customers/fraudsters return, i.e., how often the same credit card is used over time.
End of explanation
prob_stay = np.zeros(2)
for k in range(2):
dataset = [dataset0, dataset1][k]
creditcards = dataset.loc[dataset['Global_Date'].apply(lambda d: d.month) > 3]
creditcards = creditcards.loc[creditcards['Global_Date'].apply(lambda d: d.month) < 6]
creditcard_counts = creditcards['CardID'].value_counts()
creditcardIDs = creditcards['CardID']
data = dataset.loc[dataset['Global_Date'].apply(lambda d: d.month) > 3]
single = 0
multi = 0
for i in range(len(creditcards)):
cc = creditcards.iloc[i]['CardID']
dd = creditcards.iloc[i]['Global_Date']
cond1 = data['CardID'] == cc
cond2 = data['Global_Date'] > dd
if len(data.loc[np.logical_and(cond1, cond2)]) == 0:
single += 1
else:
multi += 1
prob_stay[k] = multi/(single+multi)
print('probability of doing another transaction:', prob_stay[k], '{}'.format(['non-fraud', 'fraud'][k]))
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'prob_stay'), prob_stay)
Explanation: At a given transaction, estimate the probability of doing another transaction with the same card.
End of explanation
cards0 = dataset0['CardID'].unique()
cards1 = dataset1['CardID'].unique()
print('cards total:', len(np.union1d(cards0, cards1)))
print('fraud cards:', len(cards1))
print('intersection:', len(np.intersect1d(cards0, cards1)))
# go through the cards that were in both sets
cards0_1 = []
cards1_0 = []
cards010 = []
for cib in np.intersect1d(cards0, cards1):
date0 = dataset0.loc[dataset0['CardID']==cib].iloc[0]['Global_Date']
date1 = dataset1.loc[dataset1['CardID']==cib].iloc[0]['Global_Date']
if date0 < date1:
cards0_1.append(cib)
# genuine purchases after fraud
dates00 = dataset0.loc[dataset0['CardID']==cib].iloc[1:]['Global_Date']
if len(dates00)>0:
if sum(dates00>date1)>0:
cards010.append(cib)
else:
cards1_0.append(cib)
print('first genuine then fraud: ', len(cards0_1))
print('first fraud then genuine: ', len(cards1_0))
print('genuine again after fraud: ', len(cards010))
prob_stay_after_fraud = len(cards010)/len(cards0_1)
print('prob of purchase after fraud: ', prob_stay_after_fraud)
np.save(join(utils_data.FOLDER_SIMULATOR_INPUT, 'prob_stay_after_fraud'), prob_stay_after_fraud )
plt.figure(figsize=(10, 25))
dist_transactions = []
trans_idx = 0
data_compromised = dataset01.loc[dataset01['CardID'].apply(lambda cid: cid in np.intersect1d(cards0, cards1))]
no_trans_after_fraud = 0
trans_after_fraud = 0
for card in data_compromised['CardID'].unique():
cards_used = data_compromised.loc[data_compromised['CardID'] == card, ['Global_Date', 'Target']]
dist_transactions.extend([(cards_used.iloc[i+1, 0] - cards_used.iloc[i, 0]).days for i in range(len(cards_used)-1)])
card_times = cards_used['Global_Date'].apply(lambda date: date.date())
card_times = matplotlib.dates.date2num(card_times)
plt.plot(card_times, np.zeros(len(card_times)) + trans_idx, 'k-', linewidth=0.9)
cond0 = cards_used['Target'] == 0
plt.plot(card_times[cond0], np.zeros(len(card_times[cond0])) + trans_idx, 'g.', markersize=5)
cond1 = cards_used['Target'] == 1
plt.plot(card_times[cond1], np.zeros(len(card_times[cond1])) + trans_idx, 'r.', markersize=5)
if max(cards_used.loc[cards_used['Target']==0, 'Global_Date']) > max(cards_used.loc[cards_used['Target']==1, 'Global_Date']):
trans_after_fraud += 1
else:
no_trans_after_fraud += 1
trans_idx += 1
min_date = matplotlib.dates.date2num(min(dataset01['Global_Date']).date())
max_date = matplotlib.dates.date2num(max(dataset01['Global_Date']).date())
plt.xticks([])
plt.ylim([0, trans_idx])
# print lines for months
for m in range(1,13):
datenum = matplotlib.dates.date2num(datetime(2016, m, 1))
plt.plot(np.zeros(2)+datenum, [-1, 1000], 'r-', linewidth=0.5)
plt_idx += 1
plt.show()
print("genuine transactions after fraud: ", trans_after_fraud)
print("fraud is the last transaction: ", no_trans_after_fraud)
Explanation: Fraud behaviour
End of explanation
plt.figure(figsize=(10, 25))
dist_transactions = []
trans_idx = 0
for card in data_compromised['CardID'].unique():
cards_used = data_compromised.loc[data_compromised['CardID'] == card, ['Global_Date', 'Target', 'Country', 'Currency']]
if len(cards_used['Country'].unique()) > 1 or len(cards_used['Currency'].unique()) > 1:
print(cards_used)
print("")
Explanation: when a fraudster uses an existing card, are country and currency always the same?
End of explanation |
10,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step2: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complimentary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are also used, as it allows for the focusing of light to a single focal point where a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, it can get even more confusing).
Step3: Figure 1.10.1
Step4: Figure 1.10.2
Step6: Figure 1.10.3
Step7: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with a suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescopes are physically much smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. By leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument will mimic a slowly varying pointing in the sky (a so called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
Step8: Figure 1.10.4a
Step9: Figure 1.10.4b
Step10: Figure 1.10.5
Step11: Figure 1.10.6a
Step12: Figure 1.10.6b
Step13: Figure 1.10.6c
Step14: Figure 1.10.6d
Step15: Figure 1.10.6e | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.9 A brief introduction to interferometry
Next: 1.11 Modern Interferometric Arrays
Section status: <span style="background-color:yellow"> </span>
Import standard modules:
End of explanation
import ipywidgets
from IPython.display import Image
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/hart_26m_15m_2012-09-11_08511.jpg')
Explanation: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complimentary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are also used, as it allows for the focusing of light to a single focal point where a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, it can get even more confusing).
End of explanation
Image(filename='figures/kaira_lba_element.jpg')
Explanation: Figure 1.10.1: 26 meter dish at HartRAO, South Africa used for single dish observations and as part of interferometric VLBI networks. Credit: M Gaylard / HartRAO⤴
End of explanation
Image(filename='../5_Imaging/figures/2013_kat7_20.jpg')
Explanation: Figure 1.10.2: LOFAR LBA dipole element. Credit: KAIRA/D. McKay-Bukowski⤴
End of explanation
def WhichDiameter(wavelength=1., angres=(15e-3/3600)):
Compute the diameter of an aperture as a function of angular resolution and observing wavelength
c = 299792458. #spped of light, m/s
freq = c/(wavelength)/1e6 #
D = 1.22 * wavelength/np.radians(angres) # assuming a circular aperture
print '\n'
print 'At a frequency of %.3f MHz (Lambda = %.3f m)'%(freq, wavelength)
print 'the aperture diameter is D = %f m'%D
print 'to achieve an angular resolution of %f degrees / %f arcmin / %f arcsec'%(angres, angres*60, angres*3600)
print '\n'
w = ipywidgets.interact(WhichDiameter, angres=((15e-3/3600), 10, 1e-5), wavelength=(0.5e-6, 1, 1e-7))
Explanation: Figure 1.10.3: Inner 5 dishes of KAT-7, a 7 element interferometric array located in South Africa which can be combined into a single synthesized telescope. Credit: SKA-SA⤴
<span style="background-color:yellow"> LB:LF:this link seems to have died</span>
Depending on the science goals of an experiment or observatory, different types of telescopes are built. So what is the main driver for building an interferometric array to create a synthesized telescope? It all comes down to the resolution of a telescope, a property which is related to the wavelength of incoming light and the physical size of the telescope.
1.10.1. Aperture Diameter and Angular Resolution
If we consider a generic dish radio telescope, ignoring blockage from feeds and structure and any practical issues, we can think of the dish as having a circular aperture. We will use the term 'primary beam' later in Chapter 7 to discuss this aperture in detail. Until then we can think of the dish aperture size as being the collecting area. The larger the aperture the more collecting area, thus the more sensitive (a measure of how well the telescope is able to measure a signal) the telescope. This is the same as in photography. Since we are modelling our simple telescope as a circle then the collection area $A$, or aperture size, is proportional to the diameter of the dish $D$.
$$A \propto D^2$$
A larger aperture also increases the maximum angular resolution of the telescope i.e. the ability to differentiate between two sources (say stars) which are separated by some angular distance. Using the Rayleigh criterion the angular resolution $\Delta \theta$ (in radians) of a dish of diameter $D$ is
$$\Delta \theta = 1.22 \frac{\lambda}{D}$$
where $\lambda$ is the observing wavelength. Since light in the radio regime of the electromagnetic spectrum has a longer wavelength compared to that in the optical regime, we can see that a radio telescope with the same collecting area diameter as an optical telescope will have a much lower angular resolution.
<div class=warn>
<b>Warning:</b> Note that a higher value of $\Delta \theta$ implies lower angular resolution and vice versa.
</div>
The sensitivity of a telescope is directly proportional to its collecting area. The angular resolution of the telescope is inversely proportional to the aperture diameter. Usually, we want both high sensitivity and fine angular resolution, since we are interested in accurately measuring the strength of the signal and positions of sources. A natural way to improve both the sensitivity and angular resolution of a single telescope is to increase the collecting area.
The following table shows the angular resolution as a function of aperture diameter $D$ and observing wavelength for a single dish telescope.
| Telescope Type | Angular Resolution <br> $\Delta \theta$ | Visible <br> $\lambda$ = 500 nm | Infrared <br> $\lambda$ = 10 $\mu$m | Radio EHF <br> $\lambda$ = 10 mm <br> 30 GHz | Radio UHF <br> $\lambda$ = 1 m <br> 300 Mhz|
|:---:|:---:|:---:|:---:|:---:|:---:|
| Amatuer | 0.8'' | 15 cm | 3 m | 3 km | 300 km |
| Automated Follow-up | 0.25'' | 50 cm | 10 m | 10 km | 100 km |
| Small Science | 0.12'' | 1 m | 21 m | 21 km | 2100 km |
| Large Science | 0.015'' (15 mas) | 8 m | 168 m | 168 km | 16800 km |
Table 1.10.1: Angular resolution of a telescope as a function of the aperture diameter $D$ and observing wavelength.
As we can see from the table, a radio telescope requires a diameter which is many orders of magnitude larger than that of an optical telescope to achieve the same angular resolution. It is very reasonable to build a 15 cm optical telescope, in fact they can be easily bought at a store. But a radio telescope, observing at 300 MHz, which has the same resolution (0.8 arcseconds) needs to have an aperture of 300 km! Now, this would not only be prohibitively expensive, but the engineering is completely infeasible. Just for reference, the largest single dish telescopes are on the order of a few hundred meters in diameter (see FAST in China, Arecibo in Puerto Rico). The following example shows how the diameter of a telescope varies as a function of observing wavelength and desired angular resolution.
End of explanation
Image(filename='figures/gbt_300foot_telescope.jpg')
Explanation: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with a suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescopes are physically much smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. By leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument will mimic a slowly varying pointing in the sky (a so called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
End of explanation
Image(filename='figures/gbt_300foot_collapse.jpg')
Explanation: Figure 1.10.4a: 300 foot Green Bank Telescope located in West Virgina, USA during initial operations in 1962. Credit: NRAO⤴
End of explanation
Image(filename='figures/arecibo_observatory.jpg')
Explanation: Figure 1.10.4b: November, 1988, a day after the collapse of the 300 foot GBT telescope due to structural defects. Credit: NRAO⤴
End of explanation
Image(filename='figures/cartoon_1.png')
Explanation: Figure 1.10.5: 300 m Arecibo Telescope lying in a natural cavity in Puerto Rico. The receiver is located in the white spherical structure held up by wires, and is repositioned to "point" the telescope. Credit: courtesy of the NAIC - Arecibo Observatory, a facility of the NSF⤴
1.10.3 Creating a Synthesized Telescope using Interferometry
Here we will attempt to develop some intuition for what an interferometric array is and how it is related to a single dish telescope. We will construct a cartoon example before getting into the mathematics. A simple single dish telescope is made up of a primary reflector dish on a mount to point in some direction in the sky and a signal receptor at the focal point of the reflector (Figure 1.3.6a). The receptor is typically an antenna in the case of radio astronomy or a camera in optical astronomy.
Basic optics tells us how convex lenses can be used to form real images of sources that are very far away. The image of a source that is infinitely far away will form at exactly the focal point of the lens, the location of which is completely determined by the shape of the lens (under the "thin lens" approximation). Sources of astrophysical interest can be approximated as being infinitely far away as long as they are at distances much farther away than the focal point of the lens. This is immediately obvious from the equation of a thin convex lens:
$$ \frac{1}{o} + \frac{1}{i} = \frac{1}{f}, $$
where $i, ~ o$ and $f$ are the image, object and focal distances respectively. Early astronomers exploited this useful property of lenses to build the first optical telescopes. Later on concave mirrors replaced lenses because it was easier to control their physical and optical properties (e.g. curvature, surface quality etc.). Reflective paraboloids are the most efficient at focussing incoming plane waves (travelling on-axis) into a single locus (the focal point) and are therefore a good choice for the shape of a collector.
In our simple model the sky only contains a single astrophysical source, which is detected by pointing the telescope towards its location in the sky.
End of explanation
Image(filename='figures/cartoon_2.png')
Explanation: Figure 1.10.6a: A simple dish telescope which reflects incoming plane waves (red dashed) along ray tracing paths (cyan) to a receptor at the focal point of the parabolic dish.
Ignoring real world effects like aperture blockage and reflector inefficiencies, plane waves are focused to a single point using a parabolic reflector (at that focus if a signal receptor). We can imagine the reflector is made up of many smaller reflectors, each with its own reflection path. A single dish, in the limit of fully sampling the observing wavelength $\lambda$, can be thought of as being made up of enough reflectors of diameter $\lambda/2$ to fill the collecting area of the dish. In our simple example, we just break the dish into 8 reflectors (Figure 1.10.6b). This is in fact what is often done with very large telescopes when it is not feasible to build a single large mirror, such as in the W. M. Keck Observatory. At this point we have not altered the telescope, we are just thinking about the reflector as being made up of multiple smaller reflectors.
<div class=advice>
<b>Note:</b> We can interpret a single dish telescope as a *continuous interferometer* by applying the Wiener-Khinchin theorem. See Chapter 2 of [<cite data-cite='2007isra.book.....T'>Interferometry and Synthesis in Radio Astronomy</cite> ⤴](http://adsabs.harvard.edu/abs/2007isra.book.....T) for an in depth discussion.
</div>
End of explanation
Image(filename='figures/cartoon_3.png')
Explanation: Figure 1.10.6b: The dish reflector can be thought of as being made up of multiple smaller reflectors, each with its own light path to the focus.
Now, instead of capturing all the signal at a single point, there is no reason we can not capture the signal at the smaller, individual reflector focus points. If that signal is captured, we can digitally combine the signals at the main focus point later (Figure 1.10.6c). This is the first trick of interferometry. Radio waves can be sufficiently sampled in time to digitally record the signals (this becomes more difficult at higher frequencies, and not possible in the near-infrared and higher). The cost is that a receptor needs to be built for each sub-reflector, and additional hardware is required to combine the signals. The dish optically combines the light, we are simply doing the same thing digitally.
End of explanation
Image(filename='figures/cartoon_4.png')
Explanation: Figure 1.10.6c: A receptor at each sub-reflector captures the light signals. To recreate the combined signal at the main receptor the signals are digitally combined.
The next leap is that there is no reason the sub-reflectors need to be set in the shape of a dish since the combination of the signal at the main focus is performed done digitally. Since light travels at a constant speed any repositioning of a sub-reflector just requires a time delay correction. So we can move each element to the ground and construct a pointing system for each sub-reflector (Figure 1.10.6d). We now have an array of smaller single dish telescopes! By including the correct time delays on each signal, we can measure the same signal as the original, larger single dish telescope. This digital operation is called beamforming and is very important in interferometry.
End of explanation
Image(filename='figures/cartoon_5.png')
Explanation: Figure 1.10.6d: The sub-reflector elements of the original telescope are set on the ground with their own pointing systems. The original signal can be reconstructed digitally and by including the appropriate time delay for each telescope.
The beamforming operation recombines all the signals into a single signal, which can be thought of as a single pixel camera. However, we can do even better using a correlator. By correlating the signals we can compute visibilities which are then used to form an image (Figure 1.10.6e). This will be explained in more depth in the chapters that follow. For now it is important to know that interferometric arrays have an advantage over single dish telescopes viz. by combining signals from multiple smaller telescopes we can 'synthesize' a much larger telescope than can be constructed from a single dish. The correlator also allows for the creation of image over a beamformer at the cost of additional computing hardware.
<span style="background-color:yellow"> LB:RC: this last sentence is not clear, I am not sure what it is trying to say</span>
End of explanation
Image(filename='figures/cartoon_6.png')
Explanation: Figure 1.10.6e: By using correlator hardware instead of a beamformer an image of the sky can be created.
The next trick of interferometry is that we do not necessarily need to sample the entire original dish (Figure 1.10.6f). We do lose sensitivity and, as will be discussed in later chapters, spatial frequency modes, but by using only a subset of elements and exploiting interferometry we can build synthesized telescopes that are many kilometres in diameter (e.g. MeerKAT) or as large as the Earth (e.g. VLBI networks). This is why radio interferometry can be used to produce the highest resolution telescopes in the world.
End of explanation |
10,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some very basic python
Showing some very basic python, variables, arrays, math and plotting
Step1: Python versions
Python2 and Python3 are still being used today. So safeguard printing between python2 and python3 you will need a special import for old python2
Step2: Now we can print(pi) in python2. The old style would be print pi
Step3: Control stuctures
Most programming languages have a way to control the flow of the program. The common ones are
if/then/else
for-loop
while-loop
Step4: Python Data Structures
A list is one of four major data structures (lists, dictionaries, sets, tuples) that python uses. It is the most simple one, and has direct parallels to those in other languages such as Fortran, C/C++, Java etc.
Python Lists
Python uses special symbols to make up these collection, briefly they are
Step5: Math and Numeric Arrays | Python Code:
# setting a variable
a = 1.23
# just writing the variable will show it's value, but this is not the recommended
# way, because per cell only the last one will be printed and stored in the out[]
# list that the notebook maintains
a
a+1
# the right way to print is using the official **print** function in python
# and this way you can also print out multiple lines in the out[]
print(a)
print(type(a),str(a))
b=2
print(b)
# overwriting the same variable , now as a string
a="1.23"
a,type(a)
# checking the value of the variable
a
Explanation: Some very basic python
Showing some very basic python, variables, arrays, math and plotting:
End of explanation
from __future__ import print_function
Explanation: Python versions
Python2 and Python3 are still being used today. So safeguard printing between python2 and python3 you will need a special import for old python2:
End of explanation
pi = 3.1415
print("pi=",pi)
print("pi=%15.10f" % pi)
# for reference, here is the old style of printing in python2
# print "pi=",pi
Explanation: Now we can print(pi) in python2. The old style would be print pi
End of explanation
n = 1
if n > 0:
print("yes, n>0")
else:
print("not")
for i in [2,4,n,6]:
print("i=",i)
print("oulala, i=",i)
n = 10
while n>0:
# n = n - 2
print("whiling",n)
n = n - 2
print("last n",n)
Explanation: Control stuctures
Most programming languages have a way to control the flow of the program. The common ones are
if/then/else
for-loop
while-loop
End of explanation
a1 = [1,2,3,4]
a2 = range(1,5)
print(a1)
print(a2)
a2 = ['a',1,'cccc']
print(a1)
print(a2)
a3 = range(12,20,2)
print(a3)
a1=range(3)
a2=range(1,4)
print(a1,a2)
list(a1)+list(a2)  # in python3, range() returns a lazy range object, so convert to list before concatenating
Explanation: Python Data Structures
A list is one of four major data structures (lists, dictionaries, sets, tuples) that python uses. It is the most simple one, and has direct parallels to those in other languages such as Fortran, C/C++, Java etc.
Python Lists
Python uses special symbols to make up these collection, briefly they are:
* list: [1,2,3]
* dictionary: { "a":1 , "b":2 , "c": 3}
* set: {1,2,3,"abc"}
* tuple: (1,2,3)
End of explanation
import math
import numpy as np
math.pi
np.pi
# %matplotlib inline
import matplotlib.pyplot as plt
a=np.arange(0,1,0.01)
b = a*a
c = np.sqrt(a)
plt.plot(a,b,'-bo',label='b')
plt.plot(a,c,'-ro',label='c')
plt.legend()
plt.plot(a,a+1)
plt.show()
Explanation: Math and Numeric Arrays
End of explanation |
10,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Práctica 1 - Introducción a Jupyter lab y libreria robots
Introducción a la libreria robots y sus simuladores
Recordando los resultados obtenidos en el documento anterior, tenemos que
Step1: Una vez que tenemos la función f, tenemos que importar la función odeint de la librería integrate de scipy.
Step2: Esta función implementará un método numérico para integrar el estado del sistema a traves del tiempo, definiremos tambien un vector con todos los tiempos en los que nos interesa saber el estado del sistema, para esto importamos linspace.
Step3: Definimos los tiempos como $0s$ hasta $10s$ con mil datos en total, y la condición inicial de nuestro sistema
Step4: Y con estos datos, mandamos llamar a la función odeint
Step5: Para visualizar estos datos importamos la libreria matplotlib para graficación
Step6: Y graficamos
Step7: Sin embargo, en esta clase utilizaremos un sistema de simulación a medida, el cual nos quita la responsabilidad del codigo de graficación y nos brindará algunas ventajas en practicas posteriores. A continuación importamos la función simulador de la librería simuladores del paquete robots
Step8: Nota que dentro de la carpeta de esta práctica existe una carpeta llamada robots, esta carpeta contiene todo el código que hace funcionar las siguientes funciones, si deseas averiguar como funciona, puedes abrir esos archivos y estudiarlos.
Definimos de nuevo la función f con la única diferencia del orden de las entradas
Step9: Mandamos llamar al simulador | Python Code:
def f(x, t):
# Se importan funciones matematicas necesarias
from numpy import cos
# Se desenvuelven las variables que componen al estado
q1, q̇1 = x
# Se definen constantes del sistema
g = 9.81
m1, J1 = 0.3, 0.0005
l1 = 0.2
τ1 = 0
# Se calculan las variables a devolver por el sistema
q1p = q̇1
q1pp = 1/(m1*l1**2 + J1)*(τ1 - g*m1*l1*cos(q1))
# Se devuelve la derivada de las variables de entrada
return [q1p, q1pp]
Explanation: Práctica 1 - Introducción a Jupyter lab y libreria robots
Introducción a la libreria robots y sus simuladores
Recordando los resultados obtenidos en el documento anterior, tenemos que:
$$
\tau_1 = \left[ m_1 l_1^2 + J_1 \right] \ddot{q}_1 + g\left[ m_1 l_1 \cos(q_1) \right]
$$
sin embargo, no tenemos una buena manera de simular el comportamiento de este sistema ya que este sistema es de segundo orden y no lineal; por lo que el primer paso para simular este sistema es convertirlo a un sistema de primer orden.
Primero necesitamos definir una variable de estado para el sistema, y ya que queremos convertir un sistema de segundo orden, en uno de primer orden, necesitamos duplicar el numero de variables.
Nota: Si tuvieramos un sistema de tercer orden tendriamos que triplicar el numero de variables para convertirlo en un sistema de primer orden y asi sucesivamente.
Si definimos el estado del sistema $x$ como:
$$
x =
\begin{pmatrix} q_1 \ \dot{q}_1 \end{pmatrix}
$$
entonces es claro que podemos obtener la derivada del sistema haciendo:
$$
\dot{x} = \frac{d}{dt} x = \frac{d}{dt}
\begin{pmatrix} q_1 \ \dot{q}_1 \end{pmatrix} =
\begin{pmatrix} \dot{q}_1 \ \ddot{q}_1 \end{pmatrix}
$$
por lo que podemos utilizar métodos numéricos como el método de Euler, Runge-Kutta, etc. para obtener el comportamiento de estas variables a traves del tiempo, tomando en cuenta que siempre es necesario tener una función $f$ tal que:
$$
\dot{x} = f(x,t)
$$
Nuestra tarea entonces, es construir una función f, tal que, cuando le demos como datos $x = \begin{pmatrix} q_1 \ \dot{q}_1 \end{pmatrix}$, nos pueda devolver como resultado $\dot{x} = \begin{pmatrix} \dot{q}_1 \ \ddot{q}_1 \end{pmatrix}$.
Para esto necesitamos calcular $\dot{q}_1$ y $\ddot{q}_1$, pero si revisamos la ecuación de movimiento calculada en el documento anterior (simbolico.ipynb), podemos ver que $\ddot{q}_1$ es facilmente despejable:
$$
\ddot{q}_1 = \frac{1}{m_1 l_1^2 + J_1} \left[ \tau_1 - g m_1 l_1 \cos{q}_1 \right]
$$
Para calcular $\dot{q}_1$, tan solo tenemos que darnos cuenta que uno de nuestros datos es $\dot{q}_1$, por lo que esta ecuación es trivial:
$$
\dot{q}_1 = \dot{q}_1
$$
con lo que ya tenemos todo lo necesario para construir la función f:
End of explanation
from scipy.integrate import odeint
Explanation: Una vez que tenemos la función f, tenemos que importar la función odeint de la librería integrate de scipy.
End of explanation
from numpy import linspace
Explanation: Esta función implementará un método numérico para integrar el estado del sistema a traves del tiempo, definiremos tambien un vector con todos los tiempos en los que nos interesa saber el estado del sistema, para esto importamos linspace.
End of explanation
ts = linspace(0, 10, 1000)
x0 = [0, 0]
Explanation: Definimos los tiempos como $0s$ hasta $10s$ con mil datos en total, y la condición inicial de nuestro sistema:
$$
x_0 = \begin{pmatrix} 0 \ 0 \end{pmatrix} = \begin{pmatrix} q_1 \ \dot{q}_1 \end{pmatrix}
$$
End of explanation
xs = odeint(f, x0, ts)
Explanation: Y con estos datos, mandamos llamar a la función odeint:
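For comparison, the same state could be advanced with a hand-written fixed-step Euler loop (a purely illustrative sketch that assumes the f, x0 and ts defined above; odeint itself uses an adaptive, higher-order integrator):
from numpy import array
xs_euler = [array(x0, dtype=float)]
for k in range(1, len(ts)):
    dt = ts[k] - ts[k - 1]
    xs_euler.append(xs_euler[-1] + dt * array(f(xs_euler[-1], ts[k - 1])))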
End of explanation
from matplotlib.pyplot import plot, legend
%matplotlib widget
Explanation: Para visualizar estos datos importamos la libreria matplotlib para graficación:
End of explanation
graf_pos, graf_vel = plot(xs)
legend([graf_pos, graf_vel], ["posicion", "velocidad"])
Explanation: Y graficamos:
End of explanation
from robots.simuladores import simulador
Explanation: Sin embargo, en esta clase utilizaremos un sistema de simulación a medida, el cual nos quita la responsabilidad del codigo de graficación y nos brindará algunas ventajas en practicas posteriores. A continuación importamos la función simulador de la librería simuladores del paquete robots:
End of explanation
def f(t, x):
# Se importan funciones matematicas necesarias
from numpy import cos
# Se desenvuelven las variables que componen al estado
q1, q̇1 = x
# Se definen constantes del sistema
g = 9.81
m1, J1 = 0.3, 0.0005
l1 = 0.4
τ1 = 0
# Se calculan las variables a devolver por el sistema
q1p = q̇1
q1pp = 1/(m1*l1**2 + J1)*(τ1 - g*m1*l1*cos(q1))
# Se devuelve la derivada de las variables de entrada
return [q1p, q1pp]
Explanation: Nota que dentro de la carpeta de esta práctica existe una carpeta llamada robots, esta carpeta contiene todo el código que hace funcionar las siguientes funciones, si deseas averiguar como funciona, puedes abrir esos archivos y estudiarlos.
Definimos de nuevo la función f con la única diferencia del orden de las entradas:
End of explanation
%matplotlib widget
ts, xs = simulador(puerto_zmq="5551", f=f, x0=[0,0], dt=0.02)
Explanation: Mandamos llamar al simulador
End of explanation |
10,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Kernel Learning
By Saurabh Mahindre - <a href="https
Step1: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$. </br>
Selecting the kernel function
$k()$ and its parameters is an important issue in training. Kernels designed by humans usually capture one aspect of the data, so choosing one kernel means selecting exactly one such aspect. Combining several such aspects is therefore often better than selecting just one.
In shogun, MKL is the base class for multiple kernel learning. We can do classification
Step2: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers X-Y grid and is not too easy to classify.
Step3: Generating Kernel weights
Just to help us visualize let's use two gaussian kernels (GaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
Step4: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see that on plotting individual subkernels outputs and outputs of the MKL classification. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of doing mkl.apply to generate outputs.
Step5: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
Step6: As we can see the multiple kernel output seems just about right. Kernel 1 gives a sort of overfitting output while the kernel 2 seems not so accurate. The kernel weights are hence so adjusted to get a refined output. We can have a look at the errors by these subkernels to have more food for thought. Most of the time, the MKL error is lesser as it incorporates aspects of both kernels. One of them is strict while other is lenient, MKL finds a balance between those.
Step7: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundary of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
Step8: These are the type of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
Step9: As we can see the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights.The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels with different widths (1,5,7,10).
Step10: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The courses of the kernel weightings reflect the development of the learning problem
Step11: Let's plot five of the examples to get a feel of the dataset.
Step12: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration but we can see here how MKL is working behind the scene. What we have is two kernels with significantly different properties. The gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data. But it requires enough data to train on. The number of training examples here is 1000, which seems a bit less as total examples are 10000. We hope the polynomial kernel can counter this problem, since it will fit the polynomial for you using a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
Step13: The misclassified examples are surely pretty tough to predict. As seen from the accuracy MKL seems to work a shade better in the case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using MKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
Step14: Now that everything is initialized, let's see MKLOneclass in action by applying it on the test data and on the X-Y grid. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all shogun classes
import shogun as sg
from shogun import *
Explanation: Multiple Kernel Learning
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a>
This notebook is about multiple kernel learning in shogun. We will see how to construct a combined kernel, determine optimal kernel weights using MKL and use it for different types of classification and novelty detection.
Introduction
Mathematical formulation
Using a Combined kernel
Example: Toy Data
Generating Kernel weights
Binary classification using MKL
MKL for knowledge discovery
Multiclass classification using MKL
One-class classification using MKL
End of explanation
kernel = CombinedKernel()
Explanation: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$. </br>
Selecting the kernel function
$k()$ and it's parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data. Choosing one kernel means to select exactly one such aspect. Which means combining such aspects is often better than selecting.
In shogun the MKL is the base class for MKL. We can do classifications: binary, one-class, multiclass and regression too: regression.
Mathematical formulation (skip if you just want code examples)
</br>In a SVM, defined as:
$$f({\bf x})=\text{sign} \left(\sum_{i=0}^{N-1} \alpha_i k({\bf x}, {\bf x_i})+b\right)$$</br>
where ${\bf x_i},{i = 1,...,N}$ are labeled training examples ($y_i \in {±1}$).
One could make a combination of kernels like:
$${\bf k}(x_i,x_j)=\sum_{k=0}^{K} \beta_k {\bf k_k}(x_i, x_j)$$
where $\beta_k > 0$ and $\sum_{k=0}^{K} \beta_k = 1$
In the multiple kernel learning problem for binary classification one is given $N$ data points ($x_i, y_i$ )
($y_i \in {±1}$), where $x_i$ is translated via $K$ mappings $\phi_k(x) \rightarrow R^{D_k} $, $k=1,...,K$ , from the input into $K$ feature spaces $(\phi_1(x_i),...,\phi_K(x_i))$ where $D_k$ denotes dimensionality of the $k$-th feature space.
In MKL $\alpha_i$,$\beta$ and bias are determined by solving the following optimization program. For details see [1].
$$\mbox{min} \hspace{4mm} \gamma-\sum_{i=1}^N\alpha_i$$
$$ \mbox{w.r.t.} \hspace{4mm} \gamma\in R, \alpha\in R^N \nonumber$$
$$\mbox {s.t.} \hspace{4mm} {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0 \nonumber$$
$$ {\frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j \leq \gamma}, \forall k=1,\ldots,K\nonumber\
$$
Here C is a pre-specified regularization parameter.
Within shogun this optimization problem is solved using semi-infinite programming. For 1-norm MKL one of the two approaches described in [1] is used.
The first approach (also called the wrapper algorithm) wraps around a single kernel SVMs, alternatingly solving for $\alpha$ and $\beta$. It is using a traditional SVM to generate new violated constraints and thus requires a single kernel SVM and any of the SVMs contained in shogun can be used. In the MKL step either a linear program is solved via glpk or cplex or analytically or a newton (for norms>1) step is performed.
The second much faster but also more memory demanding approach performing interleaved optimization, is integrated into the chunking-based SVMlight.
Using a Combined kernel
Shogun provides an easy way to make combination of kernels using the CombinedKernel class, to which we can append any kernel from the many options shogun provides. It is especially useful to combine kernels working on different domains and to combine kernels looking at independent features and requires CombinedFeatures to be used. Similarly the CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object
End of explanation
num=30;
num_components=4
means=zeros((num_components, 2))
means[0]=[-1,1]
means[1]=[2,-1.5]
means[2]=[-1,-3]
means[3]=[2,1]
covs=array([[1.0,0.0],[0.0,1.0]])
gmm=GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.set_coef(array([1.0,0.0,0.0,0.0]))
xntr=array([gmm.sample() for i in range(num)]).T
xnte=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,1.0,0.0,0.0]))
xntr1=array([gmm.sample() for i in range(num)]).T
xnte1=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,0.0,1.0,0.0]))
xptr=array([gmm.sample() for i in range(num)]).T
xpte=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,0.0,0.0,1.0]))
xptr1=array([gmm.sample() for i in range(num)]).T
xpte1=array([gmm.sample() for i in range(5000)]).T
traindata=concatenate((xntr,xntr1,xptr,xptr1), axis=1)
trainlab=concatenate((-ones(2*num), ones(2*num)))
testdata=concatenate((xnte,xnte1,xpte,xpte1), axis=1)
testlab=concatenate((-ones(10000), ones(10000)))
#convert to shogun features and generate labels for data
feats_train=features(traindata)
labels=BinaryLabels(trainlab)
_=jet()
figure(figsize(18,5))
subplot(121)
# plot train data
_=scatter(traindata[0,:], traindata[1,:], c=trainlab, s=100)
title('Toy data for classification')
axis('equal')
colors=["blue","blue","red","red"]
# a tool for visualisation
from matplotlib.patches import Ellipse
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
for i in range(num_components):
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs, color=colors[i]))
Explanation: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers X-Y grid and is not too easy to classify.
End of explanation
width0=0.5
kernel0=sg.kernel("GaussianKernel", log_width=np.log(width0))
width1=25
kernel1=sg.kernel("GaussianKernel", log_width=np.log(width1))
#combine kernels
kernel.append_kernel(kernel0)
kernel.append_kernel(kernel1)
kernel.init(feats_train, feats_train)
mkl = MKLClassification()
#set the norm, weights sum to 1.
mkl.set_mkl_norm(1)
mkl.set_C(1, 1)
mkl.set_kernel(kernel)
mkl.set_labels(labels)
#train to get weights
mkl.train()
w=kernel.get_subkernel_weights()
print(w)
Explanation: Generating Kernel weights
Just to help us visualize let's use two gaussian kernels (GaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
End of explanation
size=100
x1=linspace(-5, 5, size)
x2=linspace(-5, 5, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=features(array((ravel(x), ravel(y))))
kernel0t=sg.kernel("GaussianKernel", log_width=np.log(width0))
kernel1t=sg.kernel("GaussianKernel", log_width=np.log(width1))
kernelt=CombinedKernel()
kernelt.append_kernel(kernel0t)
kernelt.append_kernel(kernel1t)
#initailize with test grid
kernelt.init(feats_train, grid)
mkl.set_kernel(kernelt)
#prediction
grid_out=mkl.apply()
z=grid_out.get_values().reshape((size, size))
figure(figsize=(10,5))
title("Classification using MKL")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
Explanation: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see that on plotting individual subkernels outputs and outputs of the MKL classification. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of doing mkl.apply to generate outputs.
End of explanation
z=grid_out.get_labels().reshape((size, size))
# MKL
figure(figsize=(20,5))
subplot(131, title="Multiple Kernels combined")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
comb_ker0=CombinedKernel()
comb_ker0.append_kernel(kernel0)
comb_ker0.init(feats_train, feats_train)
mkl.set_kernel(comb_ker0)
mkl.train()
comb_ker0t=CombinedKernel()
comb_ker0t.append_kernel(kernel0)
comb_ker0t.init(feats_train, grid)
mkl.set_kernel(comb_ker0t)
out0=mkl.apply()
# subkernel 1
z=out0.get_labels().reshape((size, size))
subplot(132, title="Kernel 1")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
comb_ker1=CombinedKernel()
comb_ker1.append_kernel(kernel1)
comb_ker1.init(feats_train, feats_train)
mkl.set_kernel(comb_ker1)
mkl.train()
comb_ker1t=CombinedKernel()
comb_ker1t.append_kernel(kernel1)
comb_ker1t.init(feats_train, grid)
mkl.set_kernel(comb_ker1t)
out1=mkl.apply()
# subkernel 2
z=out1.get_labels().reshape((size, size))
subplot(133, title="kernel 2")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
Explanation: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
End of explanation
kernelt.init(feats_train, features(testdata))
mkl.set_kernel(kernelt)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :MKL" % (100*evaluator.evaluate(out,BinaryLabels(testlab))))
comb_ker0t.init(feats_train,features(testdata))
mkl.set_kernel(comb_ker0t)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :Subkernel1"% (100*evaluator.evaluate(out,BinaryLabels(testlab))))
comb_ker1t.init(feats_train, features(testdata))
mkl.set_kernel(comb_ker1t)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :subkernel2" % (100*evaluator.evaluate(out,BinaryLabels(testlab))))
Explanation: As we can see, the multiple kernel output seems just about right. Kernel 1 gives a sort of overfitting output while kernel 2 seems not so accurate. The kernel weights are hence adjusted to get a refined output. We can have a look at the errors made by these subkernels for more food for thought. Most of the time the MKL error is lower, as it incorporates aspects of both kernels. One of them is strict while the other is lenient; MKL finds a balance between the two.
End of explanation
def circle(x, radius, neg):
y=sqrt(square(radius)-square(x))
if neg:
return[x, -y]
else:
return [x,y]
def get_circle(radius):
neg=False
range0=linspace(-radius,radius,100)
pos_a=array([circle(i, radius, neg) for i in range0]).T
neg=True
neg_a=array([circle(i, radius, neg) for i in range0]).T
c=concatenate((neg_a,pos_a), axis=1)
return c
def get_data(r1, r2):
c1=get_circle(r1)
c2=get_circle(r2)
c=concatenate((c1, c2), axis=1)
feats_tr=features(c)
return c, feats_tr
l=concatenate((-ones(200),ones(200)))
lab=BinaryLabels(l)
#get two circles with radius 2 and 4
c, feats_tr=get_data(2,4)
c1, feats_tr1=get_data(2,3)
_=gray()
figure(figsize=(10,5))
subplot(121)
title("Circles with different separation")
p=scatter(c[0,:], c[1,:], c=lab)
subplot(122)
q=scatter(c1[0,:], c1[1,:], c=lab)
Explanation: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundary of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
End of explanation
def train_mkl(circles, feats_tr):
#Four kernels with different widths
kernel0=sg.kernel("GaussianKernel", log_width=np.log(1))
kernel1=sg.kernel("GaussianKernel", log_width=np.log(5))
kernel2=sg.kernel("GaussianKernel", log_width=np.log(7))
kernel3=sg.kernel("GaussianKernel", log_width=np.log(10))
kernel = CombinedKernel()
kernel.append_kernel(kernel0)
kernel.append_kernel(kernel1)
kernel.append_kernel(kernel2)
kernel.append_kernel(kernel3)
kernel.init(feats_tr, feats_tr)
mkl = MKLClassification()
mkl.set_mkl_norm(1)
mkl.set_C(1, 1)
mkl.set_kernel(kernel)
mkl.set_labels(lab)
mkl.train()
w=kernel.get_subkernel_weights()
return w, mkl
def test_mkl(mkl, grid):
kernel0t=sg.kernel("GaussianKernel", log_width=np.log(1))
kernel1t=sg.kernel("GaussianKernel", log_width=np.log(5))
kernel2t=sg.kernel("GaussianKernel", log_width=np.log(7))
kernel3t=sg.kernel("GaussianKernel", log_width=np.log(10))
kernelt = CombinedKernel()
kernelt.append_kernel(kernel0t)
kernelt.append_kernel(kernel1t)
kernelt.append_kernel(kernel2t)
kernelt.append_kernel(kernel3t)
kernelt.init(feats_tr, grid)
mkl.set_kernel(kernelt)
out=mkl.apply()
return out
size=50
x1=linspace(-10, 10, size)
x2=linspace(-10, 10, size)
x, y=meshgrid(x1, x2)
grid=features(array((ravel(x), ravel(y))))
w, mkl=train_mkl(c, feats_tr)
print(w)
out=test_mkl(mkl,grid)
z=out.get_values().reshape((size, size))
figure(figsize=(5,5))
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
title('classification with constant separation')
_=colorbar(c)
Explanation: These are the type of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
End of explanation
range1=linspace(5.5,7.5,50)
x=linspace(1.5,3.5,50)
temp=[]
for i in range1:
#vary separation between circles
c, feats=get_data(4,i)
w, mkl=train_mkl(c, feats)
temp.append(w)
y=array([temp[i] for i in range(0,50)]).T
figure(figsize=(20,5))
_=plot(x, y[0,:], color='k', linewidth=2)
_=plot(x, y[1,:], color='r', linewidth=2)
_=plot(x, y[2,:], color='g', linewidth=2)
_=plot(x, y[3,:], color='y', linewidth=2)
title("Comparison between kernel widths and weights")
ylabel("Weight")
xlabel("Distance between circles")
_=legend(["1","5","7","10"])
Explanation: As we can see the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights.The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels with different widths (1,5,7,10).
End of explanation
from scipy.io import loadmat, savemat
from os import path, sep
mat = loadmat(sep.join(['..','..','..','data','multiclass', 'usps.mat']))
Xall = mat['data']
Yall = array(mat['label'].squeeze(), dtype=double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
#get first 1000 examples
Xtrain = Xall[:, subset[:1000]]
Ytrain = Yall[subset[:1000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
Explanation: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The course of the kernel weightings reflects the development of the learning problem: as long as the problem is difficult, the best separation is obtained with the smallest-width kernel. The low-width kernel loses importance as the distance between the circles increases, and larger kernel widths obtain a larger weight in MKL. As the distance between the circles increases, kernels with greater widths are used.
Multiclass classification using MKL
MKL can be used for multiclass classification using the MKLMulticlass class. It is based on the GMNPSVM Multiclass SVM. Its termination criterion is set by set_mkl_epsilon(float64_t eps ) and the maximal number of MKL iterations is set by set_max_num_mkliters(int32_t maxnum). The epsilon termination criterion is the L2 norm between the current MKL weights and their counterpart from the previous iteration. We set it to 0.001 as we want pretty accurate weights.
To see this in action let us compare it to the normal GMNPSVM example as in the KNN notebook, just to see how MKL fares in object recognition. We use the USPS digit recognition dataset.
End of explanation
def plot_example(dat, lab):
for i in range(5):
ax=subplot(1,5,i+1)
title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=figure(figsize=(17,6))
gray()
plot_example(Xtrain, Ytrain)
Explanation: Let's plot five of the examples to get a feel of the dataset.
End of explanation
# MKL training and output
labels = MulticlassLabels(Ytrain)
feats = features(Xtrain)
#get test data from 5500 onwards
Xrem=Xall[:,subset[5500:]]
Yrem=Yall[subset[5500:]]
#test features not used in training
feats_rem=features(Xrem)
labels_rem=MulticlassLabels(Yrem)
kernel = CombinedKernel()
feats_train = CombinedFeatures()
feats_test = CombinedFeatures()
#append gaussian kernel
subkernel = sg.kernel("GaussianKernel", log_width=np.log(15))
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_rem)
kernel.append_kernel(subkernel)
#append PolyKernel
feats = features(Xtrain)
subkernel = sg.kernel('PolyKernel', degree=10, c=2)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_rem)
kernel.append_kernel(subkernel)
kernel.init(feats_train, feats_train)
mkl = MKLMulticlass(1.2, kernel, labels)
mkl.set_epsilon(1e-2)
mkl.set_mkl_epsilon(0.001)
mkl.set_mkl_norm(1)
mkl.train()
#initialize with test features
kernel.init(feats_train, feats_test)
out = mkl.apply()
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
w=kernel.get_subkernel_weights()
print(w)
# Single kernel:PolyKernel
C=1
pk = sg.kernel('PolyKernel', degree=10, c=2)
svm=GMNPSVM(C, pk, labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
#Single Kernel:Gaussian kernel
width=15
C=1
gk=sg.kernel("GaussianKernel", log_width=np.log(width))
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
Explanation: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration, but we can see here how MKL is working behind the scenes. What we have is two kernels with significantly different properties. The gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data, but it requires enough data to train on. The number of training examples here is 1000, which seems a bit low, as the total number of examples is 10000. We hope the polynomial kernel can counter this problem, since it will fit the polynomial using a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
End of explanation
X = -0.3 * random.randn(100,2)
traindata=r_[X + 2, X - 2].T
X = -0.3 * random.randn(20, 2)
testdata = r_[X + 2, X - 2].T
trainlab=concatenate((ones(99),-ones(1)))
#convert to shogun features and generate labels for data
feats=features(traindata)
labels=BinaryLabels(trainlab)
xx, yy = meshgrid(linspace(-5, 5, 500), linspace(-5, 5, 500))
grid=features(array((ravel(xx), ravel(yy))))
#test features
feats_t=features(testdata)
x_out=(random.uniform(low=-4, high=4, size=(20, 2))).T
feats_out=features(x_out)
kernel=CombinedKernel()
feats_train=CombinedFeatures()
feats_test=CombinedFeatures()
feats_test_out=CombinedFeatures()
feats_grid=CombinedFeatures()
#append gaussian kernel
subkernel=sg.kernel("GaussianKernel", log_width=np.log(8))
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_t)
feats_test_out.append_feature_obj(feats_out)
feats_grid.append_feature_obj(grid)
kernel.append_kernel(subkernel)
#append PolyKernel
feats = features(traindata)
subkernel = sg.kernel('PolyKernel', degree=10, c=3)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_t)
feats_test_out.append_feature_obj(feats_out)
feats_grid.append_feature_obj(grid)
kernel.append_kernel(subkernel)
kernel.init(feats_train, feats_train)
mkl = MKLOneClass()
mkl.set_kernel(kernel)
mkl.set_labels(labels)
mkl.set_interleaved_optimization_enabled(False)
mkl.set_epsilon(1e-2)
mkl.put('mkl_epsilon', 0.1)
mkl.set_mkl_norm(1)
Explanation: The misclassified examples are surely pretty tough to predict. As seen from the accuracy MKL seems to work a shade better in the case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using MKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
End of explanation
mkl.train()
print("Weights:")
w=kernel.get_subkernel_weights()
print(w)
#initialize with test features
kernel.init(feats_train, feats_test)
normal_out = mkl.apply()
#test on abnormally generated data
kernel.init(feats_train, feats_test_out)
abnormal_out = mkl.apply()
#test on X-Y grid
kernel.init(feats_train, feats_grid)
grid_out=mkl.apply()
z=grid_out.get_values().reshape((500,500))
z_lab=grid_out.get_labels().reshape((500,500))
a=abnormal_out.get_labels()
n=normal_out.get_labels()
#check for normal and abnormal classified data
idx=where(normal_out.get_labels() != 1)[0]
abnormal=testdata[:,idx]
idx=where(normal_out.get_labels() == 1)[0]
normal=testdata[:,idx]
figure(figsize(15,6))
pl =subplot(121)
title("One-class classification using MKL")
_=pink()
c=pcolor(xx, yy, z)
_=contour(xx, yy, z_lab, linewidths=1, colors='black', hold=True)
_=colorbar(c)
p1=pl.scatter(traindata[0, :], traindata[1,:], cmap=gray(), s=100)
p2=pl.scatter(normal[0,:], normal[1,:], c="red", s=100)
p3=pl.scatter(abnormal[0,:], abnormal[1,:], c="blue", s=100)
p4=pl.scatter(x_out[0,:], x_out[1,:], c=a, cmap=jet(), s=100)
_=pl.legend((p1, p2, p3), ["Training samples", "normal samples", "abnormal samples"], loc=2)
subplot(122)
c=pcolor(xx, yy, z)
title("One-class classification output")
_=gray()
_=contour(xx, yy, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
Explanation: Now that everything is initialized, let's see MKLOneclass in action by applying it on the test data and on the X-Y grid.
End of explanation |
10,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - TD noté, 27 novembre 2012 (éléments de code pour le coloriage)
Coloriage d'une image, dessin d'une spirale avec matplotlib
Step1: construction de la spirale
On utilise une représentation paramétrique de la spirale
Step2: dessin de la spirale
Step3: ajouter du rouge | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - TD noté, 27 novembre 2012 (éléments de code pour le coloriage)
Coloriage d'une image, dessin d'une spirale avec matplotlib : éléments de code données avec l'énoncé.
End of explanation
import math
# cette fonction construit deux spirales imbriquées dans une matrice nb x nb
# le résultat est retourné sous forme de liste de listes
def construit_matrice (nb) :
mat = [ [ 0 for x in range (0,nb) ] for y in range(0,nb) ]
def pointij (nb,r,th,mat,c,phase) :
i,j = r*th * math.cos(th+phase), r*th*math.sin(th+phase)
i,j = int(i*100/nb), int(j*100/nb)
i,j = (i+nb)//2, (j+nb)//2
if 0 <= i < nb and 0 <= j < nb :
mat[i][j] = c
return i,j
r = 3.5
t = 0
for tinc in range (nb*100000) :
t += 1.0 * nb / 100000
th = t * math.pi * 2
i,j = pointij (nb,r,th,mat,1, 0)
i,j = pointij (nb,r,th,mat,1, math.pi)
if i >= nb and j >= nb : break
return mat
matrice = construit_matrice(100)
Explanation: construction de la spirale
On utilise une représentation paramétrique de la spirale : spirale.
End of explanation
import matplotlib.pyplot as plt
def dessin_matrice (matrice) :
f, ax = plt.subplots()
ax.set_ylim([0, len(matrice[0])])
ax.set_xlim([0, len(matrice)])
colors = { 1: "blue", 2:"red" }
for i in range(0,len(matrice)) :
for j in range (0, len(matrice[i])) :
if matrice [i][j] in colors :
ax.plot ([i-0.5,i-0.5,i+0.5,i+0.5,i-0.5,i+0.5,i-0.5,i+0.5],
[j-0.5,j+0.5,j+0.5,j-0.5,j-0.5,j+0.5,j+0.5,j-0.5],
colors [ matrice[i][j] ])
return ax
dessin_matrice(matrice)
Explanation: dessin de la spirale
End of explanation
n = len(matrice)
for i in range(0, n):
matrice[i][min(n-1, abs(n - 2*i))] = 2
dessin_matrice(matrice)
Explanation: ajouter du rouge
End of explanation |
10,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/JHI_STRAP_Web.png" style="width
Step1: After executing the code cell, you should see a table of values. The table has columns named gene1 and gene2, and rows that are indexed starting at zero (it is typical in many programming languages to start counting at zero).
<a id="summary"></a>
3. Summary statistics
<p></p>
<div class="alert-success">
<b>It can be very useful to get a quantitative overview of a new dataset by looking at some *bulk statistics*
Step2: <a id="correlations"></a>
4. Correlations
<p></p>
<div class="alert-success">
<b>We have decided that, as a way of identifying potentially coregulated genes, we will look for *correlated expression* between `gene1` and `gene2`</b>
</div>
The dataframe provides another method that reports the (Pearson) correlation coefficient between the columns of the dataset
Step3: You can now make a quantitative estimate of whether these two genes are likely to be coregulated.
<img src="images/exercise.png" style="width | Python Code:
# Define your group, for this exercise
mygroup = "A" # <- change the letter in quotes
# Import Python libraries
import os # This lets us interact with the operating system
import pandas as pd # This allows us to use dataframes
import seaborn as sns # This gives us pretty graphics options
# Load the data
datafile = os.path.join('data', 'correlations', mygroup, 'expn.tab')
data = pd.read_csv(datafile, sep="\t")
# Show the first few lines of the data
data.head()
Explanation: <img src="images/JHI_STRAP_Web.png" style="width: 150px; float: right;">
01a - Correlations (15min)
Table of Contents
Biological Motivation
Dataset
Summary Statistics
Correlations
Visualising Data
Comments
<a id="motivation"></a>
1. Biological motivation
<p></p>
<div class="alert-info">
You have been given a dataset of transcript levels (e.g. RNAseq, microarray, qRT-PCR) for two genes in your organism of interest. These transcript levels have been measured over 11 timepoints. You would like to know whether those genes are coregulated or not.
</div>
<img src="images/exercise.png" style="width: 50px; float: left;">
QUESTION: (2min)
<p></p>
<div class="alert-danger">
<b>
How can you determine whether two genes are coregulated from transcript data?
<br></br><br></br>
What is the distinction between coregulation and <i>correlated expression</i>?
</b>
</div>
<a id="dataset"></a>
2. Dataset
<p></p>
<div class="alert-success">
<b>The `Code` cell below contains Python code that will load your dataset.</b>
</div>
You will have been assigned a letter: A, B, C, or D as part of the workshop. Please enter this letter in the first line of Python code, so that you are working with the appropriate dataset:
```python
Define your group, for this exercise
mygroup = "A" # <- change the letter in quotes
```
and then execute the cell with Ctrl-Enter or Shift-Enter. This will load the exercise data into a variable called data.
End of explanation
# Show summary statistics of the dataframe
Explanation: After executing the code cell, you should see a table of values. The table has columns named gene1 and gene2, and rows that are indexed starting at zero (it is typical in many programming languages to start counting at zero).
<a id="summary"></a>
3. Summary statistics
<p></p>
<div class="alert-success">
<b>It can be very useful to get a quantitative overview of a new dataset by looking at some *bulk statistics*: the dataset's *mean*, *median*, *variance*, *standard deviation*, and minimum and maximum values.</b>
</div>
The data you loaded is in a dataframe (this behaves very much like dataframes in the R language), and you can obtain summary statistics quite readily using the .describe() method.
```python
Show summary statistics of the dataframe
data.describe()
```
<p></p>
<div class="alert-danger">
<b>Use the `.describe()` method to obtain summary statistics for your data in the cell below</b>
</div>
End of explanation
# Show the Pearson correlation coefficients between columns in the dataset
Explanation: <a id="correlations"></a>
4. Correlations
<p></p>
<div class="alert-success">
<b>We have decided that, as a way of identifying potentially coregulated genes, we will look for *correlated expression* between `gene1` and `gene2`</b>
</div>
The dataframe provides another method that reports the (Pearson) correlation coefficient between the columns of the dataset:
```python
Show the Pearson correlation coefficients between columns in the dataset
data.corr()
```
<p></p>
<div class="alert-danger">
<b>Use the `.corr()` method to obtain summary statistics for your data in the cell below</b>
</div>
End of explanation
# The line below allows plots to be rendered in the notebook
# This is very useful for literate programming, and for producing reports
%matplotlib inline
# Show a scatter plot of transcript levels for gene1 and gene2
Explanation: You can now make a quantitative estimate of whether these two genes are likely to be coregulated.
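For intuition, the Pearson coefficient that `.corr()` reports can be computed by hand; this sketch assumes the `data` frame loaded earlier in the notebook:
import numpy as np
x = data["gene1"].to_numpy()
y = data["gene2"].to_numpy()
r = ((x - x.mean()) * (y - y.mean())).mean() / (x.std() * y.std())   # same value data.corr() reports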
<img src="images/exercise.png" style="width: 50px; float: left;">
QUESTION: (2min)
<p></p>
<div class="alert-danger">
<b>How strong do you think the evidence is that these two genes are coregulated?</b>
</div>
<a id="visualising"></a>
5. Visualising Data
<p></p>
<div class="alert-success">
<b>In addition to summary statistics, it is always useful to *visualise* your data, to inspect it for patterns and potential outliers, and to see whether it makes intuitive sense.</b>
</div>
The dataframe provides a group of methods that allow us to plot the data for gene1 and gene2 in various ways. You will use the .plot.scatter() method in the cell below to visualise the way in which their transcript levels vary together.
```python
Show a scatter plot of transcript levels for gene1 and gene2
data.plot.scatter('gene1', 'gene2');
```
<p></p>
<div class="alert-danger">
<b>Use the `.plot.scatter()` method to visualise your data in the cell below</b>
</div>
End of explanation |
10,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural network "playground"
imports
Step1: Caffe computation mode
CPU
Step2: GPU
Make sure you enabled GPU suppor, and have a compatible (ie. nvidia) GPU
Step3: Network loading and tests
Step4: The cell bellows checks that opencv and its python bindings are properly installed
Step6: Face detection in image
This block performs the actual processing of an image (specified in the code btw), and checks for faces in it
Step7: Redraw
Allows you to define a new threshold (provided it's more aggressive than the original), without calculating everything.
Step8: Train dataset appender | Python Code:
import numpy as np
from cStringIO import StringIO
import matplotlib.pyplot as plt
import caffe
from IPython.display import clear_output, Image, display
import cv2
import PIL.Image
import os
os.chdir("start_deep/")
Explanation: Neural network "playground"
imports
End of explanation
caffe.set_mode_cpu()
Explanation: Caffe computation mode
CPU
End of explanation
caffe.set_device(0)
caffe.set_mode_gpu()
Explanation: GPU
Make sure you have enabled GPU support and have a compatible (i.e. nvidia) GPU
End of explanation
net = caffe.Net('deploy.prototxt', "facenet_iter_200000.caffemodel", caffe.TEST)
#training = caffe.Net('facenet_train_test.prototxt', "facenet_iter_200000.caffemodel", caffe.TRAIN)
#solver = caffe.SGDSolver('facenet_solver.prototxt')
#test_net = solver.testnets[0]
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
#im = np.array(PIL.Image.open('train_images/0/137021_102_88_72_72.pgm'))/256.0
im = np.array(PIL.Image.open('train_images/1/image000619.pgm'))/256.0
im_input = im[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = im
showarray(im)
print (len(im))
print (len(im[0]))
print (im)
output = net.forward()
print(output)
if output['prob'][0][0] >0.9:
print "visage"
Explanation: Network loading and tests
End of explanation
print cv2.__version__
Explanation: The cell below checks that opencv and its python bindings are properly installed
End of explanation
orig_scale = 4
def process_chunk(chunk):
im_input = chunk[np.newaxis, np.newaxis, :, :]
net.blobs['data'].reshape(*im_input.shape)
net.blobs['data'].data[...] = imtmp
return net.forward()['prob'][0][1]
def get_face_prob(chunk):
return process_chunk(chunk)
def add_outline(img, i, j, scale, confidence):
color = (255,255,255)
if confidence > 0.985:
color = (0,255,0)
elif confidence >0.96:
color = (255,255,0)
else:
color =(255,0,0)
for x in range (int(i*scale), int((i+36)*scale)):
img[x][int(j*scale)]=color
img[x][int((j+36)*scale-1)]=color
for y in range(int(j*scale), int((j+36)*scale)):
img[int(i*scale)][y]=color
img[int((i+36)*scale-1)][y]=color
return img
img = PIL.Image.open('test_vis4.jpg')
w, h = img.size
img = img.resize((int(w/orig_scale),int(h/orig_scale)), PIL.Image.NEAREST)
imbase = img
img = img.convert('L')
w, h = img.size
scale = 1
print "starting processing"
found_pairs = []
next_scale = 1.3
next_i = 4
while w >=36*2 and h >36*2:
img = img.resize((int(w/next_scale),int(h/next_scale)), PIL.Image.NEAREST)
w, h = img.size
print img.size
im = np.array(img)
scale*=next_scale
i = 0
j = 0
last_result = 0
while i < int(h-36):
next_i = 4
while j < int(w-36):
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
face_prob = get_face_prob(imtmp)
if face_prob > 0.5:
next_i = 2
while j < int(w-36) and face_prob > last_result:
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
last_result = face_prob
face_prob = get_face_prob(imtmp)
j += 1
if last_result > 0.92:
next_i = 1
matched = False
print last_result
print "visage trouvé @ %i , %i"%(i, j)
showarray(imtmp*255)
for pair in found_pairs[:]: # copy the list to remove while working in it
if (abs(pair[0]*pair[3] - i*scale) + abs(pair[1]*pair[3] - j*scale)) < 20*scale :
matched = True
if pair[2] < last_result:
found_pairs.remove(pair)
found_pairs.append((i, j, last_result, scale))
if not matched:
found_pairs.append((i, j, last_result, scale))
j+=36
last_result = 0
j+= 4
i+=next_i
j = 0
print "adding overlay"
for pair in found_pairs:
imbase = add_outline(np.array(imbase),pair[0],pair[1],pair[3], pair[2])
showarray(imbase)
Explanation: Face detection in image
This block performs the actual processing of an image (specified in the code btw), and checks for faces in it
End of explanation
threshold = 0.96
img = PIL.Image.open('test_vis4.jpg')
w, h = img.size
img = img.resize((int(w/orig_scale),int(h/orig_scale)), PIL.Image.NEAREST)
imbase = img
print "adding overlay"
for pair in found_pairs:
if pair[2]>threshold:
imbase = add_outline(np.array(imbase),pair[0],pair[1],pair[3], pair[2])
showarray(imbase)
img = PIL.Image.open('test_vis4.jpg').convert('LA')
arr = np.array(img)
w, h = img.size
print w
a
Explanation: Redraw
Allows you to define a new threshold (provided it's more aggressive than the original), without calculating everything.
End of explanation
img = PIL.Image.open('neg1.jpg')
w, h = img.size
#img = img.resize((int(w/4),int(h/4)), PIL.Image.NEAREST)
imbase = img
img = img.convert('L')
w, h = img.size
scale = 1
width = 36
height = 36
import array
findex = 4327
with open('posneg.txt', 'a') as f:
print "starting processing"
while w >=36*2 and h >36*2:
img = img.resize((int(w/1.3),int(h/1.3)), PIL.Image.NEAREST)
w, h = img.size
print img.size
im = np.array(img)
scale*=1.3
found_pairs = []
i = 0
j = 0
last_result = 0
while i < int(h-36):
while j < int(w-36):
imtmp = np.array(im [i:i+36, j:j+36]/256.0)
face_prob = get_face_prob(imtmp)
if face_prob > 0.9:
showarray(imtmp*255)
buff=array.array('B')
for k in range(0, 36):
for l in range(0, 36):
buff.append(int(imtmp[k][l]*255))
findex += 1
# open file for writing
filename = '0/lbe%i.pgm'%findex
try:
fout=open("train_images/"+filename, 'wb')
except IOError, er:
print "Cannot open file ", filename, "Exiting … \n", er
# define PGM Header
pgmHeader = 'P5' + '\n' + str(width) + ' ' + str(height) + ' ' + str(255) + '\n'
# write the header to the file
fout.write(pgmHeader)
# write the data to the file
buff.tofile(fout)
# close the file
fout.close()
f.write(filename + " 0\n")
j+=4
i+=4
j = 0
print "adding overlay"
for pair in found_pairs:
imbase = add_outline(np.array(imbase),pair[0],pair[1],scale, pair[2])
showarray(imbase)
print findex
Explanation: Train dataset appender
End of explanation |
10,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TfTransform #
Learning Objectives
1. Preproccess data and engineer new features using TfTransform
1. Create and deploy Apache Beam pipeline
1. Use processed data to train taxifare model locally then serve a prediction
Overview
While Pandas is fine for experimenting, for operationalization of your workflow it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam allows for streaming. In this lab we will pull data from BigQuery then use Apache Beam TfTransform to process the data.
Only specific combinations of TensorFlow/Beam are supported by tf.transform so make sure to get a combo that works. In this lab we will be using
Step1: NOTE
Step2: <b>Restart the kernel</b> (click on the reload button above).
Step8: Input source
Step9: Let's pull this query down into a Pandas DataFrame and take a look at some of the statistics.
Step13: Create ML dataset using tf.transform and Dataflow
Let's use Cloud Dataflow to read in the BigQuery data and write it out as TFRecord files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
transformed_data is type pcollection.
Step14: This will take 10-15 minutes. You cannot go on in this lab until your DataFlow job has succesfully completed.
Step15: Train off preprocessed data
Now that we have our data ready and verified it is in the correct location we can train our taxifare model locally.
Step16: Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict | Python Code:
!pip install --user apache-beam[gcp]==2.16.0
!pip install --user tensorflow-transform==0.15.0
Explanation: TfTransform #
Learning Objectives
1. Preproccess data and engineer new features using TfTransform
1. Create and deploy Apache Beam pipeline
1. Use processed data to train taxifare model locally then serve a prediction
Overview
While Pandas is fine for experimenting, for operationalization of your workflow it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam allows for streaming. In this lab we will pull data from BigQuery then use Apache Beam TfTransform to process the data.
Only specific combinations of TensorFlow/Beam are supported by tf.transform so make sure to get a combo that works. In this lab we will be using:
* TFT 0.15.0
* TF 2.0
* Apache Beam [GCP] 2.16.0
End of explanation
!pip download tensorflow-transform==0.15.0 --no-deps
Explanation: NOTE: You may ignore specific incompatibility errors and warnings. These components and issues do not impact your ability to complete the lab.
Download .whl file for tensorflow-transform. We will pass this file to Beam Pipeline Options so it is installed on the DataFlow workers
End of explanation
%%bash
pip freeze | grep -e 'flow\|beam'
import shutil
import tensorflow as tf
import tensorflow_transform as tft
print(tf.__version__)
import os
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: <b>Restart the kernel</b> (click on the reload button above).
End of explanation
from google.cloud import bigquery
def create_query(phase, EVERY_N):
    """Creates a query with the proper splits.

    Args:
        phase: int, 1=train, 2=valid.
        EVERY_N: int, take an example EVERY_N rows.

    Returns:
        Query string with the proper splits.
    """
    base_query = """
    WITH daynames AS
        (SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
        EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        dropoff_longitude AS dropofflon,
        dropoff_latitude AS dropofflat,
        passenger_count AS passengers,
        'notneeded' AS key
    FROM
        `nyc-tlc.yellow.trips`, daynames
    WHERE
        trip_distance > 0 AND fare_amount > 0
    """

    if EVERY_N is None:
        if phase < 2:
            # training
            query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
            pickup_datetime AS STRING)), 4)) < 2""".format(base_query)
        else:
            query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
            pickup_datetime AS STRING)), 4)) = {}""".format(base_query, phase)
    else:
        query = """{} AND ABS(MOD(FARM_FINGERPRINT(CAST(
        pickup_datetime AS STRING)), {})) = {}""".format(base_query, EVERY_N, phase)

    return query
query = create_query(2, 100000)
Explanation: Input source: BigQuery
Get data from BigQuery but defer the majority of filtering etc. to Beam.
Note that the dayofweek column is now strings.
End of explanation
df_valid = bigquery.Client().query(query).to_dataframe()
display(df_valid.head())
df_valid.describe()
Explanation: Let's pull this query down into a Pandas DataFrame and take a look at some of the statistics.
End of explanation
import datetime
import apache_beam as beam
import tensorflow as tf
import tensorflow_metadata as tfmd
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def is_valid(inputs):
    """Check to make sure the inputs are valid.

    Args:
        inputs: dict, dictionary of TableRow data from BigQuery.

    Returns:
        True if the inputs are valid and False if they are not.
    """
try:
pickup_longitude = inputs["pickuplon"]
dropoff_longitude = inputs["dropofflon"]
pickup_latitude = inputs["pickuplat"]
dropoff_latitude = inputs["dropofflat"]
hourofday = inputs["hourofday"]
dayofweek = inputs["dayofweek"]
passenger_count = inputs["passengers"]
fare_amount = inputs["fare_amount"]
return (
fare_amount >= 2.5
and pickup_longitude > -78
and pickup_longitude < -70
and dropoff_longitude > -78
and dropoff_longitude < -70
and pickup_latitude > 37
and pickup_latitude < 45
and dropoff_latitude > 37
and dropoff_latitude < 45
and passenger_count > 0
)
except:
return False
def preprocess_tft(inputs):
    """Preprocess the features and add engineered features with tf transform.

    Args:
        inputs: dict, dictionary of TableRow data from BigQuery.

    Returns:
        Dictionary of preprocessed data after scaling and feature engineering.
    """
import datetime
print(inputs)
result = {}
result["fare_amount"] = tf.identity(inputs["fare_amount"])
# build a vocabulary
result["dayofweek"] = tft.string_to_int(inputs["dayofweek"])
result["hourofday"] = tf.identity(inputs["hourofday"]) # pass through
# scaling numeric values
result["pickuplon"] = tft.scale_to_0_1(inputs["pickuplon"])
result["pickuplat"] = tft.scale_to_0_1(inputs["pickuplat"])
result["dropofflon"] = tft.scale_to_0_1(inputs["dropofflon"])
result["dropofflat"] = tft.scale_to_0_1(inputs["dropofflat"])
result["passengers"] = tf.cast(inputs["passengers"], tf.float32) # a cast
# arbitrary TF func
result["key"] = tf.as_string(tf.ones_like(inputs["passengers"]))
# engineered features
latdiff = inputs["pickuplat"] - inputs["dropofflat"]
londiff = inputs["pickuplon"] - inputs["dropofflon"]
# Scale our engineered features latdiff and londiff between 0 and 1
result["latdiff"] = tft.scale_to_0_1(latdiff)
result["londiff"] = tft.scale_to_0_1(londiff)
dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
result["euclidean"] = tft.scale_to_0_1(dist)
return result
def preprocess(in_test_mode):
    """Sets up preprocess pipeline.

    Args:
        in_test_mode: bool, False to launch DataFlow job, True to run locally.
    """
import os
import os.path
import tempfile
from apache_beam.io import tfrecordio
from tensorflow_transform.beam import tft_beam_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import (
dataset_metadata,
dataset_schema,
)
job_name = "preprocess-taxi-features" + "-"
job_name += datetime.datetime.now().strftime("%y%m%d-%H%M%S")
if in_test_mode:
import shutil
print("Launching local job ... hang on")
OUTPUT_DIR = "./preproc_tft"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EVERY_N = 100000
else:
print(f"Launching Dataflow job {job_name} ... hang on")
OUTPUT_DIR = f"gs://{BUCKET}/taxifare/preproc_tft/"
import subprocess
subprocess.call(f"gsutil rm -r {OUTPUT_DIR}".split())
EVERY_N = 10000
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": job_name,
"project": PROJECT,
"num_workers": 1,
"max_num_workers": 1,
"teardown_policy": "TEARDOWN_ALWAYS",
"no_save_main_session": True,
"direct_num_workers": 1,
"extra_packages": ["tensorflow-transform-0.15.0.tar.gz"],
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = "DirectRunner"
else:
RUNNER = "DataflowRunner"
# Set up raw data metadata
raw_data_schema = {
colname: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "dayofweek,key".split(",")
}
raw_data_schema.update(
{
colname: dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "fare_amount,pickuplon,pickuplat,dropofflon,dropofflat".split(
","
)
}
)
raw_data_schema.update(
{
colname: dataset_schema.ColumnSchema(
tf.int64, [], dataset_schema.FixedColumnRepresentation()
)
for colname in "hourofday,passengers".split(",")
}
)
raw_data_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.Schema(raw_data_schema)
)
# Run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, "tmp")):
# Save the raw data metadata
(
raw_data_metadata
| "WriteInputMetadata"
>> tft_beam_io.WriteMetadata(
os.path.join(OUTPUT_DIR, "metadata/rawdata_metadata"),
pipeline=p,
)
)
# Read training data from bigquery and filter rows
raw_data = (
p
| "train_read"
>> beam.io.Read(
beam.io.BigQuerySource(
query=create_query(1, EVERY_N), use_standard_sql=True
)
)
| "train_filter" >> beam.Filter(is_valid)
)
raw_dataset = (raw_data, raw_data_metadata)
# Analyze and transform training data
(
transformed_dataset,
transform_fn,
) = raw_dataset | beam_impl.AnalyzeAndTransformDataset(
preprocess_tft
)
transformed_data, transformed_metadata = transformed_dataset
# Save transformed train data to disk in efficient tfrecord format
transformed_data | "WriteTrainData" >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, "train"),
file_name_suffix=".gz",
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema
),
)
# Read eval data from bigquery and filter rows
raw_test_data = (
p
| "eval_read"
>> beam.io.Read(
beam.io.BigQuerySource(
query=create_query(2, EVERY_N), use_standard_sql=True
)
)
| "eval_filter" >> beam.Filter(is_valid)
)
raw_test_dataset = (raw_test_data, raw_data_metadata)
# Transform eval data
transformed_test_dataset = (
raw_test_dataset,
transform_fn,
) | beam_impl.TransformDataset()
transformed_test_data, _ = transformed_test_dataset
# Save transformed train data to disk in efficient tfrecord format
(
transformed_test_data
| "WriteTestData"
>> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, "eval"),
file_name_suffix=".gz",
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema
),
)
)
# Save transformation function to disk for use at serving time
(
transform_fn
| "WriteTransformFn"
>> transform_fn_io.WriteTransformFn(
os.path.join(OUTPUT_DIR, "metadata")
)
)
# Change to True to run locally
preprocess(in_test_mode=False)
Explanation: Create ML dataset using tf.transform and Dataflow
Let's use Cloud Dataflow to read in the BigQuery data and write it out as TFRecord files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
transformed_data is type pcollection.
End of explanation
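# A sketch of how the artifacts written above can be consumed later: tf.transform
# saves the transformed schema next to the data, so a tf.data pipeline can parse
# the gzipped TFRecords without re-declaring feature specs by hand. Paths assume
# the OUTPUT_DIR used above (gs://BUCKET/taxifare/preproc_tft); adjust as needed.
import tensorflow as tf
import tensorflow_transform as tft

def make_transformed_dataset(file_pattern, tft_output_dir, batch_size=64):
    # tft_output_dir is the "metadata" directory written by WriteTransformFn.
    tft_output = tft.TFTransformOutput(tft_output_dir)
    feature_spec = tft_output.transformed_feature_spec()
    files = tf.data.Dataset.list_files(file_pattern)
    records = files.interleave(
        lambda f: tf.data.TFRecordDataset(f, compression_type="GZIP"),
        cycle_length=4)
    return records.batch(batch_size).map(
        lambda batch: tf.io.parse_example(batch, feature_spec))

# Example:
# ds = make_transformed_dataset(
#     f"gs://{BUCKET}/taxifare/preproc_tft/train*",
#     f"gs://{BUCKET}/taxifare/preproc_tft/metadata")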
%%bash
# ls preproc_tft
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
Explanation: This will take 10-15 minutes. You cannot go on in this lab until your DataFlow job has successfully completed.
End of explanation
%%bash
rm -r ./taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD
python3 -m tft_trainer.task \
--train_data_path="gs://${BUCKET}/taxifare/preproc_tft/train*" \
--eval_data_path="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
--output_dir=./taxi_trained \
!ls $PWD/taxi_trained/export/exporter
Explanation: Train off preprocessed data
Now that we have our data ready and verified it is in the correct location we can train our taxifare model locally.
End of explanation
%%writefile /tmp/test.json
{"dayofweek":0, "hourofday":17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2.0}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
--model-dir=./taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
Explanation: Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict
End of explanation |
10,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step10: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step11: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
Step12: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step15: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step16: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step17: Spatial batch normalization
Step18: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
End of explanation
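# A rough sketch of the arithmetic conv_forward_naive has to perform (not the
# reference implementation in cs231n/layers.py): zero-pad the input, then slide
# every filter over the padded volume with the given stride.
def conv_forward_naive_sketch(x, w, b, conv_param):
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # each image
        for f in range(F):              # each filter
            for i in range(H_out):      # each output row
                for j in range(W_out):  # each output column
                    window = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out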
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
    """ Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
End of explanation
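# A rough sketch of a naive max-pooling forward pass (not the reference
# implementation): loop over the output grid and take the max of each window.
def max_pool_forward_naive_sketch(x, pool_param):
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    s = pool_param['stride']
    N, C, H, W = x.shape
    H_out, W_out = 1 + (H - ph) // s, 1 + (W - pw) // s
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            out[:, :, i, j] = x[:, :, i*s:i*s+ph, j*s:j*s+pw].max(axis=(2, 3))
    return out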
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
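# The idea behind the fast convolution layer is "im2col": unroll every receptive
# field into a column so the whole convolution becomes one matrix multiply. A
# compact NumPy illustration of that trick (the provided Cython version is far
# faster; this is only to show the reshaping idea):
def conv_forward_im2col_sketch(x, w, b, conv_param):
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    cols = np.zeros((C * HH * WW, N * H_out * W_out))
    col = 0
    for n in range(N):
        for i in range(H_out):
            for j in range(W_out):
                patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                cols[:, col] = patch.reshape(-1)
                col += 1
    out = w.reshape(F, -1).dot(cols) + b.reshape(F, 1)
    return out.reshape(F, N, H_out, W_out).transpose(1, 0, 2, 3)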
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
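# A "sandwich" layer is just function composition plus a tuple cache. Roughly
# (relu_forward/relu_backward come from earlier parts of the assignment in
# cs231n/layers.py; this mirrors, but is not copied from, layer_utils.py):
def conv_relu_forward_sketch(x, w, b, conv_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    out, relu_cache = relu_forward(a)
    return out, (conv_cache, relu_cache)

def conv_relu_backward_sketch(dout, cache):
    conv_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return conv_backward_fast(da, conv_cache)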
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
End of explanation
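# Quick arithmetic check for the claim above: with C = 10 CIFAR-10 classes and
# random weights, the initial softmax loss should be close to log(10).
print(np.log(10))   # ~2.3026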
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.
End of explanation
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
import random
for count in xrange(10):
alpha = 10 ** random.uniform(-5, -2)
lamda = 10 ** random.uniform(-5, -2)
print '\nalpha =', alpha, 'lamda =', lamda
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=lamda)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': alpha,
},
verbose=True, print_every=200)
solver.train()
import cPickle as pickle
pickle.dump(model.params, open("model", "wb"), 1)
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
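# One common implementation strategy (a sketch, assuming batchnorm_forward from
# the earlier batch-normalization notebook is available in cs231n/layers.py):
# fold N, H, W into a single sample axis so each of the C channels is
# normalized over N*H*W values, then reshape back.
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)       # (N*H*W, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache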
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
# Train a really good model on CIFAR-10
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]XN - [affine]XM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation |
10,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I will write a lot about how statistical inference is done in psychological research. I think however that it will be helpful to first point out a few issues which I think are paramount to all my other opinions. In this post I write about the general attitude towards statistics that is imprinted on psychology students in their introductory stats classes.
Toolbox
This is how the statistics works for psychologists.
Formulate hypothesis
Design a study to test it
Collect data
Evaluate data
Draw a conclusion and publish
When we zoom in at point 4 we get following instructions. First, see whether the data are at categorical, ordinal or nominal scale. Second, what design do we have? How many conditions/groups do we have? How many levels does each variable have? Do we have repeated measurements or independent samples? Once we have determined the answers we look at our toolbox and choose the appropriate method. This is what our toolbox looks like.
<img src="/assets/nominaltable.png" width="600">
Then students are told how to perform the analyses in the cells with a statistical software, which values to copy from the software output and how one does report them. In almost all cases p-values are available in the output and are reported along with the test statistic. Since late 90s most software also offers effect size estimates and students are told where to look for them.
Let's go back to the toolbox table. As an example if we measured the performance of two groups (treatment group and control) at three consecutive time points then we have one nominal DV and two IVs
Step1: The data are U.S. Beverage Manufacturer Product Shipments from 1992 to 2006 as provided by www.census.gov. The shipments show rising trend and yearly seasonal fluctuations. We first get rid of the trend to obtain a better look at the seasonal changes.
Step2: The fit of the linear curve is acceptable and we continue with model building. We now subtract the trend from the data to obtain the residuals. These show the remaining patterns in the data that require modeling.
Step3: We add the yearly fluctuations to the model. The rough idea is to use a sinusoid $\alpha sin(2\pi (t+\phi))$ where $\alpha$ is the amplitude and $\phi$ is the shift. Here is a sketch of what we are looking for. (The parameter values were found through trial and error).
Step4: We fit the model. We simplify the fitting process by writing
$$\alpha sin(2\pi (t+\phi))= \alpha cos(2\pi \phi) sin(2\pi t)+\alpha cos(2\pi t) sin(2\pi \phi)= \beta_1 sin(2\pi t)+\beta_2 cos(2\pi t) $$
Step5: The results look already good. We zoom in at the residuals, to see how it can be further improved.
Step6: The residuals are well described by a normal distribution with $\sigma=233$. We can summarize our model as $d=\beta_0+\beta_1 t + \beta_2 sin 2\pi t + \beta_3 cos 2\pi t + \epsilon$ with error $\epsilon \sim \mathcal{N}(0,\sigma)$
The plots below suggest some additional improvements. | Python Code:
%pylab inline
d= np.loadtxt('b5.dat')
t=np.arange(0,d.size)/12.+1992
plt.plot(t,d)
plt.gca().set_xticks(np.arange(0,d.size/12)+1992)
plt.xlabel('year')
plt.ylabel('product shipments');
Explanation: I will write a lot about how statistical inference is done in psychological research. I think however that it will be helpful to first point out a few issues which I think are paramount to all my other opinions. In this post I write about the general attitude towards statistics that is imprinted on psychology students in their introductory stats classes.
Toolbox
This is how the statistics works for psychologists.
Formulate hypothesis
Design a study to test it
Collect data
Evaluate data
Draw a conclusion and publish
When we zoom in at point 4 we get following instructions. First, see whether the data are at categorical, ordinal or nominal scale. Second, what design do we have? How many conditions/groups do we have? How many levels does each variable have? Do we have repeated measurements or independent samples? Once we have determined the answers we look at our toolbox and choose the appropriate method. This is what our toolbox looks like.
<img src="/assets/nominaltable.png" width="600">
Then students are told how to perform the analyses in the cells with a statistical software, which values to copy from the software output and how one does report them. In almost all cases p-values are available in the output and are reported along with the test statistic. Since late 90s most software also offers effect size estimates and students are told where to look for them.
Let's go back to the toolbox table. As an example if we measured the performance of two groups (treatment group and control) at three consecutive time points then we have one nominal DV and two IVs: two independent groups and three repeated measurements. Looking at the table, we see that we have to perform Mixed-design Anova.
In the case where the scale of DV is not nominal the following alternative table is provided. Ordinal scale is assumed.
<img src="/assets/ordinaltable.png" width="600">
Finally, psychologists are also taught $\chi^2$ test which is applied when the DV is dichotomic categorical (i.e. count data).
That's the toolbox approach in a nutshell. It has problems.
Problems with toolbox
First, there is a discrepancy between how it is taught and how it is applied. The procedure sketched above is slavishly obeyed even when it doesn't make any sense. For instance, Anova is used as default model even in context where it is inappropriate (i.e. the assumptions of linearity, normality or heteroskedasticity are not satisfied).
Second, the above mentioned approach is intended as a one-way street. You can go only in one direction from step 1 to 5. This is extremely inflexible. The toolbox approach does not allow for the fitted model to be discarded. The model is fitted and the obtained estimates are reported. The 1972 APA manual captures the toolbox spirit: "Caution: Do not infer trends from data that fail by a small margin to meet the usual levels of significance. Such results are best interpreted as caused by chance and are best reported as such. Treat the result section like an income tax return. Take what's coming to you, but no more"
One may protest that too much flexibility is a bad thing. Obviously, too much rigidity - reporting models that are (in hindsight but nevertheless) incorrect is not the solution.
Third, the toolbox implicitly claims to be exhaustive - it applies as a tool to all research problems. Of course this doesn't happen and as a consequence two cases arise. First, inappropriate models are being fit and reported. We discussed this already in the previous paragraph. Second, the problem is defined in more general terms, such that not all available information (provided by data or prior knowledge) is used. That is, we throw away information so that some tool becomes available, because we would have no tool available if we included the information in the analysis. A good example is non-normal measurements (e.g. skewed response times) which are handled by rank tests listed in the second table. This is done even where it would be perfectly appropriate to fit a parametric model at the nominal scale. For instance we could fit response times with Gamma regression. Unfortunately, Gamma regression didn't make it into the toolbox. At other times structural information is discarded. In behavioral experiments we mostly obtain data with hierarchical structure. We have several subjects and many consecutive trials for each subject and condition. The across-trials variability (due to learning or order effects) can be difficult to analyze with the tools in the toolbox (i.e. time series methods are not in the toolbox). A common strategy is to build a single score for each subject (e.g. average performance across trials) and then to analyze the obtained scores across subjects and conditions.
There is one notable strategy to ensure that you have the appropriate tool in the toolbox. If you can't fit a model to the data, then ensure that your data fit some tool in the toolbox. Psychologists devise experiments with manipulations that map the hypothesis onto few measured variables. Ideally the critical comparison is mapped onto single dimension and can be evaluated with a simple t-test. For example, we test two conditions which are identical except for a single manipulation. In this case we discard all the additional information and structure of the data since this is the same across the two conditions (and we do not expect that it will interact with the manipulation).
Unfortunately, there are other more important considerations which should influence the choice of design than the limits of our toolbox. Ecological validity is more important than convenient analysis. In fact, I think that this is the biggest trouble with the toolbox approach. It not only cripples the analysis, it also cripples the experiment design and in turn the choice of the research question.
Detective Work
Let's now have a look at the detective approach. The most vocal recent advocate has been Andrew Gelman (Shalizi & Gelman, 2013) but the idea goes back to George Box and John Tukey (1969). This approach has been most prevalent in fields that heavily rely on observational data - econometry, sociology and political science. Here, researchers were not able to off-load their problems to experimental design. Instead they had to tackle the problems head on by developing flexible data analysis methods.
While the toolbox approach is a one-way street, the detective approach contains a loop that iterates between model estimation and model checking. The purpose of the model checking part is to see whether the model describes the data appropriately. This can be done for instance by looking at the residuals and whether their distribution does not deviate from the distribution postulated by the model. Another option (the so-called predictive checking) is to generate data from the model and to look whether these are reasonably similar to the actual data. In any case, model checking is informal and doesn't even need to be quantitative. Whether a model is appropriate depends on the purpose of the analysis and which aspects of the data are crucial. Still, model checking is part of the results. It should be transparent and replicable. Even if it is informal there are instances which are rather formal up to the degree that can be written down as an algorithm (e.g. the Box-Jenkins method for analyzing time series). Once an appropriate model has been identified this model is used to estimate the quantities of interest. Often however the (structure of the) model itself is of theoretical interest.
An Application of Detective Approach
I already presented an example of detective approach when I discussed modeling of data with skewed distributions. Here, let's take a look at a non-psychological example which illustrates the logic of the detective approach more clearly. The example is taken from Montgomery, Jennings and Kulahci (2008). The data are available from the companion website.
End of explanation
res=np.linalg.lstsq(np.concatenate(
[np.atleast_2d(np.ones(d.size)),np.atleast_2d(t)]).T,d)
y=res[0][0]+res[0][1]*t
plt.plot(t, y)
plt.plot(t,d)
plt.gca().set_xticks(np.arange(0,d.size/12)+1992);
Explanation: The data are U.S. Beverage Manufacturer Product Shipments from 1992 to 2006 as provided by www.census.gov. The shipments show rising trend and yearly seasonal fluctuations. We first get rid of the trend to obtain a better look at the seasonal changes.
End of explanation
plt.plot(t,d-y)
plt.gca().set_xticks(np.arange(0,d.size/12)+1992);
plt.ylim([-1000,1000]);
plt.ylabel('residuals')
plt.xlabel('year')
plt.figure()
plt.plot(np.mod(range(d.size),12)+1,d-y,'o')
plt.xlim([0.5,12.5])
plt.grid(False,axis='x')
plt.xticks(range(1,13))
plt.ylim([-1000,1000]);
plt.ylabel('residuals')
plt.xlabel('month');
Explanation: The fit of the linear curve is acceptable and we continue with model building. We now subtract the trend from the data to obtain the residuals. These show the remaining patterns in the data that require modeling.
End of explanation
plt.plot(np.mod(range(d.size),12)+1,d-y,'o')
plt.xlim([0.5,12.5])
plt.grid(False,axis='x')
plt.xticks(range(1,13))
plt.ylim([-1000,1000])
plt.ylabel('residuals')
plt.xlabel('month')
tt=np.arange(1,13,0.1)
plt.plot(tt,600*np.sin(2*np.pi*tt/12.+4.3));
Explanation: We add the yearly fluctuations to the model. The rough idea is to use a sinusoid $\alpha sin(2\pi (t+\phi))$ where $\alpha$ is the amplitude and $\phi$ is the shift. Here is a sketch of what we are looking for. (The parameter values were found through trial and error).
End of explanation
x=np.concatenate([np.atleast_2d(np.cos(2*np.pi*t)),
np.atleast_2d(np.sin(2*np.pi*t))]).T
res=np.linalg.lstsq(x,d-y)
plt.plot(t,y+x.dot(res[0]))
plt.plot(t,d,'-')
plt.gca().set_xticks(np.arange(0,d.size/12)+1992);
Explanation: We fit the model. We simplify the fitting process by writing
$$\alpha sin(2\pi (t+\phi))= \alpha cos(2\pi \phi) sin(2\pi t)+\alpha cos(2\pi t) sin(2\pi \phi)= \beta_1 sin(2\pi t)+\beta_2 cos(2\pi t) $$
End of explanation
ynew=y+x.dot(res[0])
from scipy import stats
plt.figure()
plt.hist(d-ynew,15,normed=True);
plt.xlabel('residuals')
print np.std(d-ynew)
r=range(-600,600)
plt.plot(r,stats.norm.pdf(r,0,np.std(d-ynew)))
plt.figure()
stats.probplot(d-ynew,dist='norm',plot=plt);
Explanation: The results look already good. We zoom in at the residuals, to see how it can be further improved.
End of explanation
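# An informal predictive check in the detective spirit: simulate a series from
# the fitted model (trend + seasonal + normal error) and overlay it on the data.
# Systematic differences would point at structure the model is still missing.
sim=ynew+np.random.randn(d.size)*np.std(d-ynew)
plt.figure()
plt.plot(t,d,label='observed')
plt.plot(t,sim,label='simulated from model')
plt.legend()
plt.gca().set_xticks(np.arange(0,d.size/12)+1992);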
plt.plot(t,d-ynew)
plt.ylabel('residuals')
plt.xlabel('year')
plt.gca().set_xticks(np.arange(0,d.size/12)+1992);
plt.figure()
plt.acorr(d-ynew,maxlags=90);
plt.xlabel('month lag')
plt.ylabel('autocorrelation');
plt.figure()
plt.plot(d,d-ynew,'o');
plt.ylabel('residuals')
plt.xlabel('predicted shipments');
Explanation: The residuals are well described by a normal distribution with $\sigma=233$. We can summarize our model as $d=\beta_0+\beta_1 t + \beta_2 sin 2\pi t + \beta_3 cos 2\pi t + \epsilon$ with error $\epsilon \sim \mathcal{N}(0,\sigma)$
The plots below suggest some additional improvements.
End of explanation |
10,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unpaired assembly challenge
You will implement software to assemble a genome from synthetic reads. We supply Python code snippets that you might use or adapt in your solutions, but you don't have to.
Part 1
Step1: Or you can download them manually from your browser.
Part 2
Step2: Hint 2
Step3: Part 3
Step4: Here is an example of what your output should look like. Note how the sequence is spread over many lines. | Python Code:
# Download the file containing the reads to "reads.fa" in current directory
! wget http://www.cs.jhu.edu/~langmea/resources/f2020_hw4_reads.fa
# Following line is so we can see the first few lines of the reads file
# from within IPython -- don't paste this into your Python code
! head f2020_hw4_reads.fa
Explanation: Unpaired assembly challenge
You will implement software to assemble a genome from synthetic reads. We supply Python code snippets that you might use or adapt in your solutions, but you don't have to.
Part 1: Get and parse the reads
10% of the points for this question
Download the reads:
http://www.cs.jhu.edu/~langmea/resources/f2020_hw4_reads.fa
All the reads come from the same synthetic genome and each is 100 nt long. For simplicity, these reads don't have any quality values.
The following Python code will download the data to a file called reads.fa in the current directory. (Caveat: I don't think the code below works in Python 3. Sorry about that. Go here for details on how to fix: http://python-future.org/compatible_idioms.html#urllib-module.)
End of explanation
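# The hints below assume a parse_fasta function that returns a dict mapping
# read name -> sequence. If you don't already have one, a minimal parser could
# look like this (names are taken from the text after '>'):
def parse_fasta(fh):
    ''' Parse a FASTA filehandle into a dict: read name -> sequence. '''
    seqs, name = {}, None
    for line in fh:
        line = line.rstrip()
        if line.startswith('>'):
            name = line[1:].split()[0]
            seqs[name] = []
        elif name is not None:
            seqs[name].append(line)
    return {name: ''.join(parts) for name, parts in seqs.items()}

# e.g.
# with open('f2020_hw4_reads.fa') as fh:
#     reads = parse_fasta(fh)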
def make_kmer_table(seqs, k):
''' Given dictionary (e.g. output of parse_fasta) and integer k,
return a dictionary that maps each k-mer to the set of names
of reads containing the k-mer. '''
table = {} # maps k-mer to set of names of reads containing k-mer
for name, seq in seqs.items():
for i in range(0, len(seq) - k + 1):
kmer = seq[i:i+k]
if kmer not in table:
table[kmer] = set()
table[kmer].add(name)
return table
Explanation: Or you can download them manually from your browser.
Part 2: Build an overlap graph
40% of the points for this question
Goal: Write a file containing each read's best buddy to the right. Let's define that.
For each read $A$, find the other read $B$ that has the longest suffix/prefix match with $A$, i.e. a suffix of $A$ matches a prefix of $B$. $B$ is $A$'s best buddy to the right. However, if there is a tie, or if the longest suffix/prefix match is less than 40 nucleotides long, then $A$ has no best buddy to the right. For each read, your program should output either (a) nothing, if there is no best buddy to the right, or (b) a single, space-separated line with the IDs of $A$ and $B$ and the length of the overlap, like this:
0255/2 2065/1 88
This indicates an 88 bp suffix of the read with ID 0255/2 is a prefix of the read with ID 2065/1. Because of how we defined best buddy, it also means no other read besides 2065/1 has a prefix of 88+ bp that is also a suffix of read 0255/2. A corollary of this is that a particular read ID should appear in the first column of your program's output at most once. Also, since we require the overlap to be at least 40 bases long, no number less than 40 should ever appear in the last column.
Notes:
* You can assume all reads are error-free and from the forward strand. You do not need to consider sequencing errors or reverse complements.
* Below is a hint that can make things speedier.
* The order of the output lines is not important.
Hint 1: the following function groups reads such that you can avoid comparing every read to every other read when looking for suffix/prefix matches. It builds a dictionary called table where the keys are k-mers and the values are sets containing the names of all reads containing that k-mer. Since you are looking for overlaps of length at least 40, you only need to compare reads if they have at least 1 40-mer in common.
End of explanation
def suffixPrefixMatch(str1, str2, min_overlap):
''' Returns length of longest suffix of str1 that is prefix of
str2, as long as that suffix is at least as long as min_overlap. '''
if len(str2) < min_overlap: return 0
str2_prefix = str2[:min_overlap]
str1_pos = -1
while True:
str1_pos = str1.find(str2_prefix, str1_pos + 1)
if str1_pos == -1: return 0
str1_suffix = str1[str1_pos:]
if str2.startswith(str1_suffix): return len(str1_suffix)
Explanation: Hint 2: here's a function for finding suffix/prefix matches; we saw this in class:
End of explanation
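Putting the two hints together, one possible sketch of the best-buddy-to-the-right computation looks like this. It assumes seqs is the name-to-sequence dict described above and reuses make_kmer_table and suffixPrefixMatch; it is an illustration, not the only valid approach:
```python
def best_buddies_right(seqs, min_overlap=40):
    # A suffix/prefix match of length >= 40 implies a shared 40-mer, so only
    # compare reads that land in the same k-mer bucket.
    table = make_kmer_table(seqs, min_overlap)
    candidates = {}
    for names in table.values():
        for a in names:
            candidates.setdefault(a, set()).update(names - {a})
    bbr = {}
    for a, others in candidates.items():
        best_len, best_b, tie = 0, None, False
        for b in others:
            olen = suffixPrefixMatch(seqs[a], seqs[b], min_overlap)
            if olen > best_len:
                best_len, best_b, tie = olen, b, False
            elif olen == best_len and olen > 0:
                tie = True
        if best_b is not None and not tie:
            bbr[a] = (best_b, best_len)
    return bbr

# Emit the space-separated lines described in Part 2.
# for a, (b, olen) in best_buddies_right(seqs).items():
#     print(a, b, olen)
```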
import sys
def write_solution(genome, per_line=60, out=sys.stdout):
offset = 0
out.write('>solution\n')
while offset < len(genome):
nchars = min(len(genome) - offset, per_line)
line = genome[offset:offset+nchars]
offset += nchars
out.write(line + '\n')
Explanation: Part 3: Build unitigs
50% of the points for this question
Goal: Write a program that takes the output of the overlap program from part 2 and creates uniquely assemblable contigs (unitigs), using the best buddy algorithm described below.
We already determined each read's best buddy to the right. I'll abbreviate this as bbr. We did not attempt to compute each read's best buddy to the left (bbl), but we can infer it from the bbrs. Consider the following output:
A B 60
E A 40
C B 70
D C 40
$A$'s bbr is $B$. But $B$'s bbl is $C$, not $A$! Your program should form unitigs by joining together two reads $X$ and $Y$ if they are mutual best buddies. $X$ and $Y$ are mutual best buddies if $X$'s bbr is $Y$ and $Y$'s bbl is $X$, or vice versa. In this example, we would join $D$, $C$, and $B$ into a single unitig (and in that order), and would join reads $E$ and $A$ into a single unitig (also in that order).
Your program's output should consist of several entries like the following, with one entry per unitig:
START UNITIG 1 D
C 40
B 70
END UNITIG 1
START UNITIG 2 E
A 40
END UNITIG 2
The first entry represents a unitig with ID 1 consisting of 3 reads. The first (leftmost) read is D. The second read, C, has a 40 nt prefix that is a suffix of the previous read (D). The third (rightmost) read in the contig (B) has a 70 bp prefix that is a suffix of the previous read (C).
Each read should be contained in exactly one unitig. The order of unitigs in the file is not important, but the unitig IDs should be integers and assigned in ascending order.
Note: we will never provide an input that can result in a circular unitig (i.e. one where a chain of mutual best buddies loops back on itself.)
Hint: the correct solution consists of exactly 4 unitigs.
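One possible sketch of the mutual-best-buddy chaining for Part 3. The bbr argument is assumed to be a dict mapping each read to its (best buddy to the right, overlap length), i.e. the information in the Part 2 output lines; reads that end up in no chain would each form a single-read unitig and are not handled here:
```python
def mutual_buddy_edges(bbr):
    # Infer each read's best buddy to the left from the bbr lines: among the
    # reads naming B as their bbr, B's bbl is the one with the longest overlap
    # (a tie means B has no bbl).
    incoming = {}
    for a, (b, olen) in bbr.items():
        incoming.setdefault(b, []).append((olen, a))
    bbl = {}
    for b, cands in incoming.items():
        cands.sort(reverse=True)
        if len(cands) == 1 or cands[0][0] > cands[1][0]:
            bbl[b] = cands[0][1]
    # Keep A -> B only when A's bbr is B and B's bbl is A (mutual best buddies).
    return {a: (b, olen) for a, (b, olen) in bbr.items() if bbl.get(b) == a}

def build_unitigs(edges):
    # Chain the mutual edges; a read with no incoming edge starts a unitig.
    has_incoming = {b for b, _ in edges.values()}
    unitigs = []
    for start in edges:
        if start in has_incoming:
            continue
        chain, cur = [(start, None)], start
        while cur in edges:
            nxt, olen = edges[cur]
            chain.append((nxt, olen))
            cur = nxt
        unitigs.append(chain)
    return unitigs
```
On the worked example above, this yields the chains D, C (40), B (70) and E, A (40).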
OPTIONAL Part 4: Finish the assembly
This part is optional. You can submit your solution if you like. No extra credit will be awarded.
Goal: Assemble the genome! Report the sequence of the original genome as a FASTA file.
This requires that you compare the unitigs to each other, think about what order they must go in, and then put them together accordingly. Submit your solution as a single FASTA file containing a single sequence named "solution". The FASTA file should be "wrapped" so that no line has more than 60 characters. You can use the following Python code to write out your answer.
End of explanation
import random
random.seed(5234)
write_solution(''.join([random.choice('ACGT') for _ in range(500)]))
Explanation: Here is an example of what your output should look like. Note how the sequence is spread over many lines.
End of explanation |
10,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Data Challenge
Files are stored in an S3 bucket. The purpose here is to fully analyze the data and make some predictions.
This workbook was exported to a Python script and the resulting code was checked for PEP8 problems. The problems found were with formatting and order of imports. We can't fix the latter due to the way the script is exported from the notebook.
Any known or potential performance problems are flagged for further work.
Some of the information here is based on posts from Stackoverflow. I haven't kept up with it, so I'll just indicate by flagging with SO.
Prepare an AWS Instance
We're using a tiny instance here since the data and processing requirements are minimal.
Install Anaconda for Python 3.6.
Install the boto3 and seaborn packages.
Install and configure the AWS command-line tools.
Use jupyter notebook --generate-config to generate a config file in ~/.jupyter. An example is enclosed in the GitHub repository.
Run conda install -c conda-forge jupyter_contrib_nbextensions.
Configure Jupyter on the remote side to run without a browser and to require a password. Make sure you put the config file in the proper place.
On the client side, run ssh -N -L 8888:localhost:8888 [email protected] (the hostname may vary). This sets up the ssh tunnel and maps to port 8888 locally.
Step3: Explanation of Import Code
We are using dfr instead of fr. The latter is the name of a function in R.
We want to use the Pandas csv import and parse the timestamp into a Timestamp field.
The code assumes there are only csv files to process in the data directory. This can be fixed but it makes the code more complicated and will not be addressed here.
See above for a few issues discovered with the data.
We parse the url field to obtain the item counts purchased. This allows us to infer prices.
urlparse returns a structure. The first element is the hostname, and the fourth is the query string (if available). All our url strings have a query string, so we don't need any special processing here.
Apply the function convert_list to the query Series. The result is a Series. Why is this important?
python
qq = item_query.apply(lambda x: convert_list(x)).apply(pd.Series).fillna(value=0)
Step4: Just to make sure there are no inconsistencies in the data, let's check the website_id against the domain name we've extracted.
Step5: Finally, let's drop the error column. It affects a single row, and we don't have enough information to determine if this is a legitimate error or not.
Step6: Potential Performance Issues
Import code processing by file
Step7: Before we comment further, here are total sales per file_date. This is the date we got from the file name, assuming that each file contains a single day of data.
Step8: So now we can see what's happening. Since the files do not separate dates properly, the average sales are off. We need to use the actual transaction date to calculate sales. Additionally, some of the checkout values are negative. This distorts the sales.
The analyst's question is thus answered - there is a problem with just importing one file to calculate the average sales. Also, average sales, while meaningful, is not a good metric. It is sensitive to outliers, so a very large or small transaction has a large effect on the average. A better measure would be the median or quartiles. But let's look at a boxplot of the data first.
Step9: This is a boxplot, which is a good way to display outliers, the median, quartiles, and range. We can see immediately that we have a problem with the wide range of checkout amounts. This is why the interquartile range is so compressed. There are two options.
The large values are not legitimate transactions. In this case, they are true outliers and should be ignored.
The large values are definitely legitimate transactions. This will complicate any predictive model.
We need more information about sales. We need to know if the negative values are recorded wrong.
Extracting Prices from the URL
Let's now use the pricing information extracted from the url. We can query items in the DataFrame to find this information. At this point, we assume that prices are not changed on items during the day. This seems to be the case with this data, but we can't make that assumption about future data.
Note
Step10: Based on this table, it seems true that prices do not change from day to day, or during the day. However, what if a price was changed during the day, and then changed back? Our analysis will not pick this up.
Notes for the Analyst
Make sure you understand what is in the files before you run an analysis.
Research with the customer the negative checkout amounts.
There is a single transaction with an error in the url query, but there is a seemingly valid item. Ask about this.
What does placeholder mean? Why is it blank in two files?
One of the websites has checkouts from two different subdomains. Make sure you understand this. Is one set of checkout amounts mobile versus desktop?
The code I have produced is inefficient in many aspects. If you can modify the code to make it more efficient, do so. Otherwise, get back to me.
Note that I can confirm your sales figure on the 3rd if I don't take the absolute value. The above analysis is with the absolute value of the checkout amount.
Are the checkout amounts consistent with the prices we calculated? Check this.
Check the data issues above to find anything I missed.
Analysis of Purchases
This is not hard, but the format of the results is a bit difficult to see. Let's reduce our dataset down a bit to examine counts of purchases.
Step11: Now we can look at the top five purchases. Let's concentrate first on Bignay.
At the risk of overcomplicating the code, let's make a data structure which may not be optimal. We shall see.
Step12: We can see that a lot of items are bought from example.com in bulk, and together. This means shipping costs for these orders are higher.
Step13: And for xyz.com, we see a different pattern. the checkout amounts are lower, and the number of items bought together is much lower. So shipping costs are lower.
We can get the data for 7/2/2017 using gb_frames[2], etc, and for 7/3/2017 using gb_frames[4], etc. They will not be displayed here.
It's a bit difficult to generalize this to other columns without further analysis, and it's tedious to do nlargest for each column. Let's look at the correlation between order amounts for each website. First, an example. Then we will look at each date individually.
Note
Step14: This type of plot gives us information about which items are bought together. Let's see one for xyz.com.
Step15: It's very interesting to see that the item correlations are much lower. Originally, we used the same colormap as for the first heatmap, but the lower correlations didn't look as well (the colors wash out). It's difficult to distinguish the colors here. More research is needed to produce meaningful colors in these plots.
At this point, we could write a few loops and use subplots to display more information.
Sales Prediction
Let's take a look at the daily sales again.
Step16: Here are a few modeling considerations.
We only have three days to work with. That's not enough.
There are questions to clear up about the data.
How does the app version and placeholder affect the data?
However, we can make some predictions at this point. First, we can make the following prediction by averaging.
Step17: We can also draw a regression plot and make predictions using the regression line.
First, there are a few details with the index. Placing the labels is also annoying. The alternatives are to use the Pandas plotting capabilities. In fact, there is no really good solution to plotting a time series and associated regression line without creating the regression line values in the DataFrame. This latter idea is what I usually do, however, the below charts were produced in a different way.
Note that both show a clear linear trend. However, there are problems - we just don't have enough data. regplot is a good tool for linear regression plots, but it does have its deficiencies, such as not providing the slope and intercept of the regression line. Additionally, the error region is not meaningful with this small amount of data.
Step18: And the sales prediction is
Step19: Again, the sales prediction is | Python Code:
# Imports used throughout this notebook.
import os

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import linregress
from urllib.parse import parse_qs, urlparse


def convert_list(query_string):
    '''Parse the query string of the url into a dictionary.
    Handle special cases:
    - There is a single query "error=True" which is rewritten to 1 if True, else 0.
    - Parsing the query returns a dictionary of key-value pairs. The value is a list.
      We must get the list value as an int.
    Note: This function may be a bottleneck when processing larger data files.
    '''
    def handle_error(z, col):
        '''Called in the dictionary comprehension below to handle the "error" key.'''
        if "error" in col:
            return 1 if "True" in z else 0
        return z
    dd = parse_qs(query_string)
    return {k: int(handle_error(dd[k][0], k)) for k in dd}
Explanation: Data Challenge
Files are stored in an S3 bucket. The purpose here is to fully analyze the data and make some predictions.
This workbook was exported to a Python script and the resulting code was checked for PEP8 problems. The problems found were with formatting and order of imports. We can't fix the latter due to the way the script is exported from the notebook.
Any known or potential performance problems are flagged for further work.
Some of the information here is based on posts from Stackoverflow. I haven't kept track of the specific posts, so I'll just flag such code with SO.
Prepare an AWS Instance
We're using a tiny instance here since the data and processing requirements are minimal.
Install Anaconda for Python 3.6.
Install the boto3 and seaborn packages.
Install and configure the AWS command-line tools.
Use jupyter notebook --generate-config to generate a config file in ~/.jupyter. An example is enclosed in the GitHub repository.
Run conda install -c conda-forge jupyter_contrib_nbextensions.
Configure Jupyter on the remote side to run without a browser and to require a password. Make sure you put the config file in the proper place.
On the client side, run ssh -N -L 8888:localhost:8888 [email protected] (the hostname may vary). This sets up the ssh tunnel and maps to port 8888 locally.
ssh into the instance using ssh [email protected].
Start jupyter notebook.
Note: You can start the tmux terminal multiplexer in order to use notebooks when you are logged out.
Note: This doesn't start X windows tunnelling, which is a separate configuration and is not presented here.
Get the Data from S3
Instead of processing the data with the boto3.s3 class, we chose to
bash
aws s3 cp --recursive s3://my_bucket_name local_folder
This is not scalable, so we can use code such as
```python
s3 = boto3.resource('s3')
bucket_name = "postie-testing-assets"
test = s3.Bucket(bucket_name)
s3.meta.client.head_bucket(Bucket=bucket_name)
{'ResponseMetadata': {'HTTPHeaders': {'content-type': 'application/xml',
'date': 'Mon, 16 Oct 2017 18:06:53 GMT',
'server': 'AmazonS3',
'transfer-encoding': 'chunked',
'x-amz-bucket-region': 'us-east-1',
'x-amz-id-2': 'YhUEo61GDGSwz1qOpFGJl+C9Sxal34XKRYzOI0TF49PsSSGsbGg2Y6xwbf07z+KHIKusPIYkjxE=',
'x-amz-request-id': 'DDD0C4B61BDF320E'},
'HTTPStatusCode': 200,
'HostId': 'YhUEo61GDGSwz1qOpFGJl+C9Sxal34XKRYzOI0TF49PsSSGsbGg2Y6xwbf07z+KHIKusPIYkjxE=',
'RequestId': 'DDD0C4B61BDF320E',
'RetryAttempts': 0}}
for key in test.objects.all():
print(key.key)
2017-07-01.csv
2017-07-02.csv
2017-07-03.csv
```
We did not bother to use boto3.
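For completeness, a boto3-only sketch of the same download (the bucket name is the one listed above; the local data directory matches what the import loop below expects):
```python
import os
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket("postie-testing-assets")
os.makedirs("data", exist_ok=True)
for obj in bucket.objects.all():
    bucket.download_file(obj.key, os.path.join("data", obj.key))
```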
Import and Process the Data
The most difficult part was processing the url query string. We are using convert_list as a helper function during import. It allows us to parse the url query string out properly into a Pandas Series whose elements are dictionaries. We're rewriting the dictionary before returning it to transform everything to int. Note that we are handling the error key in the url. There is only one of these, but it looks like a completed transaction. We don't know why it's there, so we will keep it until we learn more.
Also, once we have a pd.Series object, we can apply the Series constructor to map the key-value pairs from the parsed query string into a separate DataFrame. This frame has the same number of rows, and is in the same order, as the original data. So we can just use the join method to put together the two DataFrames.
We then make a large DataFrame to keep all the data, and keep a list of DataFrames around for each file imported (just in case).
Prediction
To build a predictive model, we need more data. Ideally, we should enrich with customer and item information. An interesting idea is to use the item images from the website to generate features.
Data Issues
Column names have spaces, so we need to remove them. A more sophisticated method would do this on import. However, processing the first row in this way may slow down the import process, particularly if the files are much larger. There are ways to read chunks via read_csv, which can be used in a class to get the first line of the file, process it as a header, then continue reading the rest of the file in chunks. This is probably the best way to read many large files; a sketch follows this list.
Placeholder is blank (NaN) for two files. But is this needed?
The file labeled "2017-07-01" has transactions for 7/1/2017 and 7/2/2017.
The file labeled "2017-07-02" has transactions for 7/2/2017 and 7/3/2017.
The file labeled "2017-07-03" has transactions only for 7/3/2017.
There are two website id's, but one website id has two separate domain names: store.example.com, and www.example.com. This affects counts and also reporting if using the domain name. Handling the domain names is very dependent on this dataset - no effort was made to write a more general solution.
Some of the checkout values are negative. Do the websites allow online returns? What does a negative checkout amount mean? We will assume that negative values are recorded with the wrong sign. So we take the absolute value of the checkout_amount.
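A sketch of the chunked read_csv idea mentioned in the first item above (the function name and chunk size are assumptions): the header line is cleaned once and the remaining rows are streamed.
```python
def read_checkout_csv(path, chunksize=100000):
    # Grab the raw header once and strip the stray spaces from the column names.
    with open(path) as fh:
        raw_header = fh.readline()
    clean_names = [c.strip().replace(" ", "_") for c in raw_header.split(",")]
    # Stream the remaining rows in chunks and stitch them back together.
    chunks = pd.read_csv(path, skiprows=1, names=clean_names,
                         parse_dates=[0], chunksize=chunksize)
    return pd.concat(chunks, ignore_index=True)
```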
End of explanation
dfr = pd.DataFrame()
col_names = ["timestamp", "website_id", "customer_id", "app_version", "placeholder", "checkout_amount", "url"]
data_report = []
individual_days = []
item_lists = []
for fname in os.listdir("data"):
ffr = pd.read_csv(os.path.join("data", fname),
header=0, names=col_names,
infer_datetime_format=True, parse_dates=[0])
file_date = fname.split(".")[0]
ffr["file_date"] = file_date
transaction_date = ffr.timestamp.apply(lambda x: x.strftime('%Y-%m-%d')) # reformat transaction timestamp
ffr["transaction_date"] = transaction_date
url_items = ffr.url.apply(lambda x: urlparse(x))
domain_name = url_items.apply(lambda x: x[1])
# handle store.example.com and www.example.com as the same website
ffr["domain_name"] = domain_name.apply(lambda x: x if not "example.com" in x else ".".join(x.split(".")[1:]))
item_query = url_items.apply(lambda x: x[4])
qq = item_query.apply(lambda x: convert_list(x)).apply(pd.Series).fillna(value=0)
item_lists += qq.columns.tolist()
final_fr = ffr.join(qq)
print("date {} has {} sales for rows {} and unique dates {}".format(fname, ffr.checkout_amount.sum(),
ffr.shape[0],
transaction_date.unique().shape[0]))
data_report.append({"file_date": file_date, "sales": ffr.checkout_amount.sum(),
"n_placeholder_nan": sum(ffr.placeholder.isnull()),
"n_rows": ffr.shape[0],
"n_websites": ffr.website_id.unique().shape[0],
"n_customers": ffr.customer_id.unique().shape[0],
"n_app_versions": ffr.app_version.unique().shape[0],
"n_dates": transaction_date.unique().shape[0]})
dfr = dfr.append(final_fr)
individual_days.append(final_fr)
### Note: This is an assumption
dfr["checkout_amount"] = dfr["checkout_amount"].abs()
dfr.reset_index(drop=True, inplace=True)
item_lists = list(set([item for item in item_lists if not "error" in item]))
dfr.shape
Explanation: Explanation of Import Code
We are using dfr instead of fr. The latter is the name of a function in R.
We want to use the Pandas csv import and parse the timestamp into a Timestamp field.
The code assumes there are only csv files to process in the data directory. This can be fixed but it makes the code more complicated and will not be addressed here.
See above for a few issues discovered with the data.
We parse the url field to obtain the item counts purchased. This allows us to infer prices.
urlparse returns a structure. Element 1 of that structure is the hostname (network location) and element 4 is the query string (if available). All our url strings have a query string, so we don't need any special processing here.
Apply the function convert_list to the query Series. The result is a Series. Why is this important?
python
qq = item_query.apply(lambda x: convert_list(x)).apply(pd.Series).fillna(value=0)
We need to apply the Series constructor to each row of the results of convert_list. The constructor parses the key-value pairs into columns and creates a DataFrame. We then fill the NaN values introduced. Since the resulting DataFrame has the same rows, in the same order, as the source frame (ffr, see below), we can just use the join method.
We keep a list of DataFrames for the separate files.
End of explanation
pd.pivot_table(dfr, values="website_id", index="transaction_date", columns="domain_name", aggfunc=[np.max])
Explanation: Just to make sure there are no inconsistencies in the data, let's check the website_id against the domain name we've extracted.
End of explanation
dfr.drop(["error"], axis=1, inplace=True)
Explanation: Finally, let's drop the error column. It affects a single row, and we don't have enough information to determine if this is a legitimate error or not.
End of explanation
pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=True)
Explanation: Potential Performance Issues
Import code processing by file: handle url and domain name processing outside the loop.
Wrap import code into a function and read in the data using a list comprehension (a sketch follows this list).
Keeping a list of DataFrames for each file imported is not necessary and should be eliminated in pipeline code.
Extract the individual item price by domain name upon import.
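A sketch of the wrap-into-a-function refactor suggested in the list above. It reuses col_names and convert_list from earlier cells; the helper name is an assumption and the per-file bookkeeping lists are dropped:
```python
def load_checkout_file(path):
    ffr = pd.read_csv(path, header=0, names=col_names,
                      infer_datetime_format=True, parse_dates=[0])
    ffr["file_date"] = os.path.basename(path).split(".")[0]
    ffr["transaction_date"] = ffr.timestamp.dt.strftime("%Y-%m-%d")
    parsed = ffr.url.apply(urlparse)
    # Treat store.example.com and www.example.com as the same website.
    ffr["domain_name"] = parsed.apply(
        lambda u: u.netloc if "example.com" not in u.netloc else "example.com")
    items = parsed.apply(lambda u: convert_list(u.query)).apply(pd.Series).fillna(0)
    return ffr.join(items)

dfr = pd.concat([load_checkout_file(os.path.join("data", f))
                 for f in os.listdir("data") if f.endswith(".csv")],
                ignore_index=True)
dfr["checkout_amount"] = dfr["checkout_amount"].abs()
```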
Summaries of the Data
First, let's check out the total sales per transaction_date. This is the date from the timestamp field. A separate field file_date will also be used to reveal problems with the data.
Note: These pivot tables can obviously be reformatted but we will not bother here. That's something to do when presenting externally.
End of explanation
pd.pivot_table(dfr, values="checkout_amount", index="file_date", columns="domain_name",
aggfunc=[np.sum], margins=True)
Explanation: Before we comment further, here are total sales per file_date. This is the date we got from the file name, assuming that each file contains a single day of data.
End of explanation
sns.boxplot(x="transaction_date", y="checkout_amount", data=dfr);
Explanation: So now we can see what's happening. Since the files do not separate dates properly, the average sales are off. We need to use the actual transaction date to calculate sales. Additionally, some of the checkout values are negative. This distorts the sales.
The analyst's question is thus answered - there is a problem with just importing one file to calculate the average sales. Also, average sales, while meaningful, is not a good metric. It is sensitive to outliers, so a very large or small transaction has a large effect on the average. A better measure would be the median or quartiles. But let's look at a boxplot of the data first.
End of explanation
cols = [item for item in item_lists if not "error" in item]
pricing_temp = dfr[dfr[cols].astype(bool).sum(axis=1) == 1].copy() # to avoid setting values on a view
pricing_temp.drop_duplicates(subset=cols + ["domain_name", "transaction_date"], inplace=True)
price_cols = []
for col in cols:
price_cols.append(np.abs(pricing_temp["checkout_amount"]/pricing_temp[col]))
pricing = pd.concat(price_cols, axis=1)
pricing.columns = cols
price_cols = [col + "_price" for col in cols]
px = pricing_temp.join(pricing, rsuffix="_price")[price_cols + ["transaction_date", "domain_name"]]
px = px.replace([np.inf, -np.inf], np.nan).fillna(value=0)
pd.pivot_table(px, values=price_cols, index="transaction_date", columns="domain_name", aggfunc=np.max).transpose()
Explanation: This is a boxplot, which is a good way to display outliers, the median, quartiles, and range. We can see immediately that we have a problem with the wide range of checkout amounts. This is why the interquartile range is so compressed. There are two options.
The large values are not legitimate transactions. In this case, they are true outliers and should be ignored.
The large values are definitely legitimate transactions. This will complicate any predictive model.
We need more information about sales. We need to know if the negative values are recorded wrong.
Extracting Prices from the URL
Let's now use the pricing information extracted from the url. We can query items in the DataFrame to find this information. At this point, we assume that prices are not changed on items during the day. This seems to be the case with this data, but we can't make that assumption about future data.
Note: We are doing the item prices separately since the method used to produce the DataFrame above isn't adaptable. A different method of processing the data which avoids using the convenience of the pd.Series constructor can be used, at the expense of more difficult processing of the parsed url. Also, it's possible that an item only appears for a given domain name on a particular day, meaning we must process all the data first.
First, to get the rows where only one item was purchased, we can use (SO)
python
dfr[dfr[cols].astype(bool).sum(axis=1) == 1]
We then calculate the pricing and reformulate the data into a table we can view. This code is annoying to write, so it's preferable to redo the above loop to get the pricing. However, that may slow down processing.
End of explanation
frame = dfr[item_lists + ["transaction_date", "domain_name", "checkout_amount"]]
gr = frame.groupby(["transaction_date", "domain_name"])
Explanation: Based on this table, it seems true that prices do not change from day to day, or during the day. However, what if a price was changed during the day, and then changed back? Our analysis will not pick this up.
Notes for the Analyst
Make sure you understand what is in the files before you run an analysis.
Research with the customer the negative checkout amounts.
There is a single transaction with an error in the url query, but there is a seemingly valid item. Ask about this.
What does placeholder mean? Why is it blank in two files?
One of the websites has checkouts from two different subdomains. Make sure you understand this. Is one set of checkout amounts mobile versus desktop?
The code I have produced is inefficient in many aspects. If you can modify the code to make it more efficient, do so. Otherwise, get back to me.
Note that I can confirm your sales figure on the 3rd if I don't take the absolute value. The above analysis is with the absolute value of the checkout amount.
Are the checkout amounts consistent with the prices we calculated? Check this (a sketch follows this list).
Check the data issues above to find anything I missed.
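One way to run the price-consistency check called out in the notes above; this sketch assumes prices are constant per domain and reuses px, price_cols, and item_lists from earlier cells:
```python
unit_price = px.groupby("domain_name")[price_cols].max()
expected = pd.Series(0.0, index=dfr.index)
for item in item_lists:
    per_unit = dfr["domain_name"].map(unit_price[item + "_price"])
    expected += dfr[item].fillna(0) * per_unit
diff = (dfr["checkout_amount"] - expected).abs()
print("rows disagreeing with the derived prices:", (diff > 0.01).sum())
```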
Analysis of Purchases
This is not hard, but the format of the results is a bit difficult to see. Let's reduce our dataset down a bit to examine counts of purchases.
End of explanation
gb_frames = []
for name, group in gr:
gb_frames.append({"date": name[0], "domain_name": name[1], "frame": group})
gb_frames[0]["frame"].nlargest(5, "Bignay")
Explanation: Now we can look at the top five purchases. Let's concentrate first on Bignay.
At the risk of overcomplicating the code, let's make a data structure which may not be optimal. We shall see.
End of explanation
gb_frames[1]["frame"].nlargest(5, "Bignay")
Explanation: We can see that a lot of items are bought from example.com in bulk, and together. This means shipping costs for these orders are higher.
End of explanation
corr = gb_frames[0]["frame"][item_lists].corr()
ax = sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values,
linewidths=.5, cmap="YlGnBu")
plt.title("{} Item Correlation for {}".format(gb_frames[0]["domain_name"], gb_frames[0]["date"]));
Explanation: And for xyz.com, we see a different pattern. The checkout amounts are lower, and the number of items bought together is much lower. So shipping costs are lower.
We can get the data for 7/2/2017 using gb_frames[2], etc, and for 7/3/2017 using gb_frames[4], etc. They will not be displayed here.
It's a bit difficult to generalize this to other columns without further analysis, and it's tedious to do nlargest for each column. Let's look at the correlation between order amounts for each website. First, an example. Then we will look at each date individually.
Note: The checkout amount is collinear with the item counts.
End of explanation
corr = gb_frames[1]["frame"][item_lists].corr()
ax = sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values,
linewidths=.5) #, cmap="YlGnBu")
plt.title("{} Item Correlation for {}".format(gb_frames[1]["domain_name"], gb_frames[1]["date"]));
Explanation: This type of plot gives us information about which items are bought together. Let's see one for xyz.com.
End of explanation
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ["example.com", "xyz.com"] # get rid of multiindex
pt
Explanation: It's very interesting to see that the item correlations are much lower. Originally, we used the same colormap as for the first heatmap, but the lower correlations didn't look as good (the colors wash out). It's difficult to distinguish the colors here. More research is needed to produce meaningful colors in these plots.
At this point, we could write a few loops and use subplots to display more information.
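A sketch of that loop-and-subplots idea, drawing one correlation heatmap per (domain, date) group in gb_frames:
```python
fig, axes = plt.subplots(2, 3, figsize=(18, 10))
for ax, entry in zip(axes.ravel(), gb_frames):
    corr = entry["frame"][item_lists].corr()
    sns.heatmap(corr, ax=ax, cbar=False, linewidths=.5,
                xticklabels=corr.columns.values, yticklabels=corr.columns.values)
    ax.set_title("{} {}".format(entry["domain_name"], entry["date"]))
plt.tight_layout()
```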
Sales Prediction
Let's take a look at the daily sales again.
End of explanation
for col in pt.columns.tolist():
print("Tomorrow's (7/14/2017) sales for {} is {}".format(col, pt[col].mean()))
Explanation: Here are a few modeling considerations.
We only have three days to work with. That's not enough.
There are questions to clear up about the data.
How does the app version and placeholder affect the data?
However, we can make some predictions at this point. First, we can make the following prediction by averaging.
End of explanation
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ['example.com', 'xyz.com']
pt.index = pd.DatetimeIndex(pt.index)
idx = pd.date_range(pt.index.min(), pt.index.max())
pt = pt.reindex(index=idx)
pt.insert(pt.shape[1],
'row_count',
pt.index.value_counts().sort_index().cumsum())
slope, intercept, r_value, p_value, std_err = linregress(x=pt["row_count"].values,
y=pt["example.com"].values)
pt["example.com regression line"] = intercept + slope * pt["row_count"]
ax = pt[["example.com", "example.com regression line"]].plot()
Explanation: We can also draw a regression plot and make predictions using the regression line.
First, there are a few details with the index. Placing the labels is also annoying. The alternatives are to use the Pandas plotting capabilities. In fact, there is no really good solution to plotting a time series and associated regression line without creating the regression line values in the DataFrame. This latter idea is what I usually do, however, the below charts were produced in a different way.
Note that both show a clear linear trend. However, there are problems - we just don't have enough data. regplot is a good tool for linear regression plots, but it does have its deficiencies, such as not providing the slope and intercept of the regression line. Additionally, the error region is not meaningful with this small amount of data.
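For reference, a sketch of how regplot could be used here; it assumes the pt frame with the numeric row_count column built in the regression cell above, and the slope and intercept still have to come from linregress:
```python
ax = sns.regplot(x="row_count", y="example.com", data=pt, ci=None)
ax.set_xticks(pt["row_count"].tolist())
ax.set_xticklabels(pt.index.strftime("%Y-%m-%d"))
```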
End of explanation
print("Predicted sales for example.com on 7/4/2014 is {} with significance {}".format(intercept + slope * 4, r_value*r_value))
pt = pd.pivot_table(dfr, values="checkout_amount", index="transaction_date", columns="domain_name",
aggfunc=[np.sum], margins=False)
pt.columns = ["example.com", "xyz.com"]
pt.index = pd.DatetimeIndex(pt.index)
idx = pd.date_range(pt.index.min(), pt.index.max())
pt = pt.reindex(index=idx)
pt.insert(pt.shape[1],
'row_count',
pt.index.value_counts().sort_index().cumsum())
slope, intercept, r_value, p_value, std_err = linregress(x=pt["row_count"].values,
y=pt["xyz.com"].values)
pt["xyz.com regression line"] = intercept + slope * pt["row_count"]
ax = pt[["xyz.com", "xyz.com regression line"]].plot()
Explanation: And the sales prediction is
End of explanation
print("Predicted sales for xyz.com on 7/4/2014 is {} with significance {}".format(intercept + slope * 4, r_value*r_value))
Explanation: Again, the sales prediction is
End of explanation |