Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
10,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #16
Reinforcement Learning (Q-Learning)
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial is about so-called Reinforcement Learning in which an agent is learning how to navigate some environment, in this case Atari games from the 1970-80's. The agent does not know anything about the game and must learn how to play it from trial and error. The only information that is available to the agent is the screen output of the game, and whether the previous action resulted in a reward or penalty.
This is a very difficult problem in Machine Learning / Artificial Intelligence, because the agent must both learn to distinguish features in the game-images, and then connect the occurrence of certain features in the game-images with its own actions and a reward or penalty that may be deferred many steps into the future.
This problem was first solved by the researchers from Google DeepMind. This tutorial is based on the main ideas from their early research papers (especially this and this), although we make several changes because the original DeepMind algorithm was awkward and over-complicated in some ways. But it turns out that you still need several tricks in order to stabilize the training of the agent, so the implementation in this tutorial is unfortunately also somewhat complicated.
The basic idea is to have the agent estimate so-called Q-values whenever it sees an image from the game-environment. The Q-values tell the agent which action is most likely to lead to the highest cumulative reward in the future. The problem is then reduced to finding these Q-values and storing them for later retrieval using a function approximator.
This builds on some of the previous tutorials. You should be familiar with TensorFlow and Convolutional Neural Networks from Tutorial #01 and #02. It will also be helpful if you are familiar with one of the builder APIs in Tutorials #03 or #03-B.
The Problem
This tutorial uses the Atari game Breakout, where the player or agent is supposed to hit a ball with a paddle, thus avoiding death while scoring points when the ball smashes pieces of a wall.
When a human learns to play a game like this, the first thing to figure out is what part of the game environment you are controlling - in this case the paddle at the bottom. If you move right on the joystick then the paddle moves right and vice versa. The next thing is to figure out what the goal of the game is - in this case to smash as many bricks in the wall as possible so as to maximize the score. Finally you need to learn what to avoid - in this case you must avoid dying by letting the ball pass beside the paddle.
Below are shown 3 images from the game that demonstrate what we need our agent to learn. In the image to the left, the ball is going downwards and the agent must learn to move the paddle so as to hit the ball and avoid death. The image in the middle shows the paddle hitting the ball, which eventually leads to the image on the right where the ball smashes some bricks and scores points. The ball then continues downwards and the process repeats.
The problem is that there are 10 states between the ball going downwards and the paddle hitting the ball, and there are an additional 18 states before the reward is obtained when the ball hits the wall and smashes some bricks. How can we teach an agent to connect these three situations and generalize to similar situations? The answer is to use so-called Reinforcement Learning with a Neural Network, as shown in this tutorial.
Q-Learning
One of the simplest ways of doing Reinforcement Learning is called Q-learning. Here we want to estimate so-called Q-values which are also called action-values, because they map a state of the game-environment to a numerical value for each possible action that the agent may take. The Q-values indicate which action is expected to result in the highest future reward, thus telling the agent which action to take.
Unfortunately we do not know what the Q-values are supposed to be, so we have to estimate them somehow. The Q-values are all initialized to zero and then updated repeatedly as new information is collected from the agent playing the game. When the agent scores a point then the Q-value must be updated with the new information.
There are different formulas for updating Q-values, but the simplest is to set the new Q-value to the reward that was observed, plus the maximum Q-value for the following state of the game. This gives the total reward that the agent can expect from the current game-state and onwards. Typically we also multiply the max Q-value for the following state by a so-called discount-factor slightly below 1. This causes more distant rewards to contribute less to the Q-value, thus making the agent favour rewards that are closer in time.
The formula for updating the Q-value is
Step1: The main source-code for Reinforcement Learning is located in the following module
Step2: This was developed using Python 3.6.0 (Anaconda) with package versions
Step3: Game Environment
This is the name of the game-environment that we want to use in OpenAI Gym.
Step4: This is the base-directory for the TensorFlow checkpoints as well as various log-files.
Step5: Once the base-dir has been set, you need to call this function to set all the paths that will be used. This will also create the checkpoint-dir if it does not already exist.
Step6: Download Pre-Trained Model
The original version of this tutorial provided some TensorFlow checkpoints with pre-trained models for download. But due to changes in both TensorFlow and OpenAI Gym, these pre-trained models cannot be loaded anymore so they have been deleted from the web-server. You will therefore have to train your own model further below.
Create Agent
The Agent-class implements the main loop for playing the game, recording data and optimizing the Neural Network. We create an object-instance and need to set training=True because we want to use the replay-memory to record states and Q-values for plotting further below. We disable logging so this does not corrupt the logs from the actual training that was done previously. We can also set render=True but it will have no effect as long as training==True.
Step7: The Neural Network is automatically instantiated by the Agent-class. We will create a direct reference for convenience.
Step8: Similarly, the Agent-class also allocates the replay-memory when training==True. The replay-memory will require more than 3 GB of RAM, so it should only be allocated when needed. We will need the replay-memory in this Notebook to record the states and Q-values we observe, so they can be plotted further below.
Step9: Training
The agent's run() function is used to play the game. This uses the Neural Network to estimate Q-values and hence determine the agent's actions. If training==True then it will also gather states and Q-values in the replay-memory and train the Neural Network when the replay-memory is sufficiently full. You can set num_episodes=None if you want an infinite loop that you would stop manually with ctrl-c. In this case we just set num_episodes=1 because we are not actually interested in training the Neural Network any further, we merely want to collect some states and Q-values in the replay-memory so we can plot them below.
Step10: In training-mode, this function will output a line for each episode. The first counter is for the number of episodes that have been processed. The second counter is for the number of states that have been processed. These two counters are stored in the TensorFlow checkpoint along with the weights of the Neural Network, so you can restart the training e.g. if you only have one computer and need to train during the night.
Note that the number of episodes is almost 90k. It is impractical to print that many lines in this Notebook, so the training is better done in a terminal window by running the following commands
Step11: We can now read the logs from file
Step12: Training Progress
Step13: Training Progress
Step14: Testing
When the agent and Neural Network are being trained, the so-called epsilon-probability is typically decreased from 1.0 to 0.1 over a large number of steps, after which the probability is held fixed at 0.1. This means the probability is 0.1 or 10% that the agent will select a random action in each step, otherwise it will select the action that has the highest Q-value. This is known as the epsilon-greedy policy. The choice of 0.1 for the epsilon-probability is a compromise between taking the actions that are already known to be good, versus exploring new actions that might lead to even higher rewards or might lead to death of the agent.
During testing it is common to lower the epsilon-probability even further. We have set it to 0.01 as shown here
Step15: We will now instruct the agent that it should no longer perform training by setting this boolean
Step16: We also reset the previous episode rewards.
Step17: We can render the game-environment to screen so we can see the agent playing the game, by setting this boolean
Step18: We can now run a single episode by calling the run() function again. This should open a new window that shows the game being played by the agent. At the time of this writing, it was not possible to resize this tiny window, and the developers at OpenAI did not seem to care about this feature which should obviously be there.
Step19: Mean Reward
The game-play is slightly random, both because actions are selected using the epsilon-greedy policy and because the OpenAI Gym environment repeats each action between 2 and 4 times, with the number chosen at random. So the reward of one episode is not an accurate estimate of the reward that can be expected in general from this agent.
We need to run 30 or even 50 episodes to get a more accurate estimate of the reward that can be expected.
We will first reset the previous episode rewards.
Step20: We disable the screen-rendering so the game-environment runs much faster.
Step21: We can now run 30 episodes. This records the rewards for each episode. It might have been a good idea to disable the output so it does not print all these lines - you can do this as an exercise.
Step22: We can now print some statistics for the episode rewards, which vary greatly from one episode to the next.
Step23: We can also plot a histogram with the episode rewards.
Step25: Example States
We can plot examples of states from the game-environment and the Q-values that are estimated by the Neural Network.
This helper-function prints the Q-values for a given index in the replay-memory.
Step27: This helper-function plots a state from the replay-memory and optionally prints the Q-values.
Step28: The replay-memory has room for 200k states but it is only partially full from the above call to agent.run(num_episodes=1). This is how many states are actually used.
Step29: Get the Q-values from the replay-memory that are actually used.
Step30: For each state, calculate the min / max Q-values and their difference. This will be used to lookup interesting states in the following sections.
Step31: Example States
Step32: This state is where the ball hits the wall so the agent scores a point.
We can show the surrounding states leading up to and following this state. Note how the Q-values are very close for the different actions, because at this point it really does not matter what the agent does as the reward is already guaranteed. But note how the Q-values decrease significantly after the ball has hit the wall and a point has been scored.
Also note that the agent uses the Epsilon-greedy policy for taking actions, so there is a small probability that a random action is taken instead of the action with the highest Q-value.
Step33: Example
Step34: Example
Step35: Example
Step36: Example
Step38: Output of Convolutional Layers
The outputs of the convolutional layers can be plotted so we can see how the images from the game-environment are being processed by the Neural Network.
This is the helper-function for plotting the output of the convolutional layer with the given name, when inputting the given state from the replay-memory.
Step39: Game State
This is the state that is being input to the Neural Network. The image on the left is the last image from the game-environment. The image on the right is the processed motion-trace that shows the trajectories of objects in the game-environment.
Step40: Output of Convolutional Layer 1
This shows the images that are output by the 1st convolutional layer, when inputting the above state to the Neural Network. There are 16 output channels of this convolutional layer.
Note that you can invert the colors by setting inverse_cmap=True in the parameters to this function.
Step41: Output of Convolutional Layer 2
These are the images output by the 2nd convolutional layer, when inputting the above state to the Neural Network. There are 32 output channels of this convolutional layer.
Step42: Output of Convolutional Layer 3
These are the images output by the 3rd convolutional layer, when inputting the above state to the Neural Network. There are 64 output channels of this convolutional layer.
All these images are flattened to a one-dimensional array (or tensor) which is then used as the input to a fully-connected layer in the Neural Network.
During the training-process, the Neural Network has learnt what convolutional filters to apply to the images from the game-environment so as to produce these images, because they have proven to be useful when estimating Q-values.
Can you see what it is that the Neural Network has learned to detect in these images?
Step44: Weights for Convolutional Layers
We can also plot the weights of the convolutional layers in the Neural Network. These are the weights that are being optimized so as to improve the ability of the Neural Network to estimate Q-values. Tutorial #02 explains in greater detail what convolutional weights are.
There are also weights for the fully-connected layers but they are not shown here.
This is the helper-function for plotting the weights of a convolutional layer.
Step45: Weights for Convolutional Layer 1
These are the weights of the first convolutional layer of the Neural Network, with respect to the first input channel of the state. That is, these are the weights that are used on the image from the game-environment. Some basic statistics are also shown.
Note how the weights are more negative (blue) than positive (red). It is unclear why this happens as these weights are found through optimization. It is apparently beneficial for the following layers to have this processing with more negative weights in the first convolutional layer.
Step46: We can also plot the convolutional weights for the second input channel, that is, the motion-trace of the game-environment. Once again we see that the negative weights (blue) have a much greater magnitude than the positive weights (red).
Step47: Weights for Convolutional Layer 2
These are the weights of the 2nd convolutional layer in the Neural Network. There are 16 input channels and 32 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
Step48: Weights for Convolutional Layer 3
These are the weights of the 3rd convolutional layer in the Neural Network. There are 32 input channels and 64 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note again how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import gym
import numpy as np
import math
# Use TensorFlow v.2 with this old v.1 code.
# E.g. placeholder variables and sessions have changed in TF2.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Explanation: TensorFlow Tutorial #16
Reinforcement Learning (Q-Learning)
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial is about so-called Reinforcement Learning in which an agent is learning how to navigate some environment, in this case Atari games from the 1970-80's. The agent does not know anything about the game and must learn how to play it from trial and error. The only information that is available to the agent is the screen output of the game, and whether the previous action resulted in a reward or penalty.
This is a very difficult problem in Machine Learning / Artificial Intelligence, because the agent must both learn to distinguish features in the game-images, and then connect the occurrence of certain features in the game-images with its own actions and a reward or penalty that may be deferred many steps into the future.
This problem was first solved by the researchers from Google DeepMind. This tutorial is based on the main ideas from their early research papers (especially this and this), although we make several changes because the original DeepMind algorithm was awkward and over-complicated in some ways. But it turns out that you still need several tricks in order to stabilize the training of the agent, so the implementation in this tutorial is unfortunately also somewhat complicated.
The basic idea is to have the agent estimate so-called Q-values whenever it sees an image from the game-environment. The Q-values tell the agent which action is most likely to lead to the highest cumulative reward in the future. The problem is then reduced to finding these Q-values and storing them for later retrieval using a function approximator.
This builds on some of the previous tutorials. You should be familiar with TensorFlow and Convolutional Neural Networks from Tutorial #01 and #02. It will also be helpful if you are familiar with one of the builder APIs in Tutorials #03 or #03-B.
The Problem
This tutorial uses the Atari game Breakout, where the player or agent is supposed to hit a ball with a paddle, thus avoiding death while scoring points when the ball smashes pieces of a wall.
When a human learns to play a game like this, the first thing to figure out is what part of the game environment you are controlling - in this case the paddle at the bottom. If you move right on the joystick then the paddle moves right and vice versa. The next thing is to figure out what the goal of the game is - in this case to smash as many bricks in the wall as possible so as to maximize the score. Finally you need to learn what to avoid - in this case you must avoid dying by letting the ball pass beside the paddle.
Below are shown 3 images from the game that demonstrate what we need our agent to learn. In the image to the left, the ball is going downwards and the agent must learn to move the paddle so as to hit the ball and avoid death. The image in the middle shows the paddle hitting the ball, which eventually leads to the image on the right where the ball smashes some bricks and scores points. The ball then continues downwards and the process repeats.
The problem is that there are 10 states between the ball going downwards and the paddle hitting the ball, and there are an additional 18 states before the reward is obtained when the ball hits the wall and smashes some bricks. How can we teach an agent to connect these three situations and generalize to similar situations? The answer is to use so-called Reinforcement Learning with a Neural Network, as shown in this tutorial.
Q-Learning
One of the simplest ways of doing Reinforcement Learning is called Q-learning. Here we want to estimate so-called Q-values which are also called action-values, because they map a state of the game-environment to a numerical value for each possible action that the agent may take. The Q-values indicate which action is expected to result in the highest future reward, thus telling the agent which action to take.
Unfortunately we do not know what the Q-values are supposed to be, so we have to estimate them somehow. The Q-values are all initialized to zero and then updated repeatedly as new information is collected from the agent playing the game. When the agent scores a point then the Q-value must be updated with the new information.
There are different formulas for updating Q-values, but the simplest is to set the new Q-value to the reward that was observed, plus the maximum Q-value for the following state of the game. This gives the total reward that the agent can expect from the current game-state and onwards. Typically we also multiply the max Q-value for the following state by a so-called discount-factor slightly below 1. This causes more distant rewards to contribute less to the Q-value, thus making the agent favour rewards that are closer in time.
The formula for updating the Q-value is:
Q-value for state and action = reward + discount * max Q-value for next state
In academic papers, this is typically written with mathematical symbols like this:
$$
Q(s_{t},a_{t}) \leftarrow \underbrace{r_{t}}_{\rm reward} + \underbrace{\gamma}_{\rm discount} \cdot \underbrace{\max_{a}Q(s_{t+1}, a)}_{\rm estimate~of~future~rewards}
$$
Furthermore, when the agent loses a life, then we know that the future reward is zero because the agent is dead, so we set the Q-value for that state to zero.
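As a minimal sketch of this update rule (an illustration with assumed array-shapes and names, not the actual implementation in the source-module, which updates Q-values stored in the replay-memory):
# Hypothetical sketch of the Q-value update, where q_values is a
# NumPy array of shape [num_states, num_actions].
import numpy as np

def update_q_value(q_values, state, action, reward, next_state, end_life, discount=0.97):
    if end_life:
        # The agent lost a life, so there are no future rewards.
        q_values[state, action] = 0.0
    else:
        q_values[state, action] = reward + discount * np.max(q_values[next_state])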
Simple Example
The images below demonstrate how Q-values are updated in a backwards sweep through the game-states that have previously been visited. In this simple example we assume all Q-values have been initialized to zero. The agent gets a reward of 1 point in the right-most image. This reward is then propagated backwards to the previous game-states, so when we see similar game-states in the future, we know that the given actions resulted in that reward.
The discounting is an exponentially decreasing function. This example uses a discount-factor of 0.97 so the Q-value for the 3rd image is about $0.885 \simeq 0.97^4$ because it is 4 states prior to the state that actually received the reward. Similarly for the other states. This example only shows one Q-value per state, but in reality there is one Q-value for each possible action in the state, and the Q-values are updated in a backwards-sweep using the formula above. This is shown in the next section.
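A sketch of such a backwards sweep over the rewards of one episode, simplified to a single Q-value per state, might look like this:
# Hypothetical backwards sweep: propagate the final reward back through the episode.
rewards = [0.0, 0.0, 0.0, 0.0, 1.0]   # reward observed in each state.
discount = 0.97

q = [0.0] * len(rewards)
q[-1] = rewards[-1]
for t in reversed(range(len(rewards) - 1)):
    q[t] = rewards[t] + discount * q[t + 1]

# q is now approximately [0.885, 0.913, 0.941, 0.970, 1.0]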
Detailed Example
This is a more detailed example showing the Q-values for two successive states of the game-environment and how to update them.
The Q-values for the possible actions have been estimated by a Neural Network. For the action NOOP in state $t$ the Q-value is estimated to be 2.900, which is the highest Q-value for that state so the agent takes that action, i.e. the agent does not do anything between state $t$ and $t+1$ because NOOP means "No Operation".
In state $t+1$ the agent scores 4 points, but this is limited to 1 point in this implementation so as to stabilize the training. The maximum Q-value for state $t+1$ is 1.830 for the action RIGHTFIRE. So if we select that action and continue to select the actions proposed by the Q-values estimated by the Neural Network, then the discounted sum of all the future rewards is expected to be 1.830.
Now that we know the reward of taking the NOOP action from state $t$ to $t+1$, we can update the Q-value to incorporate this new information. This uses the formula above:
$$
Q(state_{t},NOOP) \leftarrow \underbrace{r_{t}}_{\rm reward} + \underbrace{\gamma}_{\rm discount} \cdot \underbrace{\max_{a}Q(state_{t+1}, a)}_{\rm estimate~of~future~rewards} = 1.0 + 0.97 \cdot 1.830 \simeq 2.775
$$
The new Q-value is 2.775 which is slightly lower than the previous estimate of 2.900. This Neural Network has already been trained for 150 hours so it is quite good at estimating Q-values, but earlier during the training, the estimated Q-values would be more different.
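As a quick numerical check of this update, using the numbers from the example above:
# The update from the detailed example above.
reward = 1.0          # reward for scoring, limited to 1 point.
discount = 0.97
max_q_next = 1.830    # highest Q-value in state t+1 (for RIGHTFIRE).
new_q = reward + discount * max_q_next
print(new_q)          # 2.7751, i.e. approximately 2.775.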
The idea is to have the agent play many, many games and repeatedly update the estimates of the Q-values as more information about rewards and penalties becomes available. This will eventually lead to good estimates of the Q-values, provided the training is numerically stable, as discussed further below. By doing this, we create a connection between rewards and prior actions.
Motion Trace
If we only use a single image from the game-environment then we cannot tell which direction the ball is moving. The typical solution is to use multiple consecutive images to represent the state of the game-environment.
This implementation uses another approach by processing the images from the game-environment in a motion-tracer that outputs two images as shown below. The left image is from the game-environment and the right image is the processed image, which shows traces of recent movements in the game-environment. In this case we can see that the ball is going downwards and has bounced off the right wall, and that the paddle has moved from the left to the right side of the screen.
Note that the motion-tracer has only been tested for Breakout and partially tested for Space Invaders, so it may not work for games with more complicated graphics such as Doom.
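A simplified sketch of the general idea (an assumption about the approach, not the actual motion-tracer in the source-module) is to keep a decaying trace of recent frames:
# Hypothetical sketch of a motion-trace: a decaying maximum over recent frames.
import numpy as np

class SimpleMotionTracer:
    def __init__(self, image, decay=0.75):
        # Start the trace from the first gray-scale image.
        self.trace = image.astype(np.float32)
        self.decay = decay

    def process(self, image):
        # Fade the previous trace and overlay the newest image.
        self.trace = np.maximum(image.astype(np.float32), self.trace * self.decay)
        return self.trace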
Training Stability
We need a function approximator that can take a state of the game-environment as input and produce as output an estimate of the Q-values for that state. We will use a Convolutional Neural Network for this. Although they have achieved great fame in recent years, they are actually quite an old technology with many problems - one of which is training stability. A significant part of the research for this tutorial was spent on tuning and stabilizing the training of the Neural Network.
To understand why training stability is a problem, consider the 3 images below which show the game-environment in 3 consecutive states. At state $t$ the agent is about to score a point, which happens in the following state $t+1$. Assuming all Q-values were zero prior to this, we should now set the Q-value for state $t+1$ to be 1.0 and it should be 0.97 for state $t$ if the discount-value is 0.97, according to the formula above for updating Q-values.
If we were to train a Neural Network to estimate the Q-values for the two states $t$ and $t+1$ with Q-values 0.97 and 1.0, respectively, then the Neural Network will most likely be unable to distinguish properly between the images of these two states. As a result the Neural Network will also estimate a Q-value near 1.0 for state $t+2$ because the images are so similar. But this is clearly wrong because the Q-values for state $t+2$ should be zero as we do not know anything about future rewards at this point, and that is what the Q-values are supposed to estimate.
If this is continued and the Neural Network is trained after every new game-state is observed, then it will quickly cause the estimated Q-values to explode. This is an artifact of training Neural Networks which must have sufficiently large and diverse training-sets. For this reason we will use a so-called Replay Memory so we can gather a large number of game-states and shuffle them during training of the Neural Network.
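In its simplest form (a sketch, not the actual replay-memory class in the source-module, which also balances mini-batches by estimation-error), sampling a shuffled mini-batch could look like this:
# Hypothetical sketch: sample a random mini-batch from arrays of
# recorded states and target Q-values.
import numpy as np

def random_batch(states, target_q_values, batch_size=128):
    idx = np.random.choice(len(states), size=batch_size, replace=False)
    return states[idx], target_q_values[idx]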
Flowchart
This flowchart shows roughly how Reinforcement Learning is implemented in this tutorial. There are two main loops which are run sequentially until the Neural Network is sufficiently accurate at estimating Q-values.
The first loop is for playing the game and recording data. This uses the Neural Network to estimate Q-values from a game-state. It then stores the game-state along with the corresponding Q-values and reward/penalty in the Replay Memory for later use.
The other loop is activated when the Replay Memory is sufficiently full. First it makes a full backwards sweep through the Replay Memory to update the Q-values with the new rewards and penalties that have been observed. Then it performs an optimization run so as to train the Neural Network to better estimate these updated Q-values.
There are many more details in the implementation, such as decreasing the learning-rate and increasing the fraction of the Replay Memory being used during training, but this flowchart shows the main ideas.
Neural Network Architecture
The Neural Network used in this implementation has 3 convolutional layers, all of which have filter-size 3x3. The layers have 16, 32, and 64 output channels, respectively. The stride is 2 in the first two convolutional layers and 1 in the last layer.
Following the 3 convolutional layers there are 4 fully-connected layers each with 1024 units and ReLU-activation. Then there is a single fully-connected layer with linear activation used as the output of the Neural Network.
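As a rough standalone sketch of this architecture in Keras notation (an illustration only; the actual network in the source-module is built with low-level TensorFlow ops, and the input-shape, padding and conv-layer activations below are assumptions):
# Hypothetical Keras sketch of the architecture described above.
from tensorflow.keras import layers, models

def build_architecture_sketch(state_shape=(105, 80, 2), num_actions=4):
    net = models.Sequential()
    # 3 convolutional layers with 3x3 filters and strides 2, 2, 1.
    net.add(layers.Conv2D(16, kernel_size=3, strides=2, padding='same',
                          activation='relu', input_shape=state_shape))
    net.add(layers.Conv2D(32, kernel_size=3, strides=2, padding='same',
                          activation='relu'))
    net.add(layers.Conv2D(64, kernel_size=3, strides=1, padding='same',
                          activation='relu'))
    net.add(layers.Flatten())
    # 4 fully-connected layers with 1024 ReLU units each.
    for _ in range(4):
        net.add(layers.Dense(1024, activation='relu'))
    # Linear output layer with one Q-value per action.
    net.add(layers.Dense(num_actions))
    return net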
This architecture is different from those typically used in research papers from DeepMind and others. They often have large convolutional filter-sizes of 8x8 and 4x4 with high stride-values. This causes more aggressive down-sampling of the game-state images. They also typically have only a single fully-connected layer with 256 or 512 ReLU units.
During the research for this tutorial, it was found that smaller filter-sizes and strides in the convolutional layers, combined with several fully-connected layers having more units, were necessary in order to have sufficiently accurate Q-values. The Neural Network architectures originally used by DeepMind appear to distort the Q-values quite significantly. A reason that their approach still worked, is possibly due to their use of a very large Replay Memory with 1 million states, and that the Neural Network did one mini-batch of training for each step of the game-environment, and some other tricks.
The architecture used here is probably excessive but it takes several days of training to test each architecture, so it is left as an exercise for the reader to try and find a smaller Neural Network architecture that still performs well.
Installation
The documentation for OpenAI Gym currently suggests that you need to build it in order to install it. But if you just want to install the Atari games, then you only need to install a single pip-package by typing the following commands in a terminal.
conda create --name tf-gym --clone tf
source activate tf-gym
pip install gym[atari]
This assumes you already have an Anaconda environment named tf which has TensorFlow installed, it will then be cloned to another environment named tf-gym where OpenAI Gym is also installed. This allows you to easily switch between your normal TensorFlow environment and another one which also contains OpenAI Gym.
You can also have two environments named tf-gpu and tf-gpu-gym for the GPU versions of TensorFlow.
TensorFlow 2
This tutorial was developed using TensorFlow v.1 back in the year 2016-2017. There have been significant API changes in TensorFlow v.2. This tutorial uses TF2 in "v.1 compatibility mode". It would be too big a job for me to keep updating these tutorials every time Google's engineers update the TensorFlow API, so this tutorial may eventually stop working.
Imports
End of explanation
import reinforcement_learning as rl
Explanation: The main source-code for Reinforcement Learning is located in the following module:
End of explanation
# TensorFlow
tf.__version__
# OpenAI Gym
gym.__version__
Explanation: This was developed using Python 3.6.0 (Anaconda) with package versions:
End of explanation
env_name = 'Breakout-v0'
# env_name = 'SpaceInvaders-v0'
Explanation: Game Environment
This is the name of the game-environment that we want to use in OpenAI Gym.
End of explanation
rl.checkpoint_base_dir = 'checkpoints_tutorial16/'
Explanation: This is the base-directory for the TensorFlow checkpoints as well as various log-files.
End of explanation
rl.update_paths(env_name=env_name)
Explanation: Once the base-dir has been set, you need to call this function to set all the paths that will be used. This will also create the checkpoint-dir if it does not already exist.
End of explanation
agent = rl.Agent(env_name=env_name,
                 training=True,
                 render=True,
                 use_logging=False)
Explanation: Download Pre-Trained Model
The original version of this tutorial provided some TensorFlow checkpoints with pre-trained models for download. But due to changes in both TensorFlow and OpenAI Gym, these pre-trained models cannot be loaded anymore so they have been deleted from the web-server. You will therefore have to train your own model further below.
Create Agent
The Agent-class implements the main loop for playing the game, recording data and optimizing the Neural Network. We create an object-instance and need to set training=True because we want to use the replay-memory to record states and Q-values for plotting further below. We disable logging so this does not corrupt the logs from the actual training that was done previously. We can also set render=True but it will have no effect as long as training==True.
End of explanation
model = agent.model
Explanation: The Neural Network is automatically instantiated by the Agent-class. We will create a direct reference for convenience.
End of explanation
replay_memory = agent.replay_memory
Explanation: Similarly, the Agent-class also allocates the replay-memory when training==True. The replay-memory will require more than 3 GB of RAM, so it should only be allocated when needed. We will need the replay-memory in this Notebook to record the states and Q-values we observe, so they can be plotted further below.
End of explanation
agent.run(num_episodes=1)
Explanation: Training
The agent's run() function is used to play the game. This uses the Neural Network to estimate Q-values and hence determine the agent's actions. If training==True then it will also gather states and Q-values in the replay-memory and train the Neural Network when the replay-memory is sufficiently full. You can set num_episodes=None if you want an infinite loop that you would stop manually with ctrl-c. In this case we just set num_episodes=1 because we are not actually interested in training the Neural Network any further, we merely want to collect some states and Q-values in the replay-memory so we can plot them below.
End of explanation
log_q_values = rl.LogQValues()
log_reward = rl.LogReward()
Explanation: In training-mode, this function will output a line for each episode. The first counter is for the number of episodes that have been processed. The second counter is for the number of states that have been processed. These two counters are stored in the TensorFlow checkpoint along with the weights of the Neural Network, so you can restart the training e.g. if you only have one computer and need to train during the night.
Note that the number of episodes is almost 90k. It is impractical to print that many lines in this Notebook, so the training is better done in a terminal window by running the following commands:
source activate tf-gpu-gym # Activate your Python environment with TF and Gym.
python reinforcement_learning.py --env Breakout-v0 --training
Training Progress
Data is being logged during training so we can plot the progress afterwards. The reward for each episode and a running mean of the last 30 episodes are logged to file. Basic statistics for the Q-values in the replay-memory are also logged to file before each optimization run.
This could be logged using TensorFlow and TensorBoard, but they were designed for logging variables of the TensorFlow graph and data that flows through the graph. In this case the data we want logged does not reside in the graph, so it becomes a bit awkward to use TensorFlow to log this data.
We have therefore implemented a few small classes that can write and read these logs.
End of explanation
log_q_values.read()
log_reward.read()
Explanation: We can now read the logs from file:
End of explanation
plt.plot(log_reward.count_states, log_reward.episode, label='Episode Reward')
plt.plot(log_reward.count_states, log_reward.mean, label='Mean of 30 episodes')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
Explanation: Training Progress: Reward
This plot shows the reward for each episode during training, as well as the running mean of the last 30 episodes. Note how the reward varies greatly from one episode to the next, so it is difficult to say from this plot alone whether the agent is really improving during the training, although the running mean does appear to trend upwards slightly.
End of explanation
plt.plot(log_q_values.count_states, log_q_values.mean, label='Q-Value Mean')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
Explanation: Training Progress: Q-Values
The following plot shows the mean Q-values from the replay-memory prior to each run of the optimizer for the Neural Network. Note how the mean Q-values increase rapidly in the beginning and then they increase fairly steadily for 40 million states, after which they still trend upwards but somewhat more irregularly.
The fast improvement in the beginning is probably due to (1) the use of a smaller replay-memory early in training so the Neural Network is optimized more often and the new information is used faster, (2) the backwards-sweeping of the replay-memory so the rewards are used to update the Q-values for many of the states, instead of just updating the Q-values for a single state, and (3) the replay-memory is balanced so at least half of each mini-batch contains states whose Q-values have high estimation-errors for the Neural Network.
The original paper from DeepMind showed much slower progress in the first phase of training, see Figure 2 in that paper but note that the Q-values are not directly comparable, possibly because they used a higher discount factor of 0.99 while we only used 0.97 here.
End of explanation
agent.epsilon_greedy.epsilon_testing
Explanation: Testing
When the agent and Neural Network are being trained, the so-called epsilon-probability is typically decreased from 1.0 to 0.1 over a large number of steps, after which the probability is held fixed at 0.1. This means the probability is 0.1 or 10% that the agent will select a random action in each step, otherwise it will select the action that has the highest Q-value. This is known as the epsilon-greedy policy. The choice of 0.1 for the epsilon-probability is a compromise between taking the actions that are already known to be good, versus exploring new actions that might lead to even higher rewards or might lead to death of the agent.
During testing it is common to lower the epsilon-probability even further. We have set it to 0.01 as shown here:
End of explanation
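As an illustration of the epsilon-greedy policy described above (a sketch with assumed names, not the agent's actual epsilon-greedy class):
# Hypothetical sketch of epsilon-greedy action selection.
import numpy as np

def epsilon_greedy_action(q_values, epsilon=0.01):
    if np.random.random() < epsilon:
        # Take a random action with probability epsilon.
        return np.random.randint(len(q_values))
    # Otherwise take the action with the highest estimated Q-value.
    return np.argmax(q_values)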
agent.training = False
Explanation: We will now instruct the agent that it should no longer perform training by setting this boolean:
End of explanation
agent.reset_episode_rewards()
Explanation: We also reset the previous episode rewards.
End of explanation
agent.render = True
Explanation: We can render the game-environment to screen so we can see the agent playing the game, by setting this boolean:
End of explanation
agent.run(num_episodes=1)
Explanation: We can now run a single episode by calling the run() function again. This should open a new window that shows the game being played by the agent. At the time of this writing, it was not possible to resize this tiny window, and the developers at OpenAI did not seem to care about this feature which should obviously be there.
End of explanation
agent.reset_episode_rewards()
Explanation: Mean Reward
The game-play is slightly random, both because actions are selected using the epsilon-greedy policy and because the OpenAI Gym environment repeats each action between 2 and 4 times, with the number chosen at random. So the reward of one episode is not an accurate estimate of the reward that can be expected in general from this agent.
We need to run 30 or even 50 episodes to get a more accurate estimate of the reward that can be expected.
We will first reset the previous episode rewards.
End of explanation
agent.render = False
Explanation: We disable the screen-rendering so the game-environment runs much faster.
End of explanation
agent.run(num_episodes=30)
Explanation: We can now run 30 episodes. This records the rewards for each episode. It might have been a good idea to disable the output so it does not print all these lines - you can do this as an exercise.
End of explanation
rewards = agent.episode_rewards
print("Rewards for {0} episodes:".format(len(rewards)))
print("- Min: ", np.min(rewards))
print("- Mean: ", np.mean(rewards))
print("- Max: ", np.max(rewards))
print("- Stdev: ", np.std(rewards))
Explanation: We can now print some statistics for the episode rewards, which vary greatly from one episode to the next.
End of explanation
_ = plt.hist(rewards, bins=30)
Explanation: We can also plot a histogram with the episode rewards.
End of explanation
def print_q_values(idx):
    """Print Q-values and actions from the replay-memory at the given index."""

    # Get the Q-values and action from the replay-memory.
    q_values = replay_memory.q_values[idx]
    action = replay_memory.actions[idx]

    print("Action: Q-Value:")
    print("====================")

    # Print all the actions and their Q-values.
    for i, q_value in enumerate(q_values):
        # Used to display which action was taken.
        if i == action:
            action_taken = "(Action Taken)"
        else:
            action_taken = ""

        # Text-name of the action.
        action_name = agent.get_action_name(i)

        print("{0:12}{1:.3f} {2}".format(action_name, q_value,
                                         action_taken))

    # Newline.
    print()
Explanation: Example States
We can plot examples of states from the game-environment and the Q-values that are estimated by the Neural Network.
This helper-function prints the Q-values for a given index in the replay-memory.
End of explanation
def plot_state(idx, print_q=True):
    """Plot the state in the replay-memory with the given index."""

    # Get the state from the replay-memory.
    state = replay_memory.states[idx]

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(1, 2)

    # Plot the image from the game-environment.
    ax = axes.flat[0]
    ax.imshow(state[:, :, 0], vmin=0, vmax=255,
              interpolation='lanczos', cmap='gray')

    # Plot the motion-trace.
    ax = axes.flat[1]
    ax.imshow(state[:, :, 1], vmin=0, vmax=255,
              interpolation='lanczos', cmap='gray')

    # This is necessary if we show more than one plot in a single Notebook cell.
    plt.show()

    # Print the Q-values.
    if print_q:
        print_q_values(idx=idx)
Explanation: This helper-function plots a state from the replay-memory and optionally prints the Q-values.
End of explanation
num_used = replay_memory.num_used
num_used
Explanation: The replay-memory has room for 200k states but it is only partially full from the above call to agent.run(num_episodes=1). This is how many states are actually used.
End of explanation
q_values = replay_memory.q_values[0:num_used, :]
Explanation: Get the Q-values from the replay-memory that are actually used.
End of explanation
q_values_min = q_values.min(axis=1)
q_values_max = q_values.max(axis=1)
q_values_dif = q_values_max - q_values_min
Explanation: For each state, calculate the min / max Q-values and their difference. This will be used to lookup interesting states in the following sections.
End of explanation
idx = np.argmax(replay_memory.rewards)
idx
Explanation: Example States: Highest Reward
This example shows the states surrounding the state with the highest reward.
During the training we limit the rewards to the range [-1, 1] so this basically just gets the first state that has a reward of 1.
End of explanation
for i in range(-5, 3):
    plot_state(idx=idx+i)
Explanation: This state is where the ball hits the wall so the agent scores a point.
We can show the surrounding states leading up to and following this state. Note how the Q-values are very close for the different actions, because at this point it really does not matter what the agent does as the reward is already guaranteed. But note how the Q-values decrease significantly after the ball has hit the wall and a point has been scored.
Also note that the agent uses the Epsilon-greedy policy for taking actions, so there is a small probability that a random action is taken instead of the action with the highest Q-value.
End of explanation
idx = np.argmax(q_values_max)
idx
for i in range(0, 5):
    plot_state(idx=idx+i)
Explanation: Example: Highest Q-Value
This example shows the states surrounding the one with the highest Q-values. This means that the agent has high expectation that several points will be scored in the following steps. Note that the Q-values decrease significantly after the points have been scored.
End of explanation
idx = np.argmax(replay_memory.end_life)
idx
for i in range(-10, 0):
    plot_state(idx=idx+i)
Explanation: Example: Loss of Life
This example shows the states leading up to a loss of life for the agent.
End of explanation
idx = np.argmax(q_values_dif)
idx
for i in range(0, 5):
    plot_state(idx=idx+i)
Explanation: Example: Greatest Difference in Q-Values
This example shows the state where there is the greatest difference in Q-values, which means that the agent believes one action will be much more beneficial than another. But because the agent uses the Epsilon-greedy policy, it sometimes selects a random action instead.
End of explanation
idx = np.argmin(q_values_dif)
idx
for i in range(0, 5):
    plot_state(idx=idx+i)
Explanation: Example: Smallest Difference in Q-Values
This example shows the state where there is the smallest difference in Q-values, which means that the agent believes it does not really matter which action it selects, as they all have roughly the same expectations for future rewards.
The Neural Network estimates these Q-values and they are not precise. The differences in Q-values may be so small that they fall within the error-range of the estimates.
End of explanation
def plot_layer_output(model, layer_name, state_index, inverse_cmap=False):
    """
    Plot the output of a convolutional layer.

    :param model: An instance of the NeuralNetwork-class.
    :param layer_name: Name of the convolutional layer.
    :param state_index: Index into the replay-memory for a state that
                        will be input to the Neural Network.
    :param inverse_cmap: Boolean whether to inverse the color-map.
    """

    # Get the given state-array from the replay-memory.
    state = replay_memory.states[state_index]

    # Get the output tensor for the given layer inside the TensorFlow graph.
    # This is not the value-contents but merely a reference to the tensor.
    layer_tensor = model.get_layer_tensor(layer_name=layer_name)

    # Get the actual value of the tensor by feeding the state-data
    # to the TensorFlow graph and calculating the value of the tensor.
    values = model.get_tensor_value(tensor=layer_tensor, state=state)

    # Number of image channels output by the convolutional layer.
    num_images = values.shape[3]

    # Number of grid-cells to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_images))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids, figsize=(10, 10))

    print("Dim. of each image:", values.shape)

    if inverse_cmap:
        cmap = 'gray_r'
    else:
        cmap = 'gray'

    # Plot the outputs of all the channels in the conv-layer.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid image-channels.
        if i < num_images:
            # Get the image for the i'th output channel.
            img = values[0, :, :, i]

            # Plot image.
            ax.imshow(img, interpolation='nearest', cmap=cmap)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Explanation: Output of Convolutional Layers
The outputs of the convolutional layers can be plotted so we can see how the images from the game-environment are being processed by the Neural Network.
This is the helper-function for plotting the output of the convolutional layer with the given name, when inputting the given state from the replay-memory.
End of explanation
idx = np.argmax(q_values_max)
plot_state(idx=idx, print_q=False)
Explanation: Game State
This is the state that is being input to the Neural Network. The image on the left is the last image from the game-environment. The image on the right is the processed motion-trace that shows the trajectories of objects in the game-environment.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv1', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 1
This shows the images that are output by the 1st convolutional layer, when inputting the above state to the Neural Network. There are 16 output channels of this convolutional layer.
Note that you can invert the colors by setting inverse_cmap=True in the parameters to this function.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv2', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 2
These are the images output by the 2nd convolutional layer, when inputting the above state to the Neural Network. There are 32 output channels of this convolutional layer.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv3', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 3
These are the images output by the 3rd convolutional layer, when inputting the above state to the Neural Network. There are 64 output channels of this convolutional layer.
All these images are flattened to a one-dimensional array (or tensor) which is then used as the input to a fully-connected layer in the Neural Network.
During the training-process, the Neural Network has learnt what convolutional filters to apply to the images from the game-environment so as to produce these images, because they have proven to be useful when estimating Q-values.
Can you see what it is that the Neural Network has learned to detect in these images?
End of explanation
def plot_conv_weights(model, layer_name, input_channel=0):
    """
    Plot the weights for a convolutional layer.

    :param model: An instance of the NeuralNetwork-class.
    :param layer_name: Name of the convolutional layer.
    :param input_channel: Plot the weights for this input-channel.
    """

    # Get the variable for the weights of the given layer.
    # This is a reference to the variable inside TensorFlow,
    # not its actual value.
    weights_variable = model.get_weights_variable(layer_name=layer_name)

    # Retrieve the values of the weight-variable from TensorFlow.
    # The format of this 4-dim tensor is determined by the
    # TensorFlow API. See Tutorial #02 for more details.
    w = model.get_variable_value(variable=weights_variable)

    # Get the weights for the given input-channel.
    w_channel = w[:, :, input_channel, :]

    # Number of output-channels for the conv. layer.
    num_output_channels = w_channel.shape[2]

    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w_channel)
    w_max = np.max(w_channel)

    # This is used to center the colour intensity at zero.
    abs_max = max(abs(w_min), abs(w_max))

    # Print statistics for the weights.
    print("Min: {0:.5f}, Max: {1:.5f}".format(w_min, w_max))
    print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w_channel.mean(),
                                                 w_channel.std()))

    # Number of grids to plot.
    # Rounded-up, square-root of the number of output-channels.
    num_grids = math.ceil(math.sqrt(num_output_channels))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_output_channels:
            # Get the weights for the i'th filter of this input-channel.
            img = w_channel[:, :, i]

            # Plot image.
            ax.imshow(img, vmin=-abs_max, vmax=abs_max,
                      interpolation='nearest', cmap='seismic')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Explanation: Weights for Convolutional Layers
We can also plot the weights of the convolutional layers in the Neural Network. These are the weights that are being optimized so as to improve the ability of the Neural Network to estimate Q-values. Tutorial #02 explains in greater detail what convolutional weights are.
There are also weights for the fully-connected layers but they are not shown here.
This is the helper-function for plotting the weights of a convolutional layer.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=0)
Explanation: Weights for Convolutional Layer 1
These are the weights of the first convolutional layer of the Neural Network, with respect to the first input channel of the state. That is, these are the weights that are used on the image from the game-environment. Some basic statistics are also shown.
Note how the weights are more negative (blue) than positive (red). It is unclear why this happens as these weights are found through optimization. It is apparently beneficial for the following layers to have this processing with more negative weights in the first convolutional layer.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=1)
Explanation: We can also plot the convolutional weights for the second input channel, that is, the motion-trace of the game-environment. Once again we see that the negative weights (blue) have a much greater magnitude than the positive weights (red).
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv2', input_channel=0)
Explanation: Weights for Convolutional Layer 2
These are the weights of the 2nd convolutional layer in the Neural Network. There are 16 input channels and 32 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv3', input_channel=0)
Explanation: Weights for Convolutional Layer 3
These are the weights of the 3rd convolutional layer in the Neural Network. There are 32 input channels and 64 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note again how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
End of explanation |
10,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.
<img src="images/sequence-to-sequence.jpg"/>
Dataset
The dataset lives in the /data/ folder. At the moment, it is made up of the following files
Step1: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
Step2: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line.
Step3: Preprocess
To do anything useful with it, we'll need to turn the characters into a list of integers
Step4: The last step in the preprocessing stage is to determine the the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
Step5: This is the final shape we need them to be in. We can now proceed to building the model.
Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
Step6: Hyperparameters
Step7: Input
Step8: Sequence to Sequence
The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
Then, we'll need to hook up a fully connected layer to the output of the decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.
Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our model to the wild (even though it comes second in the actual code).
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.
Notice that the inference decoder feeds the output of each time step as an input to the next.
As for the training decoder, we can think of it as looking like this
Step9: Process Decoding Input
Step10: Decoding
Embed the decoding input
Build the decoding RNNs
Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
Step11: Decoder During Training
Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.
Apply the output layer to the output of the training decoder
Step12: Decoder During Inference
Reuse the weights and biases from the training decoder using tf.variable_scope("decoding", reuse=True)
Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.
The output function is applied to the output in this step
Step13: Optimization
Our loss function is tf.contrib.seq2seq.sequence_loss, provided by the TensorFlow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
Step14: Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
Step15: Prediction | Python Code:
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
Explanation: Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.
<img src="images/sequence-to-sequence.jpg"/>
Dataset
The dataset lives in the /data/ folder. At the moment, it is made up of the following files:
* letters_source.txt: The list of input letter sequences. Each sequence is its own line.
* letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.
End of explanation
source_sentences[:50].split('\n')
Explanation: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
End of explanation
target_sentences[:50].split('\n')
Explanation: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the same line in source_sentences and contains the sorted characters of that line.
End of explanation
def extract_character_vocab(data):
special_words = ['<pad>', '<unk>', '<s>', '<\s>']
set_words = set([character for line in data.split('\n') for character in line])
int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
print()
print("<s> index is {}".format(target_letter_to_int['<s>']))
Explanation: Preprocess
To do anything useful with it, we'll need to turn the characters into a list of integers:
End of explanation
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in source_ids]
new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \
for sentence in target_ids]
return new_source_ids, new_target_ids
# Use the longest sequence as sequence length
sequence_length = max(
[len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])
# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
target_letter_ids, target_letter_to_int, sequence_length)
print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
Explanation: The last step in the preprocessing stage is to determine the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
End of explanation
from distutils.version import LooseVersion
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
Explanation: This is the final shape we need them to be in. We can now proceed to building the model.
Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow
End of explanation
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
Explanation: Hyperparameters
End of explanation
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
Explanation: Input
End of explanation
#print(source_letter_to_int)
source_vocab_size = len(source_letter_to_int)
print("Length of letter to int is {}".format(source_vocab_size))
print("encoding embedding size is {}".format(encoding_embedding_size))
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)
Explanation: Sequence to Sequence
The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).
First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.
Then, we'll need to hook up a fully connected layer to the output of the decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.
Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot to the wild (even though it comes second in the actual code).
<img src="images/sequence-to-sequence-inference-decoder.png"/>
We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.
Notice that the inference decoder feeds the output of each time step as an input to the next.
As for the training decoder, we can think of it as looking like this:
<img src="images/sequence-to-sequence-training-decoder.png"/>
The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).
Encoding
Embed the input data using tf.contrib.layers.embed_sequence
Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.
End of explanation
import numpy as np
# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)
#Demonstration/Example
demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))
sess = tf.InteractiveSession()
print("Targets")
print(demonstration_outputs[:2])
print("\n")
print("Processed Decoding Input")
print(sess.run(dec_input, {targets: demonstration_outputs})[:2])
print("targets shape is {} and ending shape is {}".format(targets.shape, ending.shape))
print("demonstration_outputs shape is {}".format(demonstration_outputs.shape))
Explanation: Process Decoding Input
End of explanation
target_vocab_size = len(target_letter_to_int)
#print(target_vocab_size, " : ", decoding_embedding_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#print(dec_input, target_vocab_size, decoding_embedding_size)
# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)
Explanation: Decoding
Embed the decoding input
Build the decoding RNNs
Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
End of explanation
with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
Explanation: Decoder During Training
Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.
Apply the output layer to the output of the training decoder
End of explanation
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\s>'],
sequence_length - 1, target_vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
print(inference_logits.shape)
Explanation: Decoder During Inference
Reuse the weights and biases from the training decoder using tf.variable_scope("decoding", reuse=True)
Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.
The output function is applied to the output in this step
End of explanation
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([batch_size, sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Optimization
Our loss function is tf.contrib.seq2seq.sequence_loss, provided by the TensorFlow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
End of explanation
import numpy as np
train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch, targets: target_batch, lr: learning_rate})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source})
train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))
valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))
Explanation: Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.
End of explanation
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))
batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in input_sentence]))
print(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))
print(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))
Explanation: Prediction
End of explanation |
10,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced model training using hyperopt
In the Advanced Model Training tutorial we have already taken a look at hyperparameter optimization using GridHyperparamOpt in the deepchem package. In this tutorial, we will take a look at another hyperparameter tuning library called hyperopt.
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem and Hyperopt within Colab, you'll need to run the following installation commands. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install DeepChem and Hyperopt in your local machine again.
Step1: Hyperparameter Optimization via hyperopt
Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on whether they inhibit HIV replication.
Step2: Now, let's import the hyperopt library, which we will be using to find the best parameters
Step3: Then we have to declare a dictionary with all the hyperparameters and the ranges over which each of them will be tuned. This dictionary will serve as the search space for hyperopt.
Some basic ways of declaring ranges in the dictionary are
Step4: We should then declare the function to be minimized by hyperopt. Here, that function trains our MultitaskClassifier model. Additionally, we use a validation callback to validate the classifier every 1000 steps, and we return the best score. The metric used here is 'roc_auc_score', which needs to be maximized. Maximizing a non-negative value is equivalent to minimizing its negative, hence we return the negative of the validation score.
Step5: Here, we call hyperopt's fmin function, passing it the function to be minimized, the algorithm to follow, the maximum number of evaluations, and a Trials object. The Trials object keeps all hyperparameters, losses, and other information, which means you can access them after running the optimization. Trials can also help you save important information and later load it to resume the optimization process.
Moreover, there are three choices of algorithm that can be used without any additional configuration. They are
Step6: The code below is used to print the best hyperparameters found by the hyperopt. | Python Code:
!pip install deepchem
!pip install hyperopt
Explanation: Advanced model training using hyperopt
In the Advanced Model Training tutorial we have already taken a look at hyperparameter optimization using GridHyperparamOpt in the deepchem package. In this tutorial, we will take a look at another hyperparameter tuning library called hyperopt.
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem and Hyperopt within Colab, you'll need to run the following installation commands. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install DeepChem and Hyperopt in your local machine again.
End of explanation
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_hiv(featurizer='ECFP', split='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
Explanation: Hyperparameter Optimization via hyperopt
Let's start by loading the HIV dataset. It classifies over 40,000 molecules based on whether they inhibit HIV replication.
End of explanation
from hyperopt import hp, fmin, tpe, Trials
Explanation: Now, let's import the hyperopt library, which we will be using to find the best parameters
End of explanation
search_space = {
'layer_sizes': hp.choice('layer_sizes',[[500], [1000], [2000],[1000,1000]]),
'dropouts': hp.uniform('dropout',low=0.2, high=0.5),
'learning_rate': hp.uniform('learning_rate',high=0.001, low=0.0001)
}
Explanation: Then we have to declare a dictionary with all the hyperparameters and the ranges over which each of them will be tuned. This dictionary will serve as the search space for hyperopt.
Some basic ways of declaring ranges in the dictionary are:
hp.choice('label', [choices]) : this is used to specify a list of choices
hp.uniform('label', low=low_value, high=high_value) : this is used to specify a uniform distribution between the low and high values. The values between them can be any real number, not necessarily an integer.
Here, we are going to use a MultitaskClassifier to classify the HIV dataset, and hence the appropriate search space is as follows.
End of explanation
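# A couple of other commonly used hyperopt range expressions, shown only for
# illustration (they are not part of the search space used in this tutorial):
import numpy as np
from hyperopt import hp
extra_space_examples = {
    'batch_size': hp.quniform('batch_size', 32, 256, 32),                      # quantized: multiples of 32
    'log_learning_rate': hp.loguniform('log_lr', np.log(1e-4), np.log(1e-2)),  # sampled on a log scale
}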
import tempfile
#tempfile is used to save the best checkpoint later in the program.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
def fm(args):
save_dir = tempfile.mkdtemp()
model = dc.models.MultitaskClassifier(n_tasks=len(tasks),n_features=1024,layer_sizes=args['layer_sizes'],dropouts=args['dropouts'],learning_rate=args['learning_rate'])
#validation callback that saves the best checkpoint, i.e the one with the maximum score.
validation=dc.models.ValidationCallback(valid_dataset, 1000, [metric],save_dir=save_dir,transformers=transformers,save_on_minimum=False)
model.fit(train_dataset, nb_epoch=25,callbacks=validation)
#restoring the best checkpoint and passing the negative of its validation score to be minimized.
model.restore(model_dir=save_dir)
valid_score = model.evaluate(valid_dataset, [metric], transformers)
return -1*valid_score['roc_auc_score']
Explanation: We should then declare the function to be minimized by hyperopt. Here, that function trains our MultitaskClassifier model. Additionally, we use a validation callback to validate the classifier every 1000 steps, and we return the best score. The metric used here is 'roc_auc_score', which needs to be maximized. Maximizing a non-negative value is equivalent to minimizing its negative, hence we return the negative of the validation score.
End of explanation
trials=Trials()
best = fmin(fm,
space= search_space,
algo=tpe.suggest,
max_evals=15,
trials = trials)
Explanation: Here, we call hyperopt's fmin function, passing it the function to be minimized, the algorithm to follow, the maximum number of evaluations, and a Trials object. The Trials object keeps all hyperparameters, losses, and other information, which means you can access them after running the optimization. Trials can also help you save important information and later load it to resume the optimization process.
Moreover, there are three choices of algorithm that can be used without any additional configuration. They are:
Random Search - rand.suggest
TPE (Tree Parzen Estimators) - tpe.suggest
Adaptive TPE - atpe.suggest
End of explanation
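# Since the Trials object records every evaluation, it can be inspected after the
# search finishes. A small sketch of what is typically available:
print("Number of trials run:", len(trials.trials))
print("All losses:", trials.losses())
print("Best (lowest) loss:", trials.best_trial['result']['loss'])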
print("Best: {}".format(best))
Explanation: The code below is used to print the best hyperparameters found by the hyperopt.
End of explanation |
10,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Econophysics
Names of group members
// put your names here!
Goals of this assignment
Witness what we call "emergent behavior"; large patterns manifesting from the simple interactions of tiny agents
Develop a graphical way to show the dispersion of money across a society.
Create a working implementation of an econophysics game you'll design
Playing In For a Penny
Each Student Should Check Their Intuition
Before Playing, Take 60 seconds to think. When we start the game, say with 17 agents, here's what it could look like if we plotted how much money each agent had
Step1: In For A Pound
Play the Game!
When you are finished, report how many pennies you have in each round in the following spreadsheet (use the "In for a Pound" sheet, listed at the bottom) | Python Code:
# Use Python to make a filled-in plot
# from the data that got reported out
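# A minimal sketch of one way to fill this in. The `money` values below are
# placeholders -- replace them with the pennies reported in the class spreadsheet.
import matplotlib.pyplot as plt
money = [3, 7, 1, 12, 5, 9, 2, 6, 4, 8, 10, 0, 5, 7, 3, 11, 2]  # one entry per agent
plt.bar(range(len(money)), money)
plt.xlabel("agent_id")
plt.ylabel("money (pennies)")
plt.title("In For a Penny: money per agent after this round")
plt.show()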
Explanation: Econophysics
Names of group members
// put your names here!
Goals of this assignment
Witness what we call "emergent behavior"; large patterns manifesting from the simple interactions of tiny agents
Develop a graphical way to show the dispersion of money across a society.
Create a working implementation of an econophysics game you'll design
Playing In For a Penny
Each Student Should Check Their Intuition
Before Playing, Take 60 seconds to think. When we start the game, say with 17 agents, here's what it could look like if we plotted how much money each agent had:
<img src="starting-money-in-for-a-penny.png" width=400 alt="Starting Money for Each Agent in In For a Penny">
Does That Plot Make Sense?
Each Student Should Do This Now
What do you think that graph will look like after one round of the game?
Why?
What do you think that graph will look like after many rounds of the game?
Why?
How much money do you expect to end up with? Don't share your predictions! It'll be more fun that way ;-)
Put Your Answers Here
// right here
Play the Game!
When you are finished, report how many pennies you have in each round in the following spreadsheet (use the "In for a Penny" sheet, listed at the bottom):
https://docs.google.com/spreadsheets/d/1PvX_IdjdDrKTdH6Ic5I6p_eLeveaGNkE45syjxJBgFA/edit?usp=sharing
Each Student Should Fill In This Plot
Use Python to create a bar plot like this in your own individual notebook.
<img src="blank_money_plot.png" width=400 alt="A blank plot of agent_id versus money that students should fill in">
End of explanation
# Use Python to make a filled-in plot
# from the data that got reported out
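# A sketch for the "In for a Pound" rounds. Sorting the reported values before
# plotting makes any emerging inequality easier to see. The numbers below are
# placeholders -- replace them with the pennies reported in the spreadsheet.
import matplotlib.pyplot as plt
money = [0, 0, 1, 2, 2, 3, 4, 6, 9, 11, 14, 20, 25, 31, 40, 52, 70]
plt.bar(range(len(money)), sorted(money))
plt.xlabel("agents (sorted by wealth)")
plt.ylabel("money (pennies)")
plt.title("In For a Pound: sorted money per agent")
plt.show()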
Explanation: In For A Pound
Play the Game!
When you are finished, report how many pennies you have in each round in the following spreadsheet (use the "In for a Pound" sheet, listed at the bottom):
https://docs.google.com/spreadsheets/d/1PvX_IdjdDrKTdH6Ic5I6p_eLeveaGNkE45syjxJBgFA/edit?usp=sharing
Each Student Should Fill In This Plot
Use Python to create a bar plot like this in your own individual notebook.
<img src="blank_money_plot.png" width=400 alt="A blank plot of agent_id versus money that students should fill in">
End of explanation |
10,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
The first thing to do is install Python. The best way is to download Anaconda, which is available for Windows, Linux and MacOS. <img src="http
Step1: Creating an environment
Anaconda lets us have several environments, each with different libraries (and library versions).
That way we avoid conflicts if we want applications that require libraries that are incompatible with each other.
We will create the environment for the Business Intelligence course with Python 3, and with the scikit-learn library; it is created as follows
Step2: Notebooks
Python notebooks are files ending in ".ipynb" that can be opened from the browser using jupyter. These notebooks are divided into cells that can contain text and Python code that can be executed, showing the output of the execution.
Github understands the format and lets you view a notebook, but the code cannot be executed there; for that you need to download the file and edit it locally.
Resources
There are a great many useful notebooks; the Jupyter wiki itself hosts an extensive gallery of interesting notebooks.
An example of the Python language
Let's start with Hello, World. While in C it would be
include<stdio.h>
int main(void) {
printf("Hello World\n");
return 0;
}
the Python code is much simpler.
Step3: As you can see, there is no semicolon at the end of each statement; the end of the line is enough. And instead of printf it uses print, which is much simpler.
A slightly more complete example
Step4: Visualizing data
Let's visualize a few data points
Step5: A more complete example
Step6: Making use of Machine Learning
Python has the excellent scikit-learn library for Machine Learning, which implements many interesting methods. We will show it by applying clustering with the K-means algorithm.
First we load the libraries
Step7: First we plot the points
Step8: We apply k-means
Step9: Detecting a criterion to classify iris flowers
The '''hello world''' of machine learning is learning to detect the type of iris flower from four attributes. | Python Code:
#from IPython.display import HTML
#HTML('''<script>
#code_show=true;
#function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }#
# code_show = !code_show
#}
#$( document ).ready(code_toggle);
#</script>
#The raw code for this IPython notebook is by default hidden for easier reading.
#To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
Explanation: Installation
The first thing to do is install Python. The best way is to download Anaconda, which is available for Windows, Linux and MacOS. <img src="http://www.gurobi.com/images/logo-anaconda.png" alt="Anaconda" style="width: 200px;"/>
Download Anaconda
Remember to pick the 64-bit version, available directly from the buttons shown. The versions further down are for other architectures or for 32 bits, which can cause problems if the OS is 64-bit.
Verify that it is installed
conda --version
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("qb7FT68tcA8")
Explanation: Creating an environment
Anaconda lets us have several environments, each with different libraries (and library versions).
That way we avoid conflicts if we want applications that require libraries that are incompatible with each other.
We will create the environment for the Business Intelligence course with Python 3, and with the scikit-learn library; it is created as follows:
conda create -n IN
And to activate it you run
source activate IN
If all goes well, on Linux you should see "(IN)" on the command line.
To deactivate it you can run:
source deactivate
Why Python
Python is a language widely used in Machine Learning and Data Science in general. It has many advantages:
It is Free Software, so there are no licensing problems.
It is a very easy language to learn.
Very good scientific libraries.
Easy to integrate with other libraries.
Libraries we will use
The libraries we are going to use already come installed by default. They are:
numpy, a very powerful mathematical library that achieves the efficiency of working in C.
pandas, a library for working with data tables (like Excel) and reading and writing them from .csv and Excel files.
scikit-learn, a 'Machine Learning' library.
Development environment
There are many development environments, such as PyCharm, Spyder or PyDev (Eclipse-based).
We are going to use Jupyter, which gives us a web environment to write Python code and see the results quite interactively.
It is installed automatically with conda. To run it, first go to the directory you want to work in and run:
jupyter notebook
and then open the address from the browser, and we can start working.
End of explanation
print("Hola a todos")
Explanation: Notebooks
Python notebooks are files ending in ".ipynb" that can be opened from the browser using jupyter. These notebooks are divided into cells that can contain text and Python code that can be executed, showing the output of the execution.
Github understands the format and lets you view a notebook, but the code cannot be executed there; for that you need to download the file and edit it locally.
Resources
There are a great many useful notebooks; the Jupyter wiki itself hosts an extensive gallery of interesting notebooks.
An example of the Python language
Let's start with Hello, World. While in C it would be
include<stdio.h>
int main(void) {
printf("Hello World\n");
return 0;
}
the Python code is much simpler.
End of explanation
sumcars = 0
sumwords = 0
for word in ['hola', 'a', 'todos']:
print("Frase: ", word)
sumcars += len(word)
sumwords += 1
print("Se han mostrado ", sumwords, " palabras y ", sumwords, " caracteres")
Explanation: As you can see, there is no semicolon at the end of each statement; the end of the line is enough. And instead of printf it uses print, which is much simpler.
A slightly more complete example
End of explanation
%pylab inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(30)
plt.plot(x, x**2)
Explanation: Visualizing data
Let's visualize a few data points
End of explanation
# example with a legend and latex symbols
fig, ax = plt.subplots()
ax.plot(x, x**2, label=r"$y = \alpha^2$")
ax.plot(x, x**3, label=r"$y = \alpha^3$")
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'$\alpha$', fontsize=18)
ax.set_ylabel(r'$y$', fontsize=18)
ax.set_title('Ejemplo más completo');
Explanation: A more complete example
End of explanation
import sklearn.datasets
import sklearn.cluster
import matplotlib.pyplot as plot
# Create the sample points
n = 1000
k = 4
# Generate fake data
data, labels = sklearn.datasets.make_blobs(
n_samples=n, n_features=2, centers=k)
Explanation: Making use of Machine Learning
Python has the excellent scikit-learn library for Machine Learning, which implements many interesting methods. We will show it by applying clustering with the K-means algorithm.
First we load the libraries
End of explanation
plot.scatter(data[:, 0], data[:, 1])
Explanation: First we plot the points
End of explanation
# scikit-learn
kmeans = sklearn.cluster.KMeans(k, max_iter=300)
kmeans.fit(data)
means = kmeans.cluster_centers_
plot.scatter(data[:, 0], data[:, 1], c=labels)
plot.scatter(means[:, 0], means[:, 1], linewidths=2, color='r')
plot.show()
Explanation: We apply k-means
End of explanation
import seaborn as sns
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris, hue="species")
g.map(plt.scatter);
g = g.add_legend()
from sklearn import datasets
# load the iris dataset
iris = datasets.load_iris()
# start with the first two features: sepal length (cm) and sepal width (cm)
X = iris.data[:100,:2]
# save the target values as y
y = iris.target[:100]
# Define bounds on the X and Y axes
X_min, X_max = X[:,0].min()-.5, X[:,0].max()+.5
y_min, y_max = X[:,1].min()-.5, X[:,1].max()+.5
for target in set(y):
x = [X[i,0] for i in range(len(y)) if y[i]==target]
z = [X[i,1] for i in range(len(y)) if y[i]==target]
plt.scatter(x,z,color=['red','blue'][target], label=iris.target_names[:2][target])
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.xlim(X_min,X_max)
plt.ylim(y_min,y_max)
plt.title('Scatter Plot of Sepal Length vs. Sepal Width')
plt.legend(iris.target_names[:2], loc='lower right')
plt.show()
Explanation: Detecting a criterion to classify iris flowers
The '''hello world''' of machine learning is learning to detect the type of iris flower from four attributes.
End of explanation |
10,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The number of unique values is huge. This makes me think in a direction where we could center basis functions at the centers of discovered clusters. Discover cluster centers via K-Means?
Step1: Null Hypothesis
Step2: We have p = 0.068, hence the null hypothesis does not hold
Step3: We have p = 0, hence the null hypothesis does not hold
We can also observe that as time passes, we mostly observe that accuracy falls in 3 distinct ranges
1. Analysis
Notes
Essential questions
Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?
We are trying to order the places (i.e by their likelihood) based on the following measurements from the dataset
Step4: 2.1 K-Means clustering
Step5: Note
Step6: 2.2.1 Run Experiment | Python Code:
train_X = train.values[:,:-1]
train_t = train.values[:,-1]
print train_X.shape
print train_t.shape
train.describe()
train.head()
train.tail()
Explanation: The number of unique values is huge. This makes me think in a direction where we could center basis functions at the centers of discovered clusters. Discover cluster centers via K-Means?
End of explanation
# train['place_id'].value_counts().plot(kind='bar')
# train['place_id'].value_counts().plot(kind='barh')
sb.distplot(train['accuracy'], bins=50, kde=False, rug=True);
sb.distplot(train['accuracy'], hist=False, rug=True);
with sb.axes_style("white"):
sb.jointplot(x=train['x'], y=train['y'], kind="hex", color="k");
Explanation: Null Hypothesis: the plotted joints are identical
End of explanation
with sb.axes_style("white"):
sb.jointplot(x=train['accuracy'], y=train['time'], kind="hex", color="k");
Explanation: We have p = 0.068, hence the null hypothesis does not hold
End of explanation
col_headers = list(train.columns.values)
print col_headers
train[col_headers[1:-1]] = train[col_headers[1:-1]].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
train['accuracy'] = 1 - train['accuracy']
train.describe()
train.head()
train.tail()
train_X_norm = train.values[:,:-1]
print train_X_norm.shape
K = uniq
clusters = range(0,K)
batch_size = 500
n_init = 10
Explanation: We have p = 0, hence the null hypothesis does not hold
We can also observe that as time passes, we mostly observe that accuracy falls in 3 distinct ranges
1. Analysis
Notes
Essential questions
Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?
We are trying to order the places (i.e by their likelihood) based on the following measurements from the dataset: coordinates, accuracy (?), time (?) and place_id.
Did you define the metric for success before beginning?
The metric is Mean Average Precision (What is this?)
Did you understand the context for the question and the scientific or business application?
*We are building a system that would rank a list of places given 'coords', 'accuracy' and 'time'. The purpose might be to enable for specific ads (i.e interesting places around the hotel) to be shown to the person (on FB?) depending on this list.
Did you record the experimental design?
Given.
Did you consider whether the question could be answered with the available data?
We need to further explore 'accuracy' and to check if we could identify different clusters of users - we don't know if the data was genereted by 1 person or many, so we need to check its structure.
Checking the data
Null values?
No!
What do we know of the measurements?
First column is ID and is useless.
Second and Third are coords., they are in kilometers and are floating point. Min is (0,0) and max is (10,10);
Fourth column is accuracy. Range is (1, 1033) and seems to follow a power law distribution. We assume that this is the accuracy of the location given by the GPS. This claim is supported by the fact that the data comes from a mobile device, which is able to give location but this information is sometimes not accurate (i.e in buildings), so we would like to know what is the accuracy of the reading. In order to convert this into real accuracy, we need to normalize the column and assign it values of (1 - current_val).
The fifth column is time given as a timestamp. What patterns are there?
Last column is the class_id, given as an integer
2. Pre-processing
End of explanation
random_state = np.random.RandomState(0)
mbk = MiniBatchKMeans(init='k-means++', n_clusters=K, batch_size=batch_size,
n_init=n_init, max_no_improvement=10, verbose=True)
X_kmeans = mbk.fit(train_X_norm)
print "Done!"
Explanation: 2.1 K-Means clustering
End of explanation
import numpy as np
import cv2
from matplotlib import pyplot as plt
X = np.random.randint(25,50,(25,2))
Y = np.random.randint(60,85,(25,2))
Z = np.vstack((X,Y))
# convert to np.float32
Z = np.float32(Z)
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret,label,center=cv2.kmeans(Z,2,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now separate the data, Note the flatten()
A = Z[label.ravel()==0]
B = Z[label.ravel()==1]
# Plot the data
plt.scatter(A[:,0],A[:,1])
plt.scatter(B[:,0],B[:,1],c = 'r')
plt.scatter(center[:,0],center[:,1],s = 80,c = 'y', marker = 's')
plt.xlabel('Height'),plt.ylabel('Weight')
plt.show()
Explanation: Note: dataset of 1.3 GB is ginormous! Need to use GPU-powered algorithms ;(
2.2 K-Means clustering (OpenCV)
2.2.1 Test
End of explanation
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center=cv2.kmeans(train_X_norm, K, None, criteria, n_init, cv2.KMEANS_RANDOM_CENTERS)
print center.shape
Explanation: 2.2.1 Run Experiment
End of explanation |
10,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Deep Convolutional Generative Adversarial Network
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
Step3: Create the models
Both the generator and discriminator are defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). Start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice that every layer uses tf.keras.layers.LeakyReLU activation, except the output layer, which uses tanh.
Step4: Use the (as yet untrained) generator to create an image.
Step5: The Discriminator
The discriminator is a CNN-based image classifier.
Step6: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images and negative values for fake images.
Step7: Define the loss and optimizers
Define loss functions and optimizers for both models.
Step8: Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
Step9: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, compare the discriminator's decisions on the generated images to an array of 1s.
Step10: The discriminator and the generator optimizers are different since you will train two networks separately.
Step11: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long-running training task is interrupted.
Step12: Define the training loop
Step13: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and the discriminator.
Step14: Generate and save images
Step15: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (for example, that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one minute per epoch with the default settings on Colab.
Step16: Restore the latest checkpoint.
Step17: Create a GIF
Step18: Use imageio to create an animated gif using the images saved during training. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
tf.__version__
# To generate GIFs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
Explanation: Deep Convolutional Generative Adversarial Network
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/generative/dcgan"> <img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on tensorflow.google.cn</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/dcgan.ipynb"> <img src="https://tensorflow.google.cn/images/colab_logo_32px.png"> Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/dcgan.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/generative/dcgan.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
<br/><br/><br/>
This tutorial demonstrates how to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN). The code is written using the Keras Sequential API with a tf.GradientTape training loop.
What are GANs?
Generative Adversarial Networks (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at telling them apart. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
This notebook demonstrates this process on the MNIST dataset. The animation below shows a series of images produced by the generator as it was trained for 50 epochs (i.e., 50 passes over the entire dataset). The images begin as random noise and increasingly resemble handwritten digits over time.
To learn more about GANs, see MIT's Intro to Deep Learning course.
End of explanation
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
Explanation: Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
End of explanation
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
Explanation: Create the models
Both the generator and discriminator are defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). Start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice that every layer uses tf.keras.layers.LeakyReLU activation, except the output layer, which uses tanh.
End of explanation
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
Explanation: Use the (as yet untrained) generator to create an image.
End of explanation
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
Explanation: The Discriminator
The discriminator is a CNN-based image classifier.
End of explanation
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
Explanation: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images and negative values for fake images.
End of explanation
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
Explanation: Define the loss and optimizers
Define loss functions and optimizers for both models.
End of explanation
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
Explanation: Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
End of explanation
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
Explanation: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, compare the discriminator's decisions on the generated images to an array of 1s.
End of explanation
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
Explanation: The discriminator and the generator optimizers are different since you will train two networks separately.
End of explanation
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long-running training task is interrupted.
End of explanation
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# You will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
Explanation: Define the training loop
End of explanation
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as you go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
Explanation: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and the discriminator.
End of explanation
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
Explanation: Generate and save images
End of explanation
train(train_dataset, EPOCHS)
Explanation: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (for example, that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one minute per epoch with the default settings on Colab.
End of explanation
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
Explanation: Restore the latest checkpoint.
End of explanation
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
Explanation: Create a GIF
End of explanation
anim_file = 'dcgan.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
Explanation: Use imageio to create an animated gif using the images saved during training
End of explanation |
10,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
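A minimal sketch of the corresponding cell (us-central1 is just an example value; any supported Vertex AI region works):
REGION = "us-central1"  # example -- change to the region closest to you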
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
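One common way to build such a suffix (a sketch; any unique string will do):
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")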
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
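For example (a sketch; the bucket name below is a placeholder, and PROJECT_ID is assumed to have been set earlier in the notebook):
BUCKET_NAME = "gs://[your-bucket-name]"  # replace with your own globally unique name
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP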
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
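A minimal sketch of what that assignment might look like (the enum comes from the Vertex SDK's gapic module; for this small tabular model, CPU-only training via (None, None) also works):
from google.cloud.aiplatform import gapic as aip
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
DEPLOY_GPU, DEPLOY_NGPU = (None, None)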
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
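For example, a minimal sketch (n1-standard-4 provides 4 vCPUs and 15 GB of memory; adjust to your needs):
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU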
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when referring to it in the worker pool specification, you replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
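A sketch of that step, assuming MODEL_DIR holds the Cloud Storage path of the SavedModel produced by the training job:
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)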
Step22: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
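A minimal sketch of loading just the test split (any preprocessing must mirror what the training script did, which is not shown here):
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data()
x_test = x_test.astype(np.float32)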
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
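One way to query that signature with the low-level SavedModel loader (a sketch):
loaded = tf.saved_model.load(MODEL_DIR)
serving_input = list(
    loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input name:", serving_input)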
Step25: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step26: Get test items
You will use examples out of the test (holdout) portion of the dataset as test items.
Step27: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a dictionary entry of the form
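A sketch of writing the instances as JSON Lines and copying the file to Cloud Storage (the file name and the two test items are illustrative choices, not taken from the original notebook):
import json
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with open("test.jsonl", "w") as f:
    for item in x_test[:2].tolist():  # two example test items
        f.write(json.dumps({serving_input: item}) + "\n")
! gsutil cp test.jsonl $gcs_input_uri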
Step28: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
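A hedged sketch of the call, assuming the Model resource returned by Model.upload() is stored in a variable named vertex_model (a name chosen for this sketch); sync=False returns immediately so you can wait on the job later:
batch_predict_job = vertex_model.batch_predict(
    job_display_name="boston_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_NAME,
    instances_format="jsonl",
    predictions_format="jsonl",
    machine_type=DEPLOY_COMPUTE,
    sync=False,
)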
Step29: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
Step30: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format
Step31: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training tabular regression model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for batch prediction.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a batch prediction on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train the TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Make a batch prediction.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more here hardware accelerator support for your region
Note: GPU container images for TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of vCPUs [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when the package is referenced in a training job (for example, in a worker pool specification), the directory slash is replaced with a dot (trainer.task) and the .py suffix is dropped.
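For reference, this is roughly how that dotted reference would appear in a worker pool specification if you submitted the same package as a CustomJob rather than the script-based job used later in this tutorial. This is only a hedged sketch -- the machine type, package path, and arguments are illustrative:
# Illustrative worker pool spec that points at the package's entry module, trainer.task
worker_pool_specs = [
    {
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "python_package_spec": {
            "executor_image_uri": TRAIN_IMAGE,
            "package_uris": [BUCKET_NAME + "/trainer_boston.tar.gz"],
            "python_module": "trainer.task",
            "args": ["--epochs=20", "--steps=100"],
        },
    }
]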
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[:, _], max = scale(x_train[:, _])
x_test[:, _], _ = scale(x_test[:, _])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail here; it's there for you to browse. In summary, the script:
Gets the directory in which to save the model artifacts from the command line (--model-dir), and if not specified, from the environment variable AIP_MODEL_DIR.
Loads the Boston Housing dataset from the TF.Keras builtin datasets.
Builds a simple deep neural network model using the TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) for the number of epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature (f.write(str(params))) to the specified parameters file -- see the sketch after this list for how those values might be reused later.
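The notebook itself never reads the parameters file again, but the intent is that anything sending new data to the model later should scale it in the same way as the training data. A minimal hedged sketch of that reuse (the file path and raw_instance values below are made up for illustration):
import ast
import tensorflow as tf
# Read back the per-feature maxima written by task.py (the file holds str(list))
with tf.io.gfile.GFile('/tmp/param.txt', 'r') as f:
    maxima = ast.literal_eval(f.read())
# Hypothetical unscaled instance with 13 Boston Housing features
raw_instance = [0.027, 0.0, 7.07, 0.0, 0.47, 6.42, 78.9, 4.97, 2.0, 242.0, 17.8, 396.9, 9.14]
scaled_instance = [value / m for value, m in zip(raw_instance, maxima)]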
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can work with it -- for example, evaluate the model or make a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[:, _] = scale(x_test[:, _])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why we loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
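As a side note, the per-column scaling can also be written as one vectorized step; this is an equivalent alternative to the column-by-column loop, shown only as a sketch:
# Equivalent vectorized form: divide every column by that column's maximum
x_test = (x_test / x_test.max(axis=0)).astype(np.float32)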
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
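If you also want the name of the serving function's output layer -- useful when parsing prediction responses -- you can query the same signature. A minimal sketch, assuming the default serving_default signature:
serving_output = list(
    loaded.signatures["serving_default"].structured_outputs.keys()
)[0]
print("Serving function output:", serving_output)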
End of explanation
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
test_item_1 = x_test[0]
test_label_1 = y_test[0]
test_item_2 = x_test[1]
test_label_2 = y_test[1]
print(test_item_1.shape)
Explanation: Get test items
You will use examples from the test (holdout) portion of the dataset as test items.
End of explanation
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {serving_input: test_item_1.tolist()}
f.write(json.dumps(data) + "\n")
data = {serving_input: test_item_2.tolist()}
f.write(json.dumps(data) + "\n")
Explanation: Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. Each instance in the prediction request is a dictionary entry of the form:
{serving_input: content}
serving_input: The name of the input layer of the underlying model.
content: The feature values of the test item as a list.
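Purely as an illustration, you can preview what a single instance line will contain (this reuses the serving_input and test_item_1 variables from the earlier cells):
# Preview one JSON Lines instance for the batch prediction request
print(json.dumps({serving_input: test_item_1.tolist()}))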
End of explanation
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="boston_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
machine_type: The type of machine to use for the batch prediction job.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
Explanation: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
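If you want to collect just the numeric predictions from those files -- for example, to compare them against y_test -- a small hedged sketch along the lines of the loop above could look like this; it assumes each parsed line exposes the 'prediction' key described here:
predicted = []
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            parsed = json.loads(line)
            # Keep only the predicted value; 'instance' holds the original request
            predicted.append(parsed["prediction"])
print("Predicted median house values (in $1000s):", predicted)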
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
10,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing
For a local run, we cannot afford to run with the full data. We will sample the data randomly (using a hash) down to about 200-300 instances. It takes about 15 minutes.
Step1: Train
To get help on a certain method, run 'mymethod??' and the method signature will be shown on the right side. For example, run 'model.train??'.
Step2: You can start hosted Tensorboard to check events.
Step3: Evaluation
Our model was trained with a small subset of the data so accuracy is not very high.
First, we can check the TF summary events from training.
Step4: We will do more evaluation with more data using batch prediction.
Prediction
Instant prediction
Step5: Batch prediction. Note that we sample eval data so we use about 200 instances.
Step6: Compute accuracy per label.
Step7: You can view the results using Feature-Slice-View. This time do logloss.
Step8: Clean up | Python Code:
import mltoolbox.image.classification as model
from google.datalab.ml import *
worker_dir = '/content/datalab/tmp/coast'
preprocessed_dir = worker_dir + '/coast300'
model_dir = worker_dir + '/model300'
train_set = BigQueryDataSet('SELECT image_url, label FROM coast.train WHERE rand() < 0.04')
model.preprocess(train_set, preprocessed_dir)
Explanation: Preprocessing
For a local run, we cannot afford to run with the full data. We will sample the data randomly (using a hash) down to about 200-300 instances. It takes about 15 minutes.
End of explanation
import logging
logging.getLogger().setLevel(logging.INFO)
model.train(preprocessed_dir, 30, 1000, model_dir)
logging.getLogger().setLevel(logging.WARNING)
Explanation: Train
To get help on a certain method, run 'mymethod??' and the method signature will be shown on the right side. For example, run 'model.train??'.
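For example, running the line below in its own notebook cell pops up the signature and docstring for the training call used in this notebook (standard IPython behaviour; any method name works):
# Inspect a method's signature and documentation
model.train??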
End of explanation
tb_id = TensorBoard.start(model_dir)
Explanation: You can start hosted Tensorboard to check events.
End of explanation
summary = Summary(model_dir)
summary.list_events()
summary.plot('accuracy')
summary.plot('loss')
Explanation: Evaluation
Our model was trained with a small subset of the data so accuracy is not very high.
First, we can check the TF summary events from training.
End of explanation
# gs://tamucc_coastline/esi_images/IMG_2849_SecDE_Spr12.jpg,3B
# gs://tamucc_coastline/esi_images/IMG_0047_SecBC_Spr12.jpg,10A
# gs://tamucc_coastline/esi_images/IMG_0617_SecBC_Spr12.jpg,7
# gs://tamucc_coastline/esi_images/IMG_2034_SecEGH_Sum12_Pt2.jpg,10A
images = [
'gs://tamucc_coastline/esi_images/IMG_2849_SecDE_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_0047_SecBC_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_0617_SecBC_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_2034_SecEGH_Sum12_Pt2.jpg'
]
# Set show_image to True to see the images
model.predict(model_dir, images, show_image=False)
Explanation: We will do more evaluation with more data using batch prediction.
Prediction
Instant prediction:
End of explanation
eval_set = BigQueryDataSet('select * from coast.eval WHERE rand()<0.1')
model.batch_predict(eval_set, model_dir, output_bq_table='coast.eval200tinymodel')
ConfusionMatrix.from_bigquery('select * from coast.eval200tinymodel').plot()
Explanation: Batch prediction. Note that we sample eval data so we use about 200 instances.
End of explanation
%%bq query --name accuracy
SELECT
target,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct,
COUNT(*) as total,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy
FROM
coast.eval200tinymodel
GROUP BY
target
accuracy.execute().result()
Explanation: Compute accuracy per label.
End of explanation
%%bq query --name logloss
SELECT feature, AVG(-logloss) as logloss, COUNT(*) as count FROM
(
SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss
FROM
(
SELECT
target as feature,
CASE WHEN target=predicted THEN 1 ELSE 0 END as correct,
target_prob as prob
FROM coast.eval200tinymodel
)
)
GROUP BY feature
FeatureSliceView().plot(logloss)
Explanation: You can view the results using Feature-Slice-View. This time do logloss.
End of explanation
import shutil
import google.datalab.bigquery as bq
TensorBoard.stop(tb_id)
bq.Table('coast.eval200tinymodel').delete()
shutil.rmtree(worker_dir)
Explanation: Clean up
End of explanation |
10,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Data
You'll often want to compare data in your dataset, to see if you can discern trends or relationships.
Univariate Data
Univariate data is data that consists of only one variable or feature. While it may initially seem as though there's not much we can do to analyze univariate data, we've already seen that we can explore its distribution in terms of measures of central tendency and measures of variance. We've also seen how we can visualize this distribution using histograms and box plots.
Here's a reminder of how you can visualize the distribution of univariate data, using our student grade data with a few additional observations in the sample
Step1: Bivariate and Multivariate Data
It can often be useful to compare bivariate data; in other words, compare two variables, or even more (in which case we call it multivariate data).
For example, our student data includes three numeric variables for each student
Step2: Let's suppose you want to compare the distributions of these variables. You might simply create a boxplot for each variable, like this
Step3: Hmm, that's not particularly useful is it?
The problem is that the data are all measured in different scales. Salaries are typically in tens of thousands, while hours and grades are in single or double digits.
Normalizing Data
When you need to compare data in different units of measurement, you can normalize or scale the data so that the values are measured in the same proportional scale. For example, in Python you can use a MinMax scaler to normalize multiple numeric variables to a proportional value between 0 and 1 based on their minimum and maximum values. Run the following cell to do this
Step4: Now the numbers on the y axis aren't particularly meaningful, but they're on a similar scale.
Comparing Bivariate Data with a Scatter Plot
When you need to compare two numeric values, a scatter plot can be a great way to see if there is any apparent relationship between them so that changes in the value of one variable affect the value of the other.
Let's look at a scatter plot of Salary and Grade
Step5: Look closely at the scatter plot. Can you see a diagonal trend in the plotted points, rising up to the right? It looks as though the higher the student's grade is, the higher their salary is.
You can see the trend more clearly by adding a line of best fit (sometimes called a trendline) to the plot
Step6: The line of best fit makes it clearer that there is some apparent colinearity between these variables (the relationship is colinear if one variable's value increases or decreases in line with the other).
Correlation
The apparently colinear relationship you saw in the scatter plot can be verified by calculating a statistic that quantifies the relationship between the two variables. The statistic usually used to do this is correlation, though there is also a statistic named covariance that is sometimes used. Correlation is generally preferred because the value it produces is more easily interpreted.
A correlation value is always a number between -1 and 1.
- A positive value indicates a positive correlation (as the value of variable x increases, so does the value of variable y).
- A negative value indicates a negative correlation (as the value of variable x increases, the value of variable y decreases).
- The closer to zero the correlation value is, the weaker the correlation between x and y.
- A correlation of exactly zero means there is no apparent relationship between the variables.
The formula to calculate correlation is
Step7: In this case, the correlation is just over 0.8; making it a reasonably high positive correlation that indicates salary increases in line with grade.
Let's see if we can find a correlation between Grade and Hours
Step8: In this case, the correlation value is just under -0.8; meaning a fairly strong negative correlation in which the number of hours worked decreases as the grade increases. The line of best fit on the scatter plot corroborates this statistic.
It's important to remember that correlation is not causation. In other words, even though there's an apparent relationship, you can't say for sure that one variable is the cause of the other. In this example, we can say that students who achieved higher grades tend to work shorter hours; but we can't say that those who work shorter hours do so because they achieved a high grade!
Least Squares Regression
In the previous examples, we drew a line on a scatter plot to show the best fit of the data. In many cases, your initial attempts to identify any colinearity might involve adding this kind of line by hand (or just mentally visualizing it); but as you may suspect from the use of the numpy.polyfit function in the code above, there are ways to calculate the coordinates for this line mathematically. One of the most commonly used techniques is least squares regression, and that's what we'll look at now.
Cast your mind back to when you were learning how to solve linear equations, and recall that the slope-intercept form of a linear equation looks like this
Step9: In this case, the line fits the middle values fairly well, but is less accurate for the outlier at the low end. This is often the case, which is why statisticians and data scientists often treat outliers by removing them or applying a threshold value; though in this example there are too few data points to conclude that the data points are really outliers.
Let's look at a slightly larger dataset and apply the same approach to compare Grade and Salary
Step10: In this case, we used Python expressions to calculate the slope and y-intercept using the same approach and formula as before. In practice, Python provides great support for statistical operations like this; and you can use the linregress function in the scipy.stats package to retrieve the slope and y-intercept (as well as the correlation, p-value, and standard error) for a matched array of x and y values (we'll discuss p-values later!).
Here's the Python code to calculate the regression line variables using the linregress function
Step11: Note that the slope and y-intercept values are the same as when we worked them out using the formula.
Similarly to the simple study hours example, the regression line doesn't fit the outliers very well. In this case, the extremes include a student who scored only 5, and a student who scored 95. Let's see what happens if we remove these students from our sample | Python Code:
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
plt.figure()
df['Grade'].plot( kind='box', title='Grade Distribution')
plt.figure()
df['Grade'].hist(bins=9)
plt.show()
print(df.describe())
print('median: ' + str(df['Grade'].median()))
Explanation: Comparing Data
You'll often want to compare data in your dataset, to see if you can discern trends or relationships.
Univariate Data
Univariate data is data that consists of only one variable or feature. While it may initially seem as though there's not much we can do to analyze univariate data, we've already seen that we can explore its distribution in terms of measures of central tendency and measures of variance. We've also seen how we can visualize this distribution using histograms and box plots.
Here's a reminder of how you can visualize the distribution of univariate data, using our student grade data with a few additional observations in the sample:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
df[['Name', 'Salary', 'Hours', 'Grade']]
Explanation: Bivariate and Multivariate Data
It can often be useful to compare bivariate data; in other words, compare two variables, or even more (in which case we call it multivariate data).
For example, our student data includes three numeric variables for each student: their salary, the number of hours they work per week, and their final school grade. Run the following code to see an enlarged sample of this data as a table:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
df.plot(kind='box', title='Distribution', figsize = (10,8))
plt.show()
Explanation: Let's suppose you want to compare the distributions of these variables. You might simply create a boxplot for each variable, like this:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Normalize the data
scaler = MinMaxScaler()
df[['Salary', 'Hours', 'Grade']] = scaler.fit_transform(df[['Salary', 'Hours', 'Grade']])
# Plot the normalized data
df.plot(kind='box', title='Distribution', figsize = (10,8))
plt.show()
Explanation: Hmm, that's not particularly useful, is it?
The problem is that the data are all measured on different scales. Salaries are typically in tens of thousands, while hours and grades are in single or double digits.
Normalizing Data
When you need to compare data in different units of measurement, you can normalize or scale the data so that the values are measured in the same proportional scale. For example, in Python you can use a MinMax scaler to normalize multiple numeric variables to a proportional value between 0 and 1 based on their minimum and maximum values. Run the following cell to do this:
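Under the hood, min-max scaling is just (value - min) / (max - min), applied per column. Here's a small hedged sketch of that calculation done by hand on a few of the salary values; it mirrors the formula MinMaxScaler applies to each column (the values here are a subset, so the exact numbers will differ from the cell's output):
import pandas as pd
# Manual min-max normalization (what MinMaxScaler does for each column)
salary = pd.Series([50000, 54000, 50000, 189000, 55000, 40000, 59000])
salary_scaled = (salary - salary.min()) / (salary.max() - salary.min())
print(salary_scaled)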
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Create a scatter plot of Grade vs Salary
df.plot(kind='scatter', title='Grade vs Salary', x='Grade', y='Salary')
plt.show()
Explanation: Now the numbers on the y axis aren't particularly meaningful, but they're on a similar scale.
Comparing Bivariate Data with a Scatter Plot
When you need to compare two numeric values, a scatter plot can be a great way to see if there is any apparent relationship between them, such that changes in the value of one variable coincide with changes in the value of the other.
Let's look at a scatter plot of Salary and Grade:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Create a scatter plot of Grade vs Salary
df.plot(kind='scatter', title='Grade vs Salary', x='Grade', y='Salary')
# Add a line of best fit
plt.plot(np.unique(df['Grade']), np.poly1d(np.polyfit(df['Grade'], df['Salary'], 1))(np.unique(df['Grade'])))
plt.show()
Explanation: Look closely at the scatter plot. Can you see a diagonal trend in the plotted points, rising up to the right? It looks as though the higher the student's grade is, the higher their salary is.
You can see the trend more clearly by adding a line of best fit (sometimes called a trendline) to the plot:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
# Calculate the correlation between *Salary* and *Grade*
print(df['Grade'].corr(df['Salary']))
Explanation: The line of best fit makes it clearer that there is some apparent colinearity between these variables (the relationship is colinear if one variable's value increases or decreases in line with the other).
Correlation
The apparently colinear relationship you saw in the scatter plot can be verified by calculating a statistic that quantifies the relationship between the two variables. The statistic usually used to do this is correlation, though there is also a statistic named covariance that is sometimes used. Correlation is generally preferred because the value it produces is more easily interpreted.
A correlation value is always a number between -1 and 1.
- A positive value indicates a positive correlation (as the value of variable x increases, so does the value of variable y).
- A negative value indicates a negative correlation (as the value of variable x increases, the value of variable y decreases).
- The closer to zero the correlation value is, the weaker the correlation between x and y.
- A correlation of exactly zero means there is no apparent relationship between the variables.
The formula to calculate correlation is:
\begin{equation}r_{x,y} = \frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})(y_{i} -\bar{y})}{\sqrt{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}\sqrt{\displaystyle\sum_{i=1}^{n} (y_{i} -\bar{y})^{2}}}\end{equation}
r<sub>x, y</sub> is the notation for the correlation between x and y.
The formula is pretty complex, but fortunately Python makes it very easy to calculate the correlation by using the corr function:
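If you'd like to see the formula at work rather than take corr() on trust, here is a hedged sketch that computes the same statistic directly with NumPy for the seven-student sample; it should agree with the corr() result to within floating-point rounding:
import numpy as np
x = np.array([50, 50, 46, 95, 50, 5, 57])                          # Grade
y = np.array([50000, 54000, 50000, 189000, 55000, 40000, 59000])   # Salary
# Pearson correlation computed straight from the definition
r_manual = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2)))
print(r_manual)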
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
r = df['Grade'].corr(df['Hours'])
print('Correlation: ' + str(r))
# Create a scatter plot of Grade vs Hours
df.plot(kind='scatter', title='Grade vs Hours', x='Grade', y='Hours')
# Add a line of best fit
plt.plot(np.unique(df['Grade']), np.poly1d(np.polyfit(df['Grade'], df['Hours'], 1))(np.unique(df['Grade'])))
plt.show()
Explanation: In this case, the correlation is just over 0.8; making it a reasonably high positive correlation that indicates salary increases in line with grade.
Let's see if we can find a correlation between Grade and Hours:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Study':[1,0.75,0.6,2,1,0.2,1.2],
'Grade':[50,50,46,95,50,5,57],
'fx':[52.0149,40.9106,34.2480,96.4321,52.0149,16.4811,60.8983]})
# Create a scatter plot of Study vs Grade
df.plot(kind='scatter', title='Study Time vs Grade Regression', x='Study', y='Grade', color='red')
# Plot the regression line
plt.plot(df['Study'],df['fx'])
plt.show()
Explanation: In this case, the correlation value is just under -0.8; meaning a fairly strong negative correlation in which the number of hours worked decreases as the grade increases. The line of best fit on the scatter plot corroborates this statistic.
It's important to remember that correlation is not causation. In other words, even though there's an apparent relationship, you can't say for sure that one variable is the cause of the other. In this example, we can say that students who achieved higher grades tend to work shorter hours; but we can't say that those who work shorter hours do so because they achieved a high grade!
Least Squares Regression
In the previous examples, we drew a line on a scatter plot to show the best fit of the data. In many cases, your initial attempts to identify any colinearity might involve adding this kind of line by hand (or just mentally visualizing it); but as you may suspect from the use of the numpy.polyfit function in the code above, there are ways to calculate the coordinates for this line mathematically. One of the most commonly used techniques is least squares regression, and that's what we'll look at now.
Cast your mind back to when you were learning how to solve linear equations, and recall that the slope-intercept form of a linear equation looks like this:
\begin{equation}y = mx + b\end{equation}
In this equation, y and x are the coordinate variables, m is the slope of the line, and b is the y-intercept of the line.
In the case of our scatter plot for our former-student's working hours, we already have our values for x (Grade) and y (Hours), so we just need to calculate the intercept and slope of the straight line that lies closest to those points. Then we can form a linear equation that calculates the a new y value on that line for each of our x (Grade) values - to avoid confusion, we'll call this new y value f(x) (because it's the output from a linear equation function based on x). The difference between the original y (Hours) value and the f(x) value is the error between our regression line of best fit and the actual Hours worked by the former student. Our goal is to calculate the slope and intercept for a line with the lowest overall error.
Specifically, we define the overall error by taking the error for each point, squaring it, and adding all the squared errors together. The line of best fit is the line that gives us the lowest value for the sum of the squared errors - hence the name least squares regression.
So how do we accomplish this? First we need to calculate the slope (m), which we do using this formula (in which n is the number of observations in our data sample):
\begin{equation}m = \frac{n(\sum{xy}) - (\sum{x})(\sum{y})}{n(\sum{x^{2}})-(\sum{x})^{2}}\end{equation}
After we've calculated the slope (m), we can use it to calculate the intercept (b) like this:
\begin{equation}b = \frac{\sum{y} - m(\sum{x})}{n}\end{equation}
Let's look at a simple example that compares the number of hours of nightly study each student undertook with the final grade the student achieved:
| Name | Study | Grade |
|----------|-------|-------|
| Dan | 1 | 50 |
| Joann | 0.75 | 50 |
| Pedro | 0.6 | 46 |
| Rosie | 2 | 95 |
| Ethan | 1 | 50 |
| Vicky | 0.2 | 5 |
| Frederic | 1.2 | 57 |
First, let's take each x (Study) and y (Grade) pair and calculate x<sup>2</sup> and xy, because we're going to need these to work out the slope:
| Name | Study | Grade | x<sup>2</sup> | xy |
|----------|-------|-------|------|------|
| Dan | 1 | 50 | 1 | 50 |
| Joann | 0.75 | 50 | 0.5625 | 37.5 |
| Pedro | 0.6 | 46 | 0.36 | 27.6 |
| Rosie | 2 | 95 | 4 | 190 |
| Ethan | 1 | 50 | 1 | 50 |
| Vicky | 0.2 | 5 | 0.04 | 1 |
| Frederic | 1.2 | 57 | 1.44 | 68.4 |
Now we'll sum x, y, x<sup>2</sup>, and xy:
| Name | Study | Grade | x<sup>2</sup> | xy |
|----------|-------|-------|------|------|
| Dan | 1 | 50 | 1 | 50 |
| Joann | 0.75 | 50 | 0.5625 | 37.5 |
| Pedro | 0.6 | 46 | 0.36 | 27.6 |
| Rosie | 2 | 95 | 4 | 190 |
| Ethan | 1 | 50 | 1 | 50 |
| Vicky | 0.2 | 5 | 0.04 | 1 |
| Frederic | 1.2 | 57 | 1.44 | 68.4 |
| Σ | 6.75 | 353 | 8.4025| 424.5 |
OK, now we're ready to calculate the slope for our 7 observations:
\begin{equation}m = \frac{(7\times 424.5) - (6.75\times353)}{(7\times8.4025)-6.75^{2}}\end{equation}
Which is:
\begin{equation}m = \frac{2971.5 - 2382.75}{58.8175-45.5625}\end{equation}
So:
\begin{equation}m = \frac{588.75}{13.255} \approx 44.4172\end{equation}
Now we can calculate b:
\begin{equation}b = \frac{353 - (44.4172\times6.75)}{7}\end{equation}
Which simplifies to:
\begin{equation}b = \frac{53.18389}{7} = 7.597699\end{equation}
Now we have our linear function:
\begin{equation}f(x) = 44.4172x + 7.597699\end{equation}
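You can check this arithmetic with a few lines of NumPy; a quick hedged sketch using the Study and Grade values from the table above:
import numpy as np
x = np.array([1, 0.75, 0.6, 2, 1, 0.2, 1.2])   # Study
y = np.array([50, 50, 46, 95, 50, 5, 57])      # Grade
n = len(x)
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
b = (np.sum(y) - m * np.sum(x)) / n
print(m, b)   # approximately 44.4172 and 7.5977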
We can use this for each x (Study) value to calculate the y values for the regression line (f(x)), and we can subtract the original y (Grade) from these to calculate the error for each point:
| Name | Study | Grade | f(x) | Error |
|----------|-------|-------|------|------ |
| Dan | 1 | 50 |52.0149 |2.0149 |
| Joann | 0.75 | 50 |40.9106 |-9.0894|
| Pedro | 0.6 | 46 |34.2480 |-11.752|
| Rosie | 2 | 95 |96.4321 |1.4321 |
| Ethan | 1 | 50 |52.0149 |2.0149 |
| Vicky | 0.2 | 5 |16.4811 |11.4811|
| Frederic | 1.2 | 57 |60.8983 |3.8983 |
As you can see, the f(x) values are mostly quite close to the actual Grade values, and the errors (which, when we're comparing estimated values from a function with actual known values, we often call residuals) are generally pretty small.
Let's plot the least squares regression line with the actual values:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Calculate least squares regression line
df['x2'] = df['Grade']**2
df['xy'] = df['Grade'] * df['Salary']
x = df['Grade'].sum()
y = df['Salary'].sum()
x2 = df['x2'].sum()
xy = df['xy'].sum()
n = df['Grade'].count()
m = ((n*xy) - (x*y))/((n*x2)-(x**2))
b = (y - (m*x))/n
df['fx'] = (m*df['Grade']) + b
df['error'] = df['fx'] - df['Salary']
print('slope: ' + str(m))
print('y-intercept: ' + str(b))
# Create a scatter plot of Grade vs Salary
df.plot(kind='scatter', title='Grade vs Salary Regression', x='Grade', y='Salary', color='red')
# Plot the regression line
plt.plot(df['Grade'],df['fx'])
plt.show()
# Show the original x,y values, the f(x) value, and the error
df[['Grade', 'Salary', 'fx', 'error']]
Explanation: In this case, the line fits the middle values fairly well, but is less accurate for the outlier at the low end. This is often the case, which is why statisticians and data scientists often treat outliers by removing them or by applying a threshold value; in this example, though, there are too few data points to conclude that these extreme values really are outliers.
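As a side note, the threshold option mentioned above can be sketched with pandas by capping values instead of dropping them; the cut-off values below are arbitrary, and this is not the approach taken later in the notebook, which simply removes the extreme rows:
import pandas as pd
grades = pd.Series([5, 46, 50, 50, 50, 57, 95])
capped = grades.clip(lower=10, upper=90)   # values outside 10-90 are pulled back to the nearest cut-off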
Let's look at a slightly larger dataset and apply the same approach to compare Grade and Salary:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Get the regression line slope and intercept
m, b, r, p, se = stats.linregress(df['Grade'], df['Salary'])
df['fx'] = (m*df['Grade']) + b
df['error'] = df['fx'] - df['Salary']
print('slope: ' + str(m))
print('y-intercept: ' + str(b))
# Create a scatter plot of Grade vs Salary
df.plot(kind='scatter', title='Grade vs Salary Regression', x='Grade', y='Salary', color='red')
# Plot the regression line
plt.plot(df['Grade'],df['fx'])
plt.show()
# Show the original x,y values, the f(x) value, and the error
df[['Grade', 'Salary', 'fx', 'error']]
Explanation: In this case, we used Python expressions to calculate the slope and y-intercept using the same approach and formula as before. In practice, Python provides great support for statistical operations like this; and you can use the linregress function in the scipy.stats package to retrieve the slope and y-intercept (as well as the correlation, p-value, and standard error) for a matched array of x and y values (we'll discuss p-values later!).
Here's the Python code to calculate the regression line variables using the linregress function:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Salary':[50000,54000,50000,189000,55000,40000,59000,42000,47000,78000,119000,95000,49000,29000,130000],
'Hours':[41,40,36,17,35,39,40,45,41,35,30,33,38,47,24],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
df = df[(df['Grade'] > 5) & (df['Grade'] < 95)]
# Get the regression line slope and intercept
m, b, r, p, se = stats.linregress(df['Grade'], df['Salary'])
df['fx'] = (m*df['Grade']) + b
df['error'] = df['fx'] - df['Salary']
print('slope: ' + str(m))
print('y-intercept: ' + str(b))
# Create a scatter plot of Grade vs Salary
df.plot(kind='scatter', title='Grade vs Salary Regression', x='Grade', y='Salary', color='red')
# Plot the regression line
plt.plot(df['Grade'],df['fx'])
plt.show()
# Show the original x,y values, the f(x) value, and the error
df[['Grade', 'Salary', 'fx', 'error']]
Explanation: Note that the slope and y-intercept values are the same as when we worked them out using the formula.
Similarly to the simple study hours example, the regression line doesn't fit the outliers very well. In this case, the extremes include a student who scored only 5, and a student who scored 95. Let's see what happens if we remove these students from our sample:
End of explanation |
10,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FPE Interface Board Bring-up Procedure
Abstract
Step1: Test Start Date
Step2: Test Conductor Information
Please write down your personal information for accountability purposes
Step3: Unit Under Test
Please record the part number and serial number of the unit under test
Step4: Test Equipment
Note test equipment model #'s and serial #'s
Step5: Prepare for Inspections
1) Verify work area is ESD safe.
Step6: 2) Photograph the front of the assembly. Use a service like Flickr to upload your photo and paste the URL below, in place of the placeholder image.
You will need to double-click the active area to the left of the image to see the embedded link, then delete it and replace it with your new link. For Flickr, use the link given at "Share Photo" then "Embed". Hit shift-enter to run the cell and see the image.
<!-- Delete the mock below and put in a real image -->
<a data-flickr-embed="true" href="https
Step7: Visual Inspection under Stereo Microscope
Step8: 5.a Workmanship and mechanical damage
Step9: 5.b DNP parts not installed
Step10: 5.c No missing components
Step11: 5.d Verify required jumpers installed
TODO
Step12: 5.e Component orientation (chips, polarized caps, diodes, etc)
TODO
Step13: 5.f Verify chips are correct parts (& date codes if specified in design)
TODO
Step14: 5.g Verify connector savers installed if required
TODO
Step15: Power OFF Resistance Measurements
6) With the power off and no external connections to the PCB, measure the resistance on all power lines. All measurements should be referenced to circuit ground.
Step16: 7) With power OFF, measure the resistance to ground of all pins on the JS stack connector.
Set up the dictionary to hold the results
Step17: Now enter the measurement data (replace the word "None"). Note that expected units are given
Step18: Executing the next cell will compare the recorded values above to the expected values, within a margin of +/- 10%. The output should not produce any errors.
Step19: Power On Voltage Measurements
1) Verify that voltages on power connector from DHU Emulator are correct before connecting to test assembly. The relevant FPE 1 & FPE 2 connector pins are wired as follows
Step20: Record the voltage measurements
Step21: Now see if the values are within tolerance
Step22: 2) Connect DHU power cable to test assembly J7. Measure voltages again, along with currents. Current measurements are made on the DHE Emulator front panel using a DMM. The output voltages at the measurement ports have a scale value of 1V/A.
Step23: Record the voltage and current measurements
Step24: 3) Capture FLIR images; check for hot spots. Use a service like Flickr to upload your photo and paste the URL below, in place of the placeholder image.
You will need to double-click the active area to the left of the image to see the embedded link, then delete it and replace it with your new link. For Flickr, use the link given at "Share Photo" then "Embed". Hit shift-enter to run the cell and see the image.
<!-- Delete the mock below and put in a real image -->
<a data-flickr-embed="true" href="https
Step25: Set the operating parameters to their default values
Step26: With Default Parameters Loaded, Continue Power ON Voltage Measurements
Measure voltages on all JS stack connector pins (= Driver safe-to-mate)
Step27: Executing the next cell will compare the recorded values above to the expected values, within a margin of +/- 10%. The output should not produce any errors.
Step28: Verify the image capturing function
Step29: 6) Capture an image and display it. Note that with only the Interface Board connected, the captured image should be comprised entirely of pixels with a value of -1.
Step30: 7) Issue the stop frames command.
Step31: 2) Take a set of housekeeping data.
Step32: <pre>
- FPE Test Procedures
- FPE Bring-up Procedure (Check boxes for board type? Flight or not?)
- Verify work area is ESD safe
- Setup per diagram, take photos
- Note test equipment model #'s and serial #'s
- Standard inspections for all 3 PCB types; capture images
- Weigh assembly, note non-flight configurations
- Visual inspection under stereo microscope
- Workmanship and mechanical damage
- DNP parts not installed
- No missing components
- Verify req'd jumpers installed
- Component orientation (chips, polarized caps, diodes, etc)
- Verify chips are correct parts (& date codes if specified in design)
- Verify connector savers installed if req'd
- Power OFF resistance measurements (compare to reference values)
- Power lines
- Stack connector
- CCD connectors (video only) Maybe delete this? Discuss.
- Temp connector (video only)
- Power ON voltage measurements (compare to reference values/images)
- DHU supply voltages before connection to setup
- DHU supply voltages and currents with setup connected
- Capture FLIR images; check for hot spots
- Program FPGA
- Start frames
- Take raw image as verification that FPGA was programmed OK
- Stop frames
- Take HK data as further verification
- ref values for 3 cases
Step33: Summary
Below is a summary of test results and notes | Python Code:
import random
test_check = {}
Explanation: FPE Interface Board Bring-up Procedure
Abstract: This iPython Notebook contains instructions for the FPE Interface Board PCB Bring-up test flow. This procedure can be used for the Interface Boards, versions 6.2 and 7.0. Simliar iPython Notebooks will be created for the Driver and Video Boards.
Preamble
Below we create a small object for keeping track of which tests have passed. At the end of this document we verify this object to make sure all of the tests and procedures have been validly performed.
<span style="color:red">IF THIS OBJECT DOES NOT VALIDATE THEN THE TESTS HAVE FAILED AND WILL BE INDICATED AT THE END OF THE DOCUMENT</span>
Note that in general, if at some stage in this procedure a test fails, the operator is expected to resolve it through the appropriate NCR/ECO process before proceeding.
Where a cell indicates "None # FILL IN ..." the test conductor should replace this text with the appropriate information.
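For illustration only, a final validation pass over test_check could look something like the sketch below, which walks the nested dictionary and reports any entry still left at None (the helper name is hypothetical; it is not the verification cell actually used at the end of this notebook):
def unfilled_entries(d, path=""):
    # Recursively collect the key paths whose recorded value is still None
    missing = []
    for key, value in d.items():
        key_path = path + "/" + str(key)
        if isinstance(value, dict):
            missing.extend(unfilled_entries(value, key_path))
        elif value is None:
            missing.append(key_path)
    return missing
# Example: unfilled_entries(test_check) lists every measurement the operator has not yet entered.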
End of explanation
test_check["DATE"] = None # TODO: Fill in today's date here as a string in the form "MM/DD/YY"
Explanation: Test Start Date
End of explanation
test_check["NAME"] = None # FILL IN YOUR NAME HERE AS A STRING
test_check["EMAIL"] = None # FILL IN YOUR EMAIL HERE AS A STRING
Explanation: Test Conductor Information
Please write down your personal information for accountability purposes:
End of explanation
test_check["Part Number"] = None # FILL IN THE PART NUMBER HERE AS A STRING
test_check["Serial Number"] = None # FILL IN THE SERIAL NUMBER HERE AS A STRING
Explanation: Unit Under Test
Please record the part number and serial number of the unit under test:
End of explanation
test_check["Equipment"] = {
"Multimeter": { "Model Number": None, # TODO: Enter a string here for the model number
"Serial Number": None # TODO: Enter a string here for the serial number
},
"Oscilloscope": { "Model Number": None, # TODO: Enter a string here for the model number
"Serial Number": None # TODO: Enter a string here for the serial number
},
"DHU Emulator": { "Model Number": None, # TODO: Enter a string here for the model number
"Serial Number": None # TODO: Enter a string here for the serial number
}
}
Explanation: Test Equipment
Note test equipment model #'s and serial #'s
End of explanation
test_check["ESD_SAFE"] = None # TODO: If the area is ESD safe enter 'ESD Safe'
Explanation: Prepare for Inspections
1) Verify work area is ESD safe.
End of explanation
test_check["Assembly Weight"] = None
test_check["Non-Flight Configurations"] = None
Explanation: 2) Photograph the front of the assembly. Use a service like Flickr to upload your photo and paste the URL below, in place of the placeholder image.
You will need to double-click the active area to the left of the image to see the embedded link, then delete it and replace it with your new link. For Flickr, use the link given at "Share Photo" then "Embed". Hit shift-enter to run the cell and see the image.
<!-- Delete the mock below and put in a real image -->
<a data-flickr-embed="true" href="https://www.flickr.com/photos/135953480@N06/22504227741/in/dateposted-public/" title="TESS_Placeholder"><img src="https://farm1.staticflickr.com/627/22504227741_da029de321_m.jpg" width="218" height="218" alt="TESS_Placeholder"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
3) Photograph the back of the assembly and upload it also. Paste the URL below, in place of the placeholder image.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/135953480@N06/22504227741/in/dateposted-public/" title="TESS_Placeholder"><img src="https://farm1.staticflickr.com/627/22504227741_da029de321_m.jpg" width="218" height="218" alt="TESS_Placeholder"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Standard Inspections
Weigh assembly, note any non-flight configurations
End of explanation
test_check["Visual Inspection"] = {}
Explanation: Visual Inspection under Stereo Microscope
End of explanation
test_check["Visual Inspection"]["Mechanical Damage"] = None # TODO: If there is no damage write 'No Damage'
Explanation: 5.a Workmanship and mechanical damage
End of explanation
test_check["Visual Inspection"]["DNP parts not installed"] = None # TODO: If no parts are missing write 'None missing'
Explanation: 5.b DNP parts not installed
End of explanation
test_check["Visual Inspection"]["No Missing Components"] = None # TODO: Set to True if no components are missing
Explanation: 5.c No missing components
End of explanation
test_check["Visual Inspection"]["Required jumpers installed"] = None # TODO: Set to True if all required jumpers are installed
Explanation: 5.d Verify required jumpers installed
TODO: Make a labeled picture of the jumpers
End of explanation
test_check["Visual Inspection"]["Components are oriented correctly"] = None # TODO: Set to True if all components are oriented correctly
Explanation: 5.e Component orientation (chips, polarized caps, diodes, etc)
TODO: Make a labeled picture of component orientation
End of explanation
test_check["Visual Inspection"]["Chips are correct parts"] = None # TODO: Set to True if all chips are correct parts
Explanation: 5.f Verify chips are correct parts (& date codes if specified in design)
TODO: Make a labeled picture with part names
End of explanation
test_check["Visual Inspection"]["Connector Savers Installed"] = None # TODO: Set to True if all connector savers are installed
Explanation: 5.g Verify connector savers installed if required
TODO: Determine where connector savers will be required for flight hardware and make list of connector numbers
End of explanation
test_check["Power Off Measurements"] = {}
test_check["Power Off Measurements"]["GND"] = None
test_check["Power Off Measurements"]["+5V"] = None
test_check["Power Off Measurements"]["+15V"] = None
test_check["Power Off Measurements"]["-12V"] = None
test_check["Power Off Measurements"]["+24"] = None
test_check["Power Off Measurements"]["-50"] = None
test_check["Power Off Measurements"]["+1.8F"] = None
test_check["Power Off Measurements"]["+1F"] = None
test_check["Power Off Measurements"]["+2.5"] = None
test_check["Power Off Measurements"]["+3.3B"] = None
test_check["Power Off Measurements"]["+3.3Dac"] = None
test_check["Power Off Measurements"]["+3.3F"] = None
#TODO: Add check of power off resistance measurements.
Explanation: Power OFF Resistance Measurements
6) With the power off and no external connections to the PCB, measure the resistance on all power lines. All measurements should be referenced to circuit ground.
End of explanation
from collections import defaultdict
test_check["JS stack connector resistances"] = defaultdict(dict)
Explanation: 7) With power OFF, measure the resistance to ground of all pins on the JS stack connector.
Set up the dictionary to hold the results:
End of explanation
test_check["JS stack connector resistances"][1]["GND"] = None # Ohms
test_check["JS stack connector resistances"][2]["SDO-A-1"] = None # MOhms
test_check["JS stack connector resistances"][3]["GND"] = None # Ohms
test_check["JS stack connector resistances"][4]["SDO-B-1"] = None # MOhms
test_check["JS stack connector resistances"][5]["GND"] = None # Ohms
test_check["JS stack connector resistances"][6]["SDO-C-1"] = None # MOhms
test_check["JS stack connector resistances"][7]["GND"] = None # Ohms
test_check["JS stack connector resistances"][8]["SDO-D-1"] = None # MOhms
test_check["JS stack connector resistances"][9]["GND"] = None # Ohms
test_check["JS stack connector resistances"][10]["SCK"] = None # MOhms
test_check["JS stack connector resistances"][11]["GND"] = None # Ohms
test_check["JS stack connector resistances"][12]["CNV"] = None # MOhms
test_check["JS stack connector resistances"][13]["GND"] = None # Ohms
test_check["JS stack connector resistances"][14]["INT"] = None # MOhms
test_check["JS stack connector resistances"][15]["GND"] = None # Ohms
test_check["JS stack connector resistances"][16]["DEINT"] = None # MOhms
test_check["JS stack connector resistances"][17]["GND"] = None # Ohms
test_check["JS stack connector resistances"][18]["CLAMP"] = None # MOhms
test_check["JS stack connector resistances"][19]["GND"] = None # Ohms
test_check["JS stack connector resistances"][20]["CWCLK"] = None # MOhms
test_check["JS stack connector resistances"][21]["GND"] = None # Ohms
test_check["JS stack connector resistances"][22]["DD"] = None # MOhms
test_check["JS stack connector resistances"][23]["GND"] = None # Ohms
test_check["JS stack connector resistances"][24]["DCK"] = None # MOhms
test_check["JS stack connector resistances"][25]["GND"] = None # Ohms
test_check["JS stack connector resistances"][26]["SP1OR"] = None # MOhms
test_check["JS stack connector resistances"][27]["GND"] = None # Ohms
test_check["JS stack connector resistances"][28]["SP2OR"] = None # MOhms
test_check["JS stack connector resistances"][29]["GND"] = None # Ohms
test_check["JS stack connector resistances"][30]["SP3OR"] = None # MOhms
test_check["JS stack connector resistances"][31]["GND"] = None # Ohms
test_check["JS stack connector resistances"][32]["SRG"] = None # MOhms
test_check["JS stack connector resistances"][33]["GND"] = None # Ohms
test_check["JS stack connector resistances"][34]["SID"] = None # MOhms
test_check["JS stack connector resistances"][35]["GND"] = None # Ohms
test_check["JS stack connector resistances"][36]["SP1U"] = None # MOhms
test_check["JS stack connector resistances"][37]["GND"] = None # Ohms
test_check["JS stack connector resistances"][38]["SP2U"] = None # KOhms
test_check["JS stack connector resistances"][39]["GND"] = None # Ohms
test_check["JS stack connector resistances"][40]["SP3U"] = None # MOhms
test_check["JS stack connector resistances"][41]["GND"] = None # Ohms
test_check["JS stack connector resistances"][42]["_DS96_"] = None # Ohms
test_check["JS stack connector resistances"][43]["GND"] = None # MOhms
test_check["JS stack connector resistances"][44]["SDO-A-2"] = None # Ohms
test_check["JS stack connector resistances"][45]["GND"] = None # MOhms
test_check["JS stack connector resistances"][46]["SDO-B-2"] = None # Ohms
test_check["JS stack connector resistances"][47]["GND"] = None # MOhms
test_check["JS stack connector resistances"][48]["SDO-C-2"] = None # Ohms
test_check["JS stack connector resistances"][49]["GND"] = None # MOhms
test_check["JS stack connector resistances"][50]["SDO-D-2"] = None # MOhms
test_check["JS stack connector resistances"][51]["P1-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][52]["P2-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][53]["P3-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][54]["P3-OR-1"] = None # MOhms
test_check["JS stack connector resistances"][55]["P2-OR-1"] = None # MOhms
test_check["JS stack connector resistances"][56]["P1-OR-1"] = None # MOhms
test_check["JS stack connector resistances"][57]["RG-1"] = None # MOhms
test_check["JS stack connector resistances"][58]["P1-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][59]["P2-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][60]["P3-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][61]["P1-U-1"] = None # MOhms
test_check["JS stack connector resistances"][62]["P2-U-1"] = None # MOhms
test_check["JS stack connector resistances"][63]["P3-U-1"] = None # MOhms
test_check["JS stack connector resistances"][64]["ID-1"] = None # MOhms
test_check["JS stack connector resistances"][65]["SP1-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][66]["SP2-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][67]["SP3-IA-1"] = None # MOhms
test_check["JS stack connector resistances"][68]["SP1-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][69]["SP2-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][70]["SP3-FS-1"] = None # MOhms
test_check["JS stack connector resistances"][71]["HK0"] = None # MOhms
test_check["JS stack connector resistances"][72]["HK8"] = None # MOhms
test_check["JS stack connector resistances"][73]["HK16"] = None # MOhms
test_check["JS stack connector resistances"][74]["HK24"] = None # MOhms
test_check["JS stack connector resistances"][75]["HK32"] = None # MOhms
test_check["JS stack connector resistances"][76]["HK40"] = None # MOhms
test_check["JS stack connector resistances"][77]["HK48"] = None # MOhms
test_check["JS stack connector resistances"][78]["HK56"] = None # MOhms
test_check["JS stack connector resistances"][79]["HK64"] = None # MOhms
test_check["JS stack connector resistances"][80]["HK72"] = None # MOhms
test_check["JS stack connector resistances"][81]["SP1-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][82]["SP2-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][83]["SP3-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][84]["SP1-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][85]["SP2-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][86]["SP3-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][87]["P1-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][88]["P2-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][89]["P3-FS-2"] = None # MOhms
test_check["JS stack connector resistances"][90]["P3-OR-2"] = None # MOhms
test_check["JS stack connector resistances"][91]["P2-OR-2"] = None # MOhms
test_check["JS stack connector resistances"][92]["P1-OR-2"] = None # MOhms
test_check["JS stack connector resistances"][93]["RG-2"] = None # MOhms
test_check["JS stack connector resistances"][94]["P1-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][95]["P2-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][96]["P3-IA-2"] = None # MOhms
test_check["JS stack connector resistances"][97]["P1-U-2"] = None # MOhms
test_check["JS stack connector resistances"][98]["P2-U-2"] = None # MOhms
test_check["JS stack connector resistances"][99]["P3-U-2"] = None # MOhms
test_check["JS stack connector resistances"][100]["ID-2"] = None # MOhms
test_check["JS stack connector resistances"][101]["ID-4"] = None # OL
test_check["JS stack connector resistances"][102]["P3-U-4"] = None # OL
test_check["JS stack connector resistances"][103]["P2-U-4"] = None # OL
test_check["JS stack connector resistances"][104]["P1-U-4"] = None # OL
test_check["JS stack connector resistances"][105]["P3-IA-4"] = None # OL
test_check["JS stack connector resistances"][106]["P2-IA-4"] = None # OL
test_check["JS stack connector resistances"][107]["P1-IA-4"] = None # OL
test_check["JS stack connector resistances"][108]["RG-4"] = None # OL
test_check["JS stack connector resistances"][109]["P1-OR-4"] = None # OL
test_check["JS stack connector resistances"][110]["P2-OR-4"] = None # OL
test_check["JS stack connector resistances"][111]["P3-OR-4"] = None # OL
test_check["JS stack connector resistances"][112]["P3-FS-4"] = None # OL
test_check["JS stack connector resistances"][113]["P2-FS-4"] = None # OL
test_check["JS stack connector resistances"][114]["P1-FS-4"] = None # OL
test_check["JS stack connector resistances"][115]["SP3-FS-4"] = None # MOhms
test_check["JS stack connector resistances"][116]["SP2-FS-4"] = None # MOhms
test_check["JS stack connector resistances"][117]["SP1-FS-4"] = None # MOhms
test_check["JS stack connector resistances"][118]["SP3-IA-4"] = None # MOhms
test_check["JS stack connector resistances"][119]["SP2-IA-4"] = None # MOhms
test_check["JS stack connector resistances"][120]["SP1-IA-4"] = None # MOhms
test_check["JS stack connector resistances"][121]["HK80"] = None # MOhms
test_check["JS stack connector resistances"][122]["HK88"] = None # MOhms
test_check["JS stack connector resistances"][123]["HK96"] = None # MOhms
test_check["JS stack connector resistances"][124]["HK104"] = None # MOhms
test_check["JS stack connector resistances"][125]["HK112"] = None # MOhms
test_check["JS stack connector resistances"][126]["HK120"] = None # MOhms
test_check["JS stack connector resistances"][127]["HKA0"] = None # MOhms
test_check["JS stack connector resistances"][128]["HKA1"] = None # MOhms
test_check["JS stack connector resistances"][129]["HKA2"] = None # MOhms
test_check["JS stack connector resistances"][130]["HKCOM"] = None # MOhms
test_check["JS stack connector resistances"][131]["SP3-FS-3"] = None # MOhms
test_check["JS stack connector resistances"][132]["SP2-FS-3"] = None # MOhms
test_check["JS stack connector resistances"][133]["SP1-FS-3"] = None # MOhms
test_check["JS stack connector resistances"][134]["SP3-IA-3"] = None # MOhms
test_check["JS stack connector resistances"][135]["SP2-IA-3"] = None # MOhms
test_check["JS stack connector resistances"][136]["SP1-IA-3"] = None # MOhms
test_check["JS stack connector resistances"][137]["ID-3"] = None # OL
test_check["JS stack connector resistances"][138]["P3-U-3"] = None # OL
test_check["JS stack connector resistances"][139]["P2-U-3"] = None # OL
test_check["JS stack connector resistances"][140]["P1-U-3"] = None # OL
test_check["JS stack connector resistances"][141]["P3-IA-3"] = None # OL
test_check["JS stack connector resistances"][142]["P2-IA-3"] = None # OL
test_check["JS stack connector resistances"][143]["P1-IA-3"] = None # OL
test_check["JS stack connector resistances"][144]["RG-3"] = None # OL
test_check["JS stack connector resistances"][145]["P1-OR-3"] = None # OL
test_check["JS stack connector resistances"][146]["P2-OR-3"] = None # OL
test_check["JS stack connector resistances"][147]["P3-OR-3"] = None # OL
test_check["JS stack connector resistances"][148]["P3-FS-3"] = None # OL
test_check["JS stack connector resistances"][149]["P2-FS-3"] = None # OL
test_check["JS stack connector resistances"][150]["P1-FS-3"] = None # OL
test_check["JS stack connector resistances"][151]["GND"] = None # Ohms
test_check["JS stack connector resistances"][152]["SDO-D-4"] = None # MOhms
test_check["JS stack connector resistances"][153]["GND"] = None # Ohms
test_check["JS stack connector resistances"][154]["SDO-C-4"] = None # MOhms
test_check["JS stack connector resistances"][155]["GND"] = None # Ohms
test_check["JS stack connector resistances"][156]["SDO-B-4"] = None # MOhms
test_check["JS stack connector resistances"][157]["GND"] = None # Ohms
test_check["JS stack connector resistances"][158]["SDO-A-4"] = None # MOhms
test_check["JS stack connector resistances"][159]["GND"] = None # Ohms
test_check["JS stack connector resistances"][160]["_DS0_"] = None # MOhms
test_check["JS stack connector resistances"][161]["GND"] = None # Ohms
test_check["JS stack connector resistances"][162]["_DS8_"] = None # MOhms
test_check["JS stack connector resistances"][163]["GND"] = None # Ohms
test_check["JS stack connector resistances"][164]["_DS16_"] = None # MOhms
test_check["JS stack connector resistances"][165]["GND"] = None # Ohms
test_check["JS stack connector resistances"][166]["_DS24_"] = None # MOhms
test_check["JS stack connector resistances"][167]["GND"] = None # Ohms
test_check["JS stack connector resistances"][168]["_DS32_"] = None # MOhms
test_check["JS stack connector resistances"][169]["GND"] = None # Ohms
test_check["JS stack connector resistances"][170]["_DS40_"] = None # MOhms
test_check["JS stack connector resistances"][171]["GND"] = None # Ohms
test_check["JS stack connector resistances"][172]["Spare1"] = None # OL
test_check["JS stack connector resistances"][173]["GND"] = None # Ohms
test_check["JS stack connector resistances"][174]["RTDCOM"] = None # MOhms
test_check["JS stack connector resistances"][175]["GND"] = None # Ohms
test_check["JS stack connector resistances"][176]["15"] = None # MOhms
test_check["JS stack connector resistances"][177]["GND"] = None # Ohms
test_check["JS stack connector resistances"][178]["-12"] = None # MOhms
test_check["JS stack connector resistances"][179]["GND"] = None # Ohms
test_check["JS stack connector resistances"][180]["5"] = None # MOhms
test_check["JS stack connector resistances"][181]["GND"] = None # Ohms
test_check["JS stack connector resistances"][182]["_DS48_"] = None # MOhms
test_check["JS stack connector resistances"][183]["GND"] = None # Ohms
test_check["JS stack connector resistances"][184]["_DS56_"] = None # MOhms
test_check["JS stack connector resistances"][185]["GND"] = None # Ohms
test_check["JS stack connector resistances"][186]["_DS64_"] = None # MOhms
test_check["JS stack connector resistances"][187]["GND"] = None # Ohms
test_check["JS stack connector resistances"][188]["_DS72_"] = None # MOhms
test_check["JS stack connector resistances"][189]["GND"] = None # Ohms
test_check["JS stack connector resistances"][190]["_DS80_"] = None # MOhms
test_check["JS stack connector resistances"][191]["GND"] = None # Ohms
test_check["JS stack connector resistances"][192]["_DS88_"] = None # MOhms
test_check["JS stack connector resistances"][193]["GND"] = None # Ohms
test_check["JS stack connector resistances"][194]["SDO-D-3"] = None # MOhms
test_check["JS stack connector resistances"][195]["GND"] = None # Ohms
test_check["JS stack connector resistances"][196]["SDO-C-3"] = None # MOhms
test_check["JS stack connector resistances"][197]["GND"] = None # Ohms
test_check["JS stack connector resistances"][198]["SDO-B-3"] = None # MOhms
test_check["JS stack connector resistances"][199]["GND"] = None # Ohms
test_check["JS stack connector resistances"][200]["SDO-A-3"] = None # MOhms
Explanation: Now enter the measurement data (replace the word "None"). Note that expected units are given:
End of explanation
assert test_check["JS stack connector resistances"][1]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][2]["SDO-A-2"] <= 1.98
assert test_check["JS stack connector resistances"][3]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][4]["SDO-B-1"] <= 1.98
assert test_check["JS stack connector resistances"][5]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][6]["SDO-C-1"] <= 1.98
assert test_check["JS stack connector resistances"][7]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][8]["SDO-D-1"] <= 1.98
assert test_check["JS stack connector resistances"][9]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][10]["SCK"] <= 1.98
assert test_check["JS stack connector resistances"][11]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][12]["CNV"] <= 1.98
assert test_check["JS stack connector resistances"][13]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][14]["INT"] <= 1.98
assert test_check["JS stack connector resistances"][15]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][16]["DEINT"] <= 1.98
assert test_check["JS stack connector resistances"][17]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][18]["CLAMP"] <= 1.98
assert test_check["JS stack connector resistances"][19]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][20]["CWCLK"] <= 1.98
assert test_check["JS stack connector resistances"][21]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][22]["DD"] <= 1.98
assert test_check["JS stack connector resistances"][23]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][24]["DCK"] <= 1.98
assert test_check["JS stack connector resistances"][25]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][26]["SP1OR"] <= 1.98
assert test_check["JS stack connector resistances"][27]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][28]["SP2OR"] <= 1.98
assert test_check["JS stack connector resistances"][29]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][30]["SP3OR"] <= 1.98
assert test_check["JS stack connector resistances"][31]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][32]["SRG"] <= 1.98
assert test_check["JS stack connector resistances"][33]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][34]["SID"] <= 1.98
assert test_check["JS stack connector resistances"][35]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][36]["SP1U"] <= 1.98
assert test_check["JS stack connector resistances"][37]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][38]["SP2U"] <= 1.98
assert test_check["JS stack connector resistances"][39]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][40]["SP3U"] <= 1.98
assert test_check["JS stack connector resistances"][41]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][42]["_DS96_"] <= 1.98
assert test_check["JS stack connector resistances"][43]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][44]["SDO-A-2"] <= 1.98
assert test_check["JS stack connector resistances"][45]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][46]["SDO-B-2"] <= 1.98
assert test_check["JS stack connector resistances"][47]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][48]["SDO-C-2"] <= 1.98
assert test_check["JS stack connector resistances"][49]["GND"] <= 0.2
assert 1.62 <= test_check["JS stack connector resistances"][50]["SDO-D-2"] <= 1.98
assert test_check["JS stack connector resistances"][51]["P1-FS-1"] >= 5.4
assert test_check["JS stack connector resistances"][52]["P2-FS-1"] >= 22
assert 21.1 <= test_check["JS stack connector resistances"][53]["P3-FS-1"] <= 25.7
assert 7 <= test_check["JS stack connector resistances"][54]["P3-OR-1"] <= 8.4
assert 7 <= test_check["JS stack connector resistances"][55]["P2-OR-1"] <= 8.4
assert 7 <= test_check["JS stack connector resistances"][56]["P1-OR-1"] <= 8.4
assert 7.6 <= test_check["JS stack connector resistances"][57]["RG-1"] <= 9.3
assert test_check["JS stack connector resistances"][58]["P1-IA-1"] >= 22
assert test_check["JS stack connector resistances"][59]["P2-IA-1"] >= 22
assert test_check["JS stack connector resistances"][60]["P3-IA-1"] >= 22
assert 7.2 <= test_check["JS stack connector resistances"][61]["P1-U-1"] <= 8.8
assert 7.2 <= test_check["JS stack connector resistances"][62]["P2-U-1"] <= 8.8
assert 7.2 <= test_check["JS stack connector resistances"][63]["P3-U-1"] <= 8.8
assert 7.2 <= test_check["JS stack connector resistances"][64]["ID-1"] <= 8.8
assert 1.62 <= test_check["JS stack connector resistances"][65]["SP1-IA-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][66]["SP2-IA-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][67]["SP3-IA-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][68]["SP1-FS-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][69]["SP2-FS-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][70]["SP3-FS-1"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][71]["HK0"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][72]["HK8"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][73]["HK16"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][74]["HK24"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][75]["HK32"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][76]["HK40"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][77]["HK48"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][78]["HK56"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][79]["HK64"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][80]["HK72"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][81]["SP1-IA-2"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][82]["SP2-IA-2"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][83]["SP3-IA-2"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][84]["SP1-FS-2"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][85]["SP2-FS-2"] <= 1.98
assert 1.62 <= test_check["JS stack connector resistances"][86]["SP3-FS-2"] <= 1.98
assert 8 <= test_check["JS stack connector resistances"][87]["P1-FS-2"] <= 12
assert 8 <= test_check["JS stack connector resistances"][88]["P2-FS-2"] <= 12
assert 8 <= test_check["JS stack connector resistances"][89]["P3-FS-2"] <= 12
assert 2.5 <= test_check["JS stack connector resistances"][90]["P3-OR-2"] <= 3.1
assert 2.5 <= test_check["JS stack connector resistances"][91]["P2-OR-2"] <= 3.1
assert 2.5 <= test_check["JS stack connector resistances"][92]["P1-OR-2"] <= 3.1
assert 1.2 <= test_check["JS stack connector resistances"][93]["RG-2"] <=1.45
assert 8 <= test_check["JS stack connector resistances"][94]["P1-IA-2"] <= 12
assert 8 <= test_check["JS stack connector resistances"][95]["P2-IA-2"] <= 12
assert 8 <= test_check["JS stack connector resistances"][96]["P3-IA-2"] <= 12
assert 2.5 <= test_check["JS stack connector resistances"][97]["P1-U-2"] <= 3.1
assert 2.5 <= test_check["JS stack connector resistances"][98]["P2-U-2"] <= 3.1
assert 2.5 <= test_check["JS stack connector resistances"][99]["P3-U-2"] <= 3.1
assert 1.35 <= test_check["JS stack connector resistances"][100]["ID-2"] <= 1.65
assert test_check["JS stack connector resistances"][101]["ID-4"] == "OL"
assert test_check["JS stack connector resistances"][102]["P3-U-4"] == "OL"
assert test_check["JS stack connector resistances"][103]["P2-U-4"] == "OL"
assert test_check["JS stack connector resistances"][104]["P1-U-4"] == "OL"
assert test_check["JS stack connector resistances"][105]["P3-IA-4"] == "OL"
assert test_check["JS stack connector resistances"][106]["P2-IA-4"] == "OL"
assert test_check["JS stack connector resistances"][107]["P1-IA-4"] == "OL"
assert test_check["JS stack connector resistances"][108]["RG-4"] == "OL"
assert test_check["JS stack connector resistances"][109]["P1-OR-4"] == "OL"
assert test_check["JS stack connector resistances"][110]["P2-OR-4"] == "OL"
assert test_check["JS stack connector resistances"][111]["P3-OR-4"] == "OL"
assert test_check["JS stack connector resistances"][112]["P3-FS-4"] == "OL"
assert test_check["JS stack connector resistances"][113]["P2-FS-4"] == "OL"
assert test_check["JS stack connector resistances"][114]["P1-FS-4"] == "OL"
assert 1.5 <= test_check["JS stack connector resistances"][115]["SP3-FS-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][116]["SP2-FS-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][117]["SP1-FS-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][118]["SP3-IA-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][119]["SP2-IA-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][120]["SP1-IA-4"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][121]["HK80"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][122]["HK88"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][123]["HK96"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][124]["HK104"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][125]["HK112"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][126]["HK120"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][127]["HKA0"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][128]["HKA1"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][129]["HKA2"] <= 1.9
assert 315 <= test_check["JS stack connector resistances"][130]["HKCOM"] <= 385
assert 1.5 <= test_check["JS stack connector resistances"][131]["SP3-FS-3"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][132]["SP2-FS-3"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][133]["SP1-FS-3"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][134]["SP3-IA-3"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][135]["SP2-IA-3"] <= 1.9
assert 1.5 <= test_check["JS stack connector resistances"][136]["SP1-IA-3"] <= 1.9
assert test_check["JS stack connector resistances"][137]["ID-3"] == "OL"
assert test_check["JS stack connector resistances"][138]["P3-U-3"] == "OL"
assert test_check["JS stack connector resistances"][139]["P2-U-3"] == "OL"
assert test_check["JS stack connector resistances"][140]["P1-U-3"] == "OL"
assert test_check["JS stack connector resistances"][141]["P3-IA-3"] == "OL"
assert test_check["JS stack connector resistances"][142]["P2-IA-3"] == "OL"
assert test_check["JS stack connector resistances"][143]["P1-IA-3"] == "OL"
assert test_check["JS stack connector resistances"][144]["RG-3"] == "OL"
assert test_check["JS stack connector resistances"][145]["P1-OR-3"] == "OL"
assert test_check["JS stack connector resistances"][146]["P2-OR-3"] == "OL"
assert test_check["JS stack connector resistances"][147]["P3-OR-3"] == "OL"
assert test_check["JS stack connector resistances"][148]["P3-FS-3"] == "OL"
assert test_check["JS stack connector resistances"][149]["P2-FS-3"] == "OL"
assert test_check["JS stack connector resistances"][150]["P1-FS-3"] == "OL"
assert test_check["JS stack connector resistances"][151]["GND"] <= 0.2
assert 1.35 <= test_check["JS stack connector resistances"][152]["SDO-D-4"] <= 1.65
assert test_check["JS stack connector resistances"][153]["GND"] <= 0.2
assert 1.35 <= test_check["JS stack connector resistances"][154]["SDO-C-4"] <= 1.65
assert test_check["JS stack connector resistances"][155]["GND"] <= 0.2
assert 1.35 <= test_check["JS stack connector resistances"][156]["SDO-B-4"] <= 1.65
assert test_check["JS stack connector resistances"][157]["GND"] <= 0.2
assert 1.35 <= test_check["JS stack connector resistances"][158]["SDO-A-4"] <= 1.65
assert test_check["JS stack connector resistances"][159]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][160]["_DS0_"] <= 1.9
assert test_check["JS stack connector resistances"][161]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][162]["_DS8_"] <= 1.9
assert test_check["JS stack connector resistances"][163]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][164]["_DS16_"] <= 1.9
assert test_check["JS stack connector resistances"][165]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][166]["_DS24_"] <= 1.9
assert test_check["JS stack connector resistances"][167]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][168]["_DS32_"] <= 1.9
assert test_check["JS stack connector resistances"][169]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][170]["_DS40_"] <= 1.9
assert test_check["JS stack connector resistances"][171]["GND"] <= 0.2
assert test_check["JS stack connector resistances"][172]["Spare1"] == "OL"
assert test_check["JS stack connector resistances"][173]["GND"] <= 0.2
assert 3.6 <= test_check["JS stack connector resistances"][174]["RTDCOM"] <= 4.4
assert test_check["JS stack connector resistances"][175]["GND"] <= 0.2
assert test_check["JS stack connector resistances"][176]["15"] >= 2
assert test_check["JS stack connector resistances"][177]["GND"] <= 0.2
assert 5.4 <= test_check["JS stack connector resistances"][178]["-12"] <= 6.6
assert test_check["JS stack connector resistances"][179]["GND"] <= 0.2
assert test_check["JS stack connector resistances"][180]["5"] >=2
assert test_check["JS stack connector resistances"][181]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][182]["_DS48_"] <= 1.9
assert test_check["JS stack connector resistances"][183]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][184]["_DS56_"] <= 1.9
assert test_check["JS stack connector resistances"][185]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][186]["_DS64_"] <= 1.9
assert test_check["JS stack connector resistances"][187]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][188]["_DS72_"] <= 1.9
assert test_check["JS stack connector resistances"][189]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][190]["_DS80_"] <= 1.9
assert test_check["JS stack connector resistances"][191]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][192]["_DS88_"] <= 1.9
assert test_check["JS stack connector resistances"][193]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][194]["SDO-D-3"] <= 1.9
assert test_check["JS stack connector resistances"][195]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][196]["SDO-C-3"] <= 1.9
assert test_check["JS stack connector resistances"][197]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][198]["SDO-B-3"] <= 1.9
assert test_check["JS stack connector resistances"][199]["GND"] <= 0.2
assert 1.5 <= test_check["JS stack connector resistances"][200]["SDO-A-3"] <= 1.9
Explanation: Executing the next cell will compare the recorded values above to the expected values, within a margin of +/- 10%. The output should not produce any errors.
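For reference, a +/- 10% window around a nominal value amounts to a check of the form shown below; the asserts in this notebook simply spell the limits out per pin rather than using a helper like this (the function name is illustrative only):
def within_tolerance(measured, nominal, tol=0.10):
    # True when measured lies within +/- tol (as a fraction) of nominal
    return abs(measured - nominal) <= tol * abs(nominal)
# e.g. the hard-coded 1.62 - 1.98 MOhm range corresponds to a nominal 1.8 MOhm +/- 10%.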
End of explanation
test_check["Power On Measurements"]={}
Explanation: Power On Voltage Measurements
1) Verify that voltages on power connector from DHU Emulator are correct before connecting to test assembly. The relevant FPE 1 & FPE 2 connector pins are wired as follows:
pin 1 & 5: GND
pin 2 & 7: +15V
pin 3 & 8: +5V
pin 4 & 9: -12V
(pin 6 is NC in 6.2 Interface design, but DHU FPE1 connector pin 6 is wired to FPE2 connector pin 6.)
Set up the dictionary to hold the results:
End of explanation
test_check["Power On Measurements"]["+5V"] = None # Volts
test_check["Power On Measurements"]["+15V"] = None # Volts
test_check["Power On Measurements"]["-12V"] = None # Volts
Explanation: Record the voltage measurements:
End of explanation
assert 4.75 <= test_check["Power On Measurements"]["+5V"] <= 5.25
assert 14.25 <= test_check["Power On Measurements"]["+15V"] <= 15.75
assert -12.6 <= test_check["Power On Measurements"]["-12V"] <= -11.4
Explanation: Now see if the values are within tolerance:
End of explanation
test_check["Power On Measurements Connected"]={}
Explanation: 2) Connect DHU power cable to test assembly J7. Measure voltages again, along with currents. Current measurements are made on the DHU Emulator front panel using a DMM. The output voltages at the measurement ports have a scale value of 1V/A.
End of explanation
test_check["Power On Measurements Connected"]["+5V"] = None # Volts
test_check["Power On Measurements Connected"]["+5V Current"] = None # milliamps
test_check["Power On Measurements Connected"]["+15V"] = None # Volts
test_check["Power On Measurements Connected"]["+15V Current"] = None # milliamps
test_check["Power On Measurements Connected"]["-12V"] = None # Volts
test_check["Power On Measurements Connected"]["12V Current"] = None # milliamps
assert 4.75 <= test_check["Power On Measurements"]["+5V"] <= 5.25
assert 187 <= test_check["Power On Measurements Connected"]["+5V Current"] <= 207
assert 14.25 <= test_check["Power On Measurements"]["+15V"] <= 15.75
assert 12.8 <= test_check["Power On Measurements Connected"]["+15V Current"] <= 14.2
assert -12.6 <= test_check["Power On Measurements"]["-12V"] <= -11.4
assert 24.1 <= test_check["Power On Measurements Connected"]["12V Current"] <= 26.7
Explanation: Record the voltage and current measurements:
End of explanation
from tessfpe.dhu.fpe import FPE
from tessfpe.dhu.unit_tests import check_house_keeping_voltages
fpe1 = FPE(1, debug=False, preload=True, FPE_Wrapper_version='6.1.1')
print fpe1.version
if check_house_keeping_voltages(fpe1):
    print "Wrapper load complete. Interface voltages OK."
Explanation: 3) Capture FLIR images; check for hot spots. Use a service like Flickr to upload your photo and paste the URL below, in place of the placeholder image.
You will need to double-click the active area to the left of the image to see the embedded link, then delete it and replace it with your new link. For Flickr, use the link given at "Share Photo" then "Embed". Hit shift-enter to run the cell and see the image.
<!-- Delete the mock below and put in a real image -->
<a data-flickr-embed="true" href="https://www.flickr.com/photos/135953480@N06/22504227741/in/dateposted-public/" title="TESS_Placeholder"><img src="https://farm1.staticflickr.com/627/22504227741_da029de321_m.jpg" width="218" height="218" alt="TESS_Placeholder"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Testing basic FPGA functions and Loading Default Parameters
Start the Observatory Simulator and Load the FPE FPGA
Remember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.
When you are running this notebook and it has not been power cycled, you should set preload=False.
Make sure that FPE has both power and data cables connected, and make sure that the FPE is connected to the "FPE 1" socket on the observatory simulator. Execute the cell below to program the FPGA:
End of explanation
from tessfpe.data.operating_parameters import operating_parameters
def set_fpe_defaults(fpe):
    "Set the FPE to the default operating parameters and return a dictionary of the default values"
    defaults = {}
    for k in range(len(fpe.ops.address)):
        if fpe.ops.address[k] is None:
            continue
        fpe.ops.address[k].value = fpe.ops.address[k].default
        defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default
    return defaults
# Print out the default values
set_fpe_defaults(fpe1)
Explanation: Set the operating parameters to their default values:
End of explanation
from collections import defaultdict
test_check["JS stack connector voltages"] = defaultdict(dict)
test_check["JS stack connector voltages"][1]["GND"] = None
test_check["JS stack connector voltages"][2]["SDO-A-1"] = None
test_check["JS stack connector voltages"][3]["GND"] = None
test_check["JS stack connector voltages"][4]["SDO-B-1"] = None
test_check["JS stack connector voltages"][5]["GND"] = None
test_check["JS stack connector voltages"][6]["SDO-C-1"] = None
test_check["JS stack connector voltages"][7]["GND"] = None
test_check["JS stack connector voltages"][8]["SDO-D-1"] = None
test_check["JS stack connector voltages"][9]["GND"] = None
test_check["JS stack connector voltages"][10]["SCK"] = None
test_check["JS stack connector voltages"][11]["GND"] = None
test_check["JS stack connector voltages"][12]["CNV"] = None
test_check["JS stack connector voltages"][13]["GND"] = None
test_check["JS stack connector voltages"][14]["INT"] = None
test_check["JS stack connector voltages"][15]["GND"] = None
test_check["JS stack connector voltages"][16]["DEINT"] = None
test_check["JS stack connector voltages"][17]["GND"] = None
test_check["JS stack connector voltages"][18]["CLAMP"] = None
test_check["JS stack connector voltages"][19]["GND"] = None
test_check["JS stack connector voltages"][20]["CWCLK"] = None
test_check["JS stack connector voltages"][21]["GND"] = None
test_check["JS stack connector voltages"][22]["DD"] = None
test_check["JS stack connector voltages"][23]["GND"] = None
test_check["JS stack connector voltages"][24]["DCK"] = None
test_check["JS stack connector voltages"][25]["GND"] = None
test_check["JS stack connector voltages"][26]["SP1OR"] = None
test_check["JS stack connector voltages"][27]["GND"] = None
test_check["JS stack connector voltages"][28]["SP2OR"] = None
test_check["JS stack connector voltages"][29]["GND"] = None
test_check["JS stack connector voltages"][30]["SP3OR"] = None
test_check["JS stack connector voltages"][31]["GND"] = None
test_check["JS stack connector voltages"][32]["SRG"] = None
test_check["JS stack connector voltages"][33]["GND"] = None
test_check["JS stack connector voltages"][34]["SID"] = None
test_check["JS stack connector voltages"][35]["GND"] = None
test_check["JS stack connector voltages"][36]["SP1U"] = None
test_check["JS stack connector voltages"][37]["GND"] = None
test_check["JS stack connector voltages"][38]["SP2U"] = None
test_check["JS stack connector voltages"][39]["GND"] = None
test_check["JS stack connector voltages"][40]["SP3U"] = None
test_check["JS stack connector voltages"][41]["GND"] = None
test_check["JS stack connector voltages"][42]["_DS96_"] = None
test_check["JS stack connector voltages"][43]["GND"] = None
test_check["JS stack connector voltages"][44]["SDO-A-2"] = None
test_check["JS stack connector voltages"][45]["GND"] = None
test_check["JS stack connector voltages"][46]["SDO-B-2"] = None
test_check["JS stack connector voltages"][47]["GND"] = None
test_check["JS stack connector voltages"][48]["SDO-C-2"] = None
test_check["JS stack connector voltages"][49]["GND"] = None
test_check["JS stack connector voltages"][50]["SDO-D-2"] = None
test_check["JS stack connector voltages"][51]["P1-FS-1"] = None
test_check["JS stack connector voltages"][52]["P2-FS-1"] = None
test_check["JS stack connector voltages"][53]["P3-FS-1"] = None
test_check["JS stack connector voltages"][54]["P3-OR-1"] = None
test_check["JS stack connector voltages"][55]["P2-OR-1"] = None
test_check["JS stack connector voltages"][56]["P1-OR-1"] = None
test_check["JS stack connector voltages"][57]["RG-1"] = None
test_check["JS stack connector voltages"][58]["P1-IA-1"] = None
test_check["JS stack connector voltages"][59]["P2-IA-1"] = None
test_check["JS stack connector voltages"][60]["P3-IA-1"] = None
test_check["JS stack connector voltages"][61]["P1-U-1"] = None
test_check["JS stack connector voltages"][62]["P2-U-1"] = None
test_check["JS stack connector voltages"][63]["P3-U-1"] = None
test_check["JS stack connector voltages"][64]["ID-1"] = None
test_check["JS stack connector voltages"][65]["SP1-IA-1"] = None
test_check["JS stack connector voltages"][66]["SP2-IA-1"] = None
test_check["JS stack connector voltages"][67]["SP3-IA-1"] = None
test_check["JS stack connector voltages"][68]["SP1-FS-1"] = None
test_check["JS stack connector voltages"][69]["SP2-FS-1"] = None
test_check["JS stack connector voltages"][70]["SP3-FS-1"] = None
test_check["JS stack connector voltages"][71]["HK0"] = None
test_check["JS stack connector voltages"][72]["HK8"] = None
test_check["JS stack connector voltages"][73]["HK16"] = None
test_check["JS stack connector voltages"][74]["HK24"] = None
test_check["JS stack connector voltages"][75]["HK32"] = None
test_check["JS stack connector voltages"][76]["HK40"] = None
test_check["JS stack connector voltages"][77]["HK48"] = None
test_check["JS stack connector voltages"][78]["HK56"] = None
test_check["JS stack connector voltages"][79]["HK64"] = None
test_check["JS stack connector voltages"][80]["HK72"] = None
test_check["JS stack connector voltages"][81]["SP1-IA-2"] = None
test_check["JS stack connector voltages"][82]["SP2-IA-2"] = None
test_check["JS stack connector voltages"][83]["SP3-IA-2"] = None
test_check["JS stack connector voltages"][84]["SP1-FS-2"] = None
test_check["JS stack connector voltages"][85]["SP2-FS-2"] = None
test_check["JS stack connector voltages"][86]["SP3-FS-2"] = None
test_check["JS stack connector voltages"][87]["P1-FS-2"] = None
test_check["JS stack connector voltages"][88]["P2-FS-2"] = None
test_check["JS stack connector voltages"][89]["P3-FS-2"] = None
test_check["JS stack connector voltages"][90]["P3-OR-2"] = None
test_check["JS stack connector voltages"][91]["P2-OR-2"] = None
test_check["JS stack connector voltages"][92]["P1-OR-2"] = None
test_check["JS stack connector voltages"][93]["RG-2"] = None
test_check["JS stack connector voltages"][94]["P1-IA-2"] = None
test_check["JS stack connector voltages"][95]["P2-IA-2"] = None
test_check["JS stack connector voltages"][96]["P3-IA-2"] = None
test_check["JS stack connector voltages"][97]["P1-U-2"] = None
test_check["JS stack connector voltages"][98]["P2-U-2"] = None
test_check["JS stack connector voltages"][99]["P3-U-2"] = None
test_check["JS stack connector voltages"][100]["ID-2"] = None
test_check["JS stack connector voltages"][101]["ID-4"] = None
test_check["JS stack connector voltages"][102]["P3-U-4"] = None
test_check["JS stack connector voltages"][103]["P2-U-4"] = None
test_check["JS stack connector voltages"][104]["P1-U-4"] = None
test_check["JS stack connector voltages"][105]["P3-IA-4"] = None
test_check["JS stack connector voltages"][106]["P2-IA-4"] = None
test_check["JS stack connector voltages"][107]["P1-IA-4"] = None
test_check["JS stack connector voltages"][108]["RG-4"] = None
test_check["JS stack connector voltages"][109]["P1-OR-4"] = None
test_check["JS stack connector voltages"][110]["P2-OR-4"] = None
test_check["JS stack connector voltages"][111]["P3-OR-4"] = None
test_check["JS stack connector voltages"][112]["P3-FS-4"] = None
test_check["JS stack connector voltages"][113]["P2-FS-4"] = None
test_check["JS stack connector voltages"][114]["P1-FS-4"] = None
test_check["JS stack connector voltages"][115]["SP3-FS-4"] = None
test_check["JS stack connector voltages"][116]["SP2-FS-4"] = None
test_check["JS stack connector voltages"][117]["SP1-FS-4"] = None
test_check["JS stack connector voltages"][118]["SP3-IA-4"] = None
test_check["JS stack connector voltages"][119]["SP2-IA-4"] = None
test_check["JS stack connector voltages"][120]["SP1-IA-4"] = None
test_check["JS stack connector voltages"][121]["HK80"] = None
test_check["JS stack connector voltages"][122]["HK88"] = None
test_check["JS stack connector voltages"][123]["HK96"] = None
test_check["JS stack connector voltages"][124]["HK104"] = None
test_check["JS stack connector voltages"][125]["HK112"] = None
test_check["JS stack connector voltages"][126]["HK120"] = None
test_check["JS stack connector voltages"][127]["HKA0"] = None
test_check["JS stack connector voltages"][128]["HKA1"] = None
test_check["JS stack connector voltages"][129]["HKA2"] = None
test_check["JS stack connector voltages"][130]["HKCOM"] = None
test_check["JS stack connector voltages"][131]["SP3-FS-3"] = None
test_check["JS stack connector voltages"][132]["SP2-FS-3"] = None
test_check["JS stack connector voltages"][133]["SP1-FS-3"] = None
test_check["JS stack connector voltages"][134]["SP3-IA-3"] = None
test_check["JS stack connector voltages"][135]["SP2-IA-3"] = None
test_check["JS stack connector voltages"][136]["SP1-IA-3"] = None
test_check["JS stack connector voltages"][137]["ID-3"] = None
test_check["JS stack connector voltages"][138]["P3-U-3"] = None
test_check["JS stack connector voltages"][139]["P2-U-3"] = None
test_check["JS stack connector voltages"][140]["P1-U-3"] = None
test_check["JS stack connector voltages"][141]["P3-IA-3"] = None
test_check["JS stack connector voltages"][142]["P2-IA-3"] = None
test_check["JS stack connector voltages"][143]["P1-IA-3"] = None
test_check["JS stack connector voltages"][144]["RG-3"] = None
test_check["JS stack connector voltages"][145]["P1-OR-3"] = None
test_check["JS stack connector voltages"][146]["P2-OR-3"] = None
test_check["JS stack connector voltages"][147]["P3-OR-3"] = None
test_check["JS stack connector voltages"][148]["P3-FS-3"] = None
test_check["JS stack connector voltages"][149]["P2-FS-3"] = None
test_check["JS stack connector voltages"][150]["P1-FS-3"] = None
test_check["JS stack connector voltages"][151]["GND"] = None
test_check["JS stack connector voltages"][152]["SDO-D-4"] = None
test_check["JS stack connector voltages"][153]["GND"] = None
test_check["JS stack connector voltages"][154]["SDO-C-4"] = None
test_check["JS stack connector voltages"][155]["GND"] = None
test_check["JS stack connector voltages"][156]["SDO-B-4"] = None
test_check["JS stack connector voltages"][157]["GND"] = None
test_check["JS stack connector voltages"][158]["SDO-A-4"] = None
test_check["JS stack connector voltages"][159]["GND"] = None
test_check["JS stack connector voltages"][160]["_DS0_"] = None
test_check["JS stack connector voltages"][161]["GND"] = None
test_check["JS stack connector voltages"][162]["_DS8_"] = None
test_check["JS stack connector voltages"][163]["GND"] = None
test_check["JS stack connector voltages"][164]["_DS16_"] = None
test_check["JS stack connector voltages"][165]["GND"] = None
test_check["JS stack connector voltages"][166]["_DS24_"] = None
test_check["JS stack connector voltages"][167]["GND"] = None
test_check["JS stack connector voltages"][168]["_DS32_"] = None
test_check["JS stack connector voltages"][169]["GND"] = None
test_check["JS stack connector voltages"][170]["_DS40_"] = None
test_check["JS stack connector voltages"][171]["GND"] = None
test_check["JS stack connector voltages"][172][""] = None
test_check["JS stack connector voltages"][173]["GND"] = None
test_check["JS stack connector voltages"][174]["RTDCOM"] = None
test_check["JS stack connector voltages"][175]["GND"] = None
test_check["JS stack connector voltages"][176]["15"] = None
test_check["JS stack connector voltages"][177]["GND"] = None
test_check["JS stack connector voltages"][178]["-12"] = None
test_check["JS stack connector voltages"][179]["GND"] = None
test_check["JS stack connector voltages"][180]["5"] = None
test_check["JS stack connector voltages"][181]["GND"] = None
test_check["JS stack connector voltages"][182]["_DS48_"] = None
test_check["JS stack connector voltages"][183]["GND"] = None
test_check["JS stack connector voltages"][184]["_DS56_"] = None
test_check["JS stack connector voltages"][185]["GND"] = None
test_check["JS stack connector voltages"][186]["_DS64_"] = None
test_check["JS stack connector voltages"][187]["GND"] = None
test_check["JS stack connector voltages"][188]["_DS72_"] = None
test_check["JS stack connector voltages"][189]["GND"] = None
test_check["JS stack connector voltages"][190]["_DS80_"] = None
test_check["JS stack connector voltages"][191]["GND"] = None
test_check["JS stack connector voltages"][192]["_DS88_"] = None
test_check["JS stack connector voltages"][193]["GND"] = None
test_check["JS stack connector voltages"][194]["SDO-D-3"] = None
test_check["JS stack connector voltages"][195]["GND"] = None
test_check["JS stack connector voltages"][196]["SDO-C-3"] = None
test_check["JS stack connector voltages"][197]["GND"] = None
test_check["JS stack connector voltages"][198]["SDO-B-3"] = None
test_check["JS stack connector voltages"][199]["GND"] = None
test_check["JS stack connector voltages"][200]["SDO-A-3"] = None
Explanation: With Default Parameters Loaded, Continue Power ON Voltage Measurements
Measure voltages on all JS stack connector pins (= Driver safe-to-mate)
End of explanation
#TODO check values against reference values
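# A minimal sketch of that comparison (not from the original procedure): `reference_voltages`
# is an assumed dict with the same {pin_index: {pin_name: volts}} layout as
# test_check["JS stack connector voltages"] and has to be supplied separately.
def check_within_tolerance(measured, reference, tolerance=0.10):
    errors = []
    for pin, channels in reference.items():
        for name, expected in channels.items():
            value = measured.get(pin, {}).get(name)
            if value is None:
                errors.append("pin %s (%s): no value recorded" % (pin, name))
            # pins with a 0 V reference (e.g. GND) are skipped by the relative check
            elif expected and abs(value - expected) > abs(expected) * tolerance:
                errors.append("pin %s (%s): measured %.3f V, expected %.3f V" % (pin, name, value, expected))
    assert not errors, "Out of tolerance:\n" + "\n".join(errors)
# check_within_tolerance(test_check["JS stack connector voltages"], reference_voltages)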
Explanation: Executing the next cell will compare the recorded values above to the expected values, within a margin of +/- 10%. The output should not produce any errors.
End of explanation
fpe1.cmd_start_frames()
Explanation: Verify the image capturing function:
5) Issue the start frames command.
End of explanation
# TODO Insert capture image commands
Explanation: 6) Capture an image and display it. Note that with only the Interface Board connected, the captured image should be comprised entirely of pixels with a value of -1.
End of explanation
fpe1.cmd_stop_frames()
Explanation: 7) Issue the stop frames command.
End of explanation
from tessfpe.data.housekeeping_channels import housekeeping_channels
housekeeping_channels
# TODO compare measured to reference values
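# Sketch only: `hk_measured` and `hk_reference` are assumed flat dicts keyed by the
# channel names in `housekeeping_channels`; neither is defined in this notebook.
def compare_housekeeping(hk_measured, hk_reference, tolerance=0.10):
    for channel in housekeeping_channels:
        if channel in hk_reference:
            measured, expected = hk_measured[channel], hk_reference[channel]
            assert abs(measured - expected) <= abs(expected) * tolerance, \
                "%s: measured %.3f, expected %.3f" % (channel, measured, expected)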
Explanation: 2) Take a set of housekeeping data.
End of explanation
# TODO: Recursively search test_check to make sure that no value is `None`
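# One possible implementation of the TODO above: walk test_check recursively and
# complain about any leaf that is still None (i.e. a measurement was never entered).
def assert_no_none(node, path="test_check"):
    if isinstance(node, dict):
        for key, value in node.items():
            assert_no_none(value, "%s[%r]" % (path, key))
    else:
        assert node is not None, "%s has not been filled in" % path
# assert_no_none(test_check)  # enable once all measurements are recorded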
def url_exists(url):
    """Return True if an HTTP HEAD request for `url` returns status 200.

    Note: uses plain HTTP; an https URL would need httplib.HTTPSConnection.
    """
    import httplib
    import urlparse
    parts = urlparse.urlparse(url)
    conn = httplib.HTTPConnection(parts.netloc)
    conn.request('HEAD', parts.path or '/')
    response = conn.getresponse()
    conn.close()
    return response.status == 200
assert type(test_check["NAME"]) is str, "Name should be entered as a string"
assert type(test_check["EMAIL"]) is str, "Email should be entered as a string"
# TODO: check that email is a valid email with a regex
# assert type(test_check["EMAIL"]) is str, "Email should be <blah>@<blah>.(com|edu|org|net|gov)"
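# One way to do the email check above (sketch; the pattern is deliberately loose):
import re
assert re.match(r"[^@\s]+@[^@\s]+\.(com|edu|org|net|gov)$", test_check["EMAIL"]), \
    "Email should be <blah>@<blah>.(com|edu|org|net|gov)"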
assert type(test_check["Part Number"]) is str
assert type(test_check["Serial Number"]) is str
# TODO: check that "DATE" is valid time stamp of the form "MM/DD/YY"
# TODO: check that "ESD_SAFE" is the string 'ESD Safe'.
# assert type(test_check["EMAIL"]) is str, "Email should be <blah>@<blah>.(com|edu|org|net|gov)"
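# Sketches for the two TODOs above:
import time
time.strptime(test_check["DATE"], "%m/%d/%y")  # raises ValueError if DATE is not MM/DD/YY
assert test_check["ESD_SAFE"] == 'ESD Safe', "ESD_SAFE should be the string 'ESD Safe'"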
assert test_check["FRONT_ASSEMBLY_PHOTO"] not in placeholder_photos, "Front assembly photo should not be a stock photo"
#assert url_exists(test_check["FRONT_ASSEMBLY_PHOTO"]), "URL for front assembly photo should exist"
assert test_check["BACK_ASSEMBLY_PHOTO"] not in placeholder_photos, "Back assembly photo should not be a stock photo"
#assert url_exists(test_check["BACK_ASSEMBLY_PHOTO"]), "URL for back assembly photo should exist"
assert 'Multimeter' in test_check["Equipment"], "'Multimeter' should be in test_check['Equipment']"
assert 'Oscilloscope' in test_check["Equipment"], "'Oscilloscope' should be in test_check['Equipment']"
assert 'DHU Emulator' in test_check["Equipment"], "'DHU Emulator' should be in test_check['Equipment']"
assert len(test_check["Equipment"]) == 3, "test_check['Equipment'] should not contain superfluous information"
assert type(test_check["Equipment"]["Multimeter"]["Model Number"]) is str, 'Multimeter model number should be a string'
assert type(test_check["Equipment"]["Multimeter"]["Serial Number"]) is str, 'Multimeter serial number should be a string'
assert type(test_check["Equipment"]["Oscilloscope"]["Model Number"]) is str, 'Oscilloscope model number should be a string'
assert type(test_check["Equipment"]["Oscilloscope"]["Serial Number"]) is str, 'Oscilloscope serial number should be a string'
assert type(test_check["Equipment"]["DHU Emulator"]["Model Number"]) is str, 'DHU Emulator model number should be a string'
assert type(test_check["Equipment"]["DHU Emulator"]["Serial Number"]) is str, 'DHU Emulator serial number should be a string'
assert type(test_check["Assembly Weight"]) is float
assert type(test_check["Non-Flight Configurations"]) is str
Explanation: <pre>
- FPE Test Procedures
- FPE Bring-up Procedure (Check boxes for board type? Flight or not?)
- Verify work area is ESD safe
- Setup per diagram, take photos
- Note test equipment model #'s and serial #'s
- Standard inspections for all 3 PCB types; capture images
- Weigh assembly, note non-flight configurations
- Visual inspection under stereo microscope
- Workmanship and mechanical damage
- DNP parts not installed
- No missing components
Verify req'd jumpers installed
Component orientation (chips, polarized caps, diodes, etc)
Verify chips are correct parts (& date codes if specified in design)
Verify connector savers installed if req'd
Power OFF resistance measurements (compare to reference values)
Power lines
Stack connector
CCD connectors (video only) Maybe delete this? Discuss.
Temp connector (video only)
Power ON voltage measurements (compare to reference values/images)
DHU supply voltages before connection to setup
DHU supply voltages and currents with setup connected
Capture FLIR images; check for hot spots
Program FPGA
Start frames
Take raw image as verification that FPGA was programmed OK
Stop frames
Take HK data as further verification
(ref values for 3 cases: Interface only, Interface + Driver, full stack)
Measure voltages on all open connector pins (= PCB safe-to-mate)
Interface: stack connector only
Driver: stack connector only
Video:
Stack connector
CCD connectors (discuss need for this)
Temp connector
*Bring up complete*
FPE Functional Test and calibration
(Assume for now we test full stack, not separate boards)
Setup per diagram, take photos
Note test equipment model #'s and serial #'s
Power ON voltage measurements (compare to reference values/images)
DHU supply voltages before connection to setup
DHU supply voltages and currents with setup connected
Program FPGA
Verify communication
Load Wrapper and MemFiles (record version numbers)
Housekeeping calibration
Start frames, Stop frames (or otherwise set DACs to default values)
Do HK calibration process
Get HK bias value(s)
Etc…
Verify with DMM measurements
Supply voltages
Others? All 128? HK ADC bias?
Capture calibrated HK set
Bias groups
Clock Driver groups
Interface group
DAC Calibration
Do DAC calibration process
Capture HK set over full range of DAC settings (frames stopped)
Start frames
Capture HK set (frames running)
CCD signal verification and CCD safe-to-mate
Measure signals at CCD connectors
Verify clock voltages, timing, wave shapes on scope for each CCD connector
RTD Functional test and calibration
Connect 12 x 1000 Ohm 0.1% resistors to Temp connector
(Connect calibration R's to AlCu sensor connections somewhere)??
Capture HK set for RTD's
Do RTD calibration process
Capture calibrated HK set for Thermal group
Heater Functional test and calibration
Connect three heater calibration resistors to HTR outputs
Capture HK set for Heater group
Do Heater calibration process
Capture calibrated HK set for Heater group
*Functional test and calibration complete*
*FPE Test Procedure complete*
</pre>
Test Check
Below we validate that all of the tests and procedures above have been performed properly.
If some step has not been carried out properly, an Exception will be thrown.
End of explanation
# TODO: pretty print summary of data, consider incorporating
# - Visual inspection results
# - Operator Notes
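# A minimal sketch for the pretty-print TODO above; it only walks test_check, so the
# visual inspection results and operator notes would still need to be added by hand.
def print_summary(node, indent=0):
    for key, value in node.items():
        if isinstance(value, dict):
            print " " * indent + str(key) + ":"
            print_summary(value, indent + 2)
        else:
            print " " * indent + "%s: %s" % (key, value)
# print_summary(test_check)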
Explanation: Summary
Below is a summary of test results and notes:
End of explanation |
10,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+ math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s =1/(1+ np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \
x_2 \
... \
x_n \
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \
\frac{1}{1+e^{-x_2}} \
... \
\frac{1}{1+e^{-x_n}} \
\end{pmatrix}\tag{1} $$
End of explanation
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
# GRADED FUNCTION: image2vector
def image2vector(image):
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2],1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis = 1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \
2 & 6 & 4 \
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \
\sqrt{56} \
\end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
# GRADED FUNCTION: softmax
def softmax(x):
Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp,axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \
\vdots & \vdots & \vdots & \ddots & \vdots \
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \
\vdots & \vdots & \vdots & \ddots & \vdots \
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \
softmax\text{(second row of x)} \
... \
softmax\text{(last row of x)} \
\end{pmatrix} $$
End of explanation
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
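# Quick illustration of the broadcasting note above (not part of the graded assignment):
# dividing a (2, 5) array by a (2, 1) array broadcasts the single column across all five.
demo = np.arange(10).reshape(2, 5)
col_sums = demo.sum(axis=1, keepdims=True)  # shape (2, 1)
print(demo / col_sums)                      # shape (2, 5), rows now sum to 1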
# GRADED FUNCTION: L1
def L1(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
# GRADED FUNCTION: L2
def L2(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
#loss = np.sum(np.square(y-yhat)) ## working
loss = np.dot((y-yhat),(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
End of explanation |
10,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MLE fit for three component binding - simulated data
In this notebook we will see how well we can reproduce Kd of a non-fluorescent ligand from simulated experimental data with a maximum likelihood function.
Step2: Now make this a fluorescence experiment.
Step3: First let's see if we can find Kd of our fluorescent ligand from the three component binding model, if we know there's no competitive ligand
Step4: Okay, cool now let's try to fit our competitive ligand
Step5: Let's plot all these results with each other and see how we did | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
import seaborn as sns
%pylab inline
#Competitive binding function
#This function and its assumptions are defined in greater detail in this notebook:
## modelling-CompetitiveBinding-ThreeComponentBinding.ipynb
def three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A):
Parameters
----------
Ptot : float
Total protein concentration
Ltot : float
Total tracer(fluorescent) ligand concentration
Kd_L : float
Dissociation constant of the fluorescent ligand
Atot : float
Total competitive ligand concentration
Kd_A : float
Dissociation constant of the competitive ligand
Returns
-------
P : float
Free protein concentration
L : float
Free ligand concentration
A : float
Free ligand concentration
PL : float
Complex concentration
Kd_L_app : float
Apparent dissociation constant of L in the presence of A
Usage
-----
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd_L, Atot, Kd_A)
Kd_L_app = Kd_L*(1+Atot/Kd_A)
PL = 0.5 * ((Ptot + Ltot + Kd_L_app) - np.sqrt((Ptot + Ltot + Kd_L_app)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free tracer ligand concentration in sample cell after n injections (uM)
A = Atot - PL; # free competitive ligand concentration in sample cell after n injections (uM)
return [P, L, A, PL, Kd_L_app]
#Let's define our parameters
Kd = 3800e-9 # M
Kd_Competitor = 3000e-9 # M
Ptot = 1e-9 * np.ones([12],np.float64) # M
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
L_Competitor = 10e-6 # M
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd, L_Competitor, Kd_Competitor)
#using _base as a subscript to define when we have no competitive ligand
[P_base, L_base, A_base, PL_base, Kd_L_app_base] = three_component_competitive_binding(Ptot, Ltot, Kd, 0, Kd_Competitor)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,PL_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,PL,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.legend(loc=0);
#What if we change our Kd's a little
#Kd_Gef_Abl
Kd = 480e-9 # M
#Kd_Ima_Abl
Kd_Competitor = 21.0e-9 # M
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot, Ltot, Kd, L_Competitor, Kd_Competitor)
#using _base as a subscript to define when we have no competitive ligand
[P_base, L_base, A_base, PL_base, Kd_L_app_base] = three_component_competitive_binding(Ptot, Ltot, Kd, 0, Kd_Competitor)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,PL_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,PL,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$[PL]$')
plt.legend(loc=0);
Explanation: MLE fit for three component binding - simulated data
In this notebook we will see how well we can reproduce Kd of a non-fluorescent ligand from simulated experimental data with a maximum likelihood function.
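For reference, the binding function above folds the competitor into an apparent dissociation constant and then solves the standard two-component quadratic; this just restates the two lines of algebra in that cell and adds no new model assumptions:
$$K_d^{app} = K_{d,L}\left(1 + \frac{A_{tot}}{K_{d,A}}\right), \qquad PL = \frac{1}{2}\left[(P_{tot} + L_{tot} + K_d^{app}) - \sqrt{(P_{tot} + L_{tot} + K_d^{app})^2 - 4\,P_{tot}\,L_{tot}}\right]$$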
End of explanation
# Making max 400 relative fluorescence units, and scaling all of PL to that
npoints = len(Ltot)
sigma = 10.0 # size of noise
F_i = (400/1e-9)*PL + sigma * np.random.randn(npoints)
F_i_base = (400/1e-9)*PL_base + sigma * np.random.randn(npoints)
# y will be complex concentration
# x will be total ligand concentration
plt.title('Competition assay')
plt.semilogx(Ltot,F_i_base,'green', marker='o', label = 'Fluorescent Ligand')
plt.semilogx(Ltot,F_i,'cyan', marker='o', label = 'Fluorescent Ligand + Competitor')
plt.xlabel('$[L]_{tot}$')
plt.ylabel('$Fluorescence$')
plt.legend(loc=0);
#And make up an F_L
F_L = 0.3
Explanation: Now make this a fluorescence experiment.
End of explanation
# This function fits Kd_L when L_Competitor is 0
def find_Kd_from_fluorescence_base(params):
[F_background, F_PL, Kd_L] = params
N = len(Ltot)
Fmodel_i = np.zeros([N])
for i in range(N):
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot[0], Ltot[i], Kd_L, 0, Kd_Competitor)
Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background
return Fmodel_i
initial_guess = [1,400/1e-9,3800e-9]
prediction = find_Kd_from_fluorescence_base(initial_guess)
plt.semilogx(Ltot,prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
def sumofsquares(params):
prediction = find_Kd_from_fluorescence_base(params)
return np.sum((prediction - F_i_base)**2)
initial_guess = [0,3E11,2000E-9]
fit = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead')
print "The predicted parameters are", fit.x
fit_prediction = find_Kd_from_fluorescence_base(fit.x)
plt.semilogx(Ltot,fit_prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
Kd_L_MLE = fit.x[2]
def Kd_format(Kd):
if (Kd < 1e-12):
Kd_summary = "Kd = %.1f fM " % (Kd/1e-15)
elif (Kd < 1e-9):
Kd_summary = "Kd = %.1f pM " % (Kd/1e-12)
elif (Kd < 1e-6):
Kd_summary = "Kd = %.1f nM " % (Kd/1e-9)
elif (Kd < 1e-3):
Kd_summary = "Kd = %.1f uM " % (Kd/1e-6)
elif (Kd < 1):
Kd_summary = "Kd = %.1f mM " % (Kd/1e-3)
else:
Kd_summary = "Kd = %.3e M " % (Kd)
return Kd_summary
Kd_format(Kd_L_MLE)
delG_summary = "delG = %s kT" %np.log(Kd_L_MLE)
delG_summary
Explanation: First let's see if we can find Kd of our fluorescent ligand from the three component binding model, if we know there's no competitive ligand
End of explanation
# This function fits Kd_A when Kd_L already has an estimate
def find_Kd_from_fluorescence_competitor(params):
[F_background, F_PL, Kd_Competitor] = params
N = len(Ltot)
Fmodel_i = np.zeros([N])
for i in range(N):
[P, L, A, PL, Kd_L_app] = three_component_competitive_binding(Ptot[0], Ltot[i], Kd_L_MLE, L_Competitor, Kd_Competitor)
Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background
return Fmodel_i
initial_guess = [0,400/1e-9,3800e-9]
prediction = find_Kd_from_fluorescence_competitor(initial_guess)
plt.semilogx(Ltot,prediction,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
def sumofsquares(params):
prediction = find_Kd_from_fluorescence_competitor(params)
return np.sum((prediction - F_i)**2)
initial_guess = [0,3E11,2000E-9]
fit_comp = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead')
print "The predicted parameters are", fit_comp.x
fit_prediction_comp = find_Kd_from_fluorescence_competitor(fit_comp.x)
plt.semilogx(Ltot,fit_prediction_comp,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
Kd_A_MLE = fit_comp.x[2]
Kd_format(Kd_A_MLE)
delG_summary = "delG = %s kT" %np.log(Kd_A_MLE)
delG_summary
Explanation: Okay, cool now let's try to fit our competitive ligand
End of explanation
plt.semilogx(Ltot,fit_prediction_comp,color='k')
plt.semilogx(Ltot,F_i, 'o')
plt.axvline(Kd_A_MLE,color='k',label='%s (MLE)'%Kd_format(Kd_A_MLE))
plt.axvline(Kd_Competitor,color='b',label='%s'%Kd_format(Kd_Competitor))
plt.semilogx(Ltot,fit_prediction,color='k')
plt.semilogx(Ltot,F_i_base, 'o')
plt.axvline(Kd_L_MLE,color='k',label='%s (MLE)'%Kd_format(Kd_L_MLE))
plt.axvline(Kd,color='g',label='%s'%Kd_format(Kd))
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend(loc=0);
#Awesome
Explanation: Let's plot all these results with each other and see how we did
End of explanation |
10,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Time Series!
Forecasting is perhaps the most common application of machine learning in the real world. Businesses forecast product demand, governments forecast economic and population growth, meteorologists forecast the weather. The understanding of things to come is a pressing need across science, government, and industry (not to mention our personal lives!), and practitioners in these fields are increasingly applying machine learning to address this need.
Time series forecasting is a broad field with a long history. This course focuses on the application of modern machine learning methods to time series data with the goal of producing the most accurate predictions. The lessons in this course were inspired by winning solutions from past Kaggle forecasting competitions but will be applicable whenever accurate forecasts are a priority.
After finishing this course, you'll know how to
Step1: This series records the number of hardcover book sales at a retail store over 30 days. Notice that we have a single column of observations Hardcover with a time index Date.
Linear Regression with Time Series
For the first part of this course, we'll use the linear regression algorithm to construct forecasting models. Linear regression is widely used in practice and adapts naturally to even complex forecasting tasks.
The linear regression algorithm learns how to make a weighted sum from its input features. For two features, we would have
Step2: Linear regression with the time dummy produces the model
Step3: Time-step features let you model time dependence. A series is time dependent if its values can be predicted from the time they occured. In the Hardcover Sales series, we can predict that sales later in the month are generally higher than sales earlier in the month.
Lag features
To make a lag feature we shift the observations of the target series so that they appear to have occured later in time. Here we've created a 1-step lag feature, though shifting by multiple steps is possible too.
Step4: Linear regression with a lag feature produces the model
Step5: You can see from the lag plot that sales on one day (Hardcover) are correlated with sales from the previous day (Lag_1). When you see a relationship like this, you know a lag feature will be useful.
More generally, lag features let you model serial dependence. A time series has serial dependence when an observation can be predicted from previous observations. In Hardcover Sales, we can predict that high sales on one day usually mean high sales the next day.
Adapting machine learning algorithms to time series problems is largely about feature engineering with the time index and lags. For most of the course, we use linear regression for its simplicity, but these features will be useful whichever algorithm you choose for your forecasting task.
Example - Tunnel Traffic
Tunnel Traffic is a time series describing the number of vehicles traveling through the Baregg Tunnel in Switzerland each day from November 2003 to November 2005. In this example, we'll get some practice applying linear regression to time-step features and lag features.
The hidden cell sets everything up.
Step6: Time-step feature
Provided the time series doesn't have any missing dates, we can create a time dummy by counting out the length of the series.
Step7: The procedure for fitting a linear regression model follows the standard steps for scikit-learn.
Step8: The model actually created is (approximately)
Step9: Lag feature
Pandas provides us a simple method to lag a series, the shift method.
Step10: When creating lag features, we need to decide what to do with the missing values produced. Filling them in is one option, maybe with 0.0 or "backfilling" with the first known value. Instead, we'll just drop the missing values, making sure to also drop values in the target from corresponding dates.
Step11: The lag plot shows us how well we were able to fit the relationship between the number of vehicles one day and the number the previous day.
Step12: What does this prediction from a lag feature mean about how well we can predict the series across time? The following time plot shows us how our forecasts now respond to the behavior of the series in the recent past. | Python Code:
#$HIDE_INPUT$
import pandas as pd
df = pd.read_csv(
"../input/ts-course-data/book_sales.csv",
index_col='Date',
parse_dates=['Date'],
).drop('Paperback', axis=1)
df.head()
Explanation: Welcome to Time Series!
Forecasting is perhaps the most common application of machine learning in the real world. Businesses forecast product demand, governments forecast economic and population growth, meteorologists forecast the weather. The understanding of things to come is a pressing need across science, government, and industry (not to mention our personal lives!), and practitioners in these fields are increasingly applying machine learning to address this need.
Time series forecasting is a broad field with a long history. This course focuses on the application of modern machine learning methods to time series data with the goal of producing the most accurate predictions. The lessons in this course were inspired by winning solutions from past Kaggle forecasting competitions but will be applicable whenever accurate forecasts are a priority.
After finishing this course, you'll know how to:
- engineer features to model the major time series components (trends, seasons, and cycles),
- visualize time series with many kinds of time series plots,
- create forecasting hybrids that combine the strengths of complementary models, and
- adapt machine learning methods to a variety of forecasting tasks.
As part of the exercises, you'll get a chance to participate in our Store Sales - Time Series Forecasting Getting Started competition. In this competition, you're tasked with forecasting sales for Corporación Favorita (a large Ecuadorian-based grocery retailer) in almost 1800 product categories.
What is a Time Series?
The basic object of forecasting is the time series, which is a set of observations recorded over time. In forecasting applications, the observations are typically recorded with a regular frequency, like daily or monthly.
End of explanation
#$HIDE_INPUT$
import numpy as np
df['Time'] = np.arange(len(df.index))
df.head()
Explanation: This series records the number of hardcover book sales at a retail store over 30 days. Notice that we have a single column of observations Hardcover with a time index Date.
Linear Regression with Time Series
For the first part of this course, we'll use the linear regression algorithm to construct forecasting models. Linear regression is widely used in practice and adapts naturally to even complex forecasting tasks.
The linear regression algorithm learns how to make a weighted sum from its input features. For two features, we would have:
target = weight_1 * feature_1 + weight_2 * feature_2 + bias
During training, the regression algorithm learns values for the parameters weight_1, weight_2, and bias that best fit the target. (This algorithm is often called ordinary least squares since it chooses values that minimize the squared error between the target and the predictions.) The weights are also called regression coefficients and the bias is also called the intercept because it tells you where the graph of this function crosses the y-axis.
Time-step features
There are two kinds of features unique to time series: time-step features and lag features.
Time-step features are features we can derive directly from the time index. The most basic time-step feature is the time dummy, which counts off time steps in the series from beginning to end.
End of explanation
#$HIDE_INPUT$
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("seaborn-whitegrid")
plt.rc(
"figure",
autolayout=True,
figsize=(11, 4),
titlesize=18,
titleweight='bold',
)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=16,
titlepad=10,
)
%config InlineBackend.figure_format = 'retina'
fig, ax = plt.subplots()
ax.plot('Time', 'Hardcover', data=df, color='0.75')
ax = sns.regplot(x='Time', y='Hardcover', data=df, ci=None, scatter_kws=dict(color='0.25'))
ax.set_title('Time Plot of Hardcover Sales');
Explanation: Linear regression with the time dummy produces the model:
target = weight * time + bias
The time dummy then lets us fit curves to time series in a time plot, where Time forms the x-axis.
End of explanation
#$HIDE_INPUT$
df['Lag_1'] = df['Hardcover'].shift(1)
df = df.reindex(columns=['Hardcover', 'Lag_1'])
df.head()
Explanation: Time-step features let you model time dependence. A series is time dependent if its values can be predicted from the time they occurred. In the Hardcover Sales series, we can predict that sales later in the month are generally higher than sales earlier in the month.
Lag features
To make a lag feature we shift the observations of the target series so that they appear to have occurred later in time. Here we've created a 1-step lag feature, though shifting by multiple steps is possible too.
End of explanation
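# Illustration only: the same shift() trick gives multi-step lags (these extra columns
# are not used anywhere later in the lesson).
for steps in (2, 3):
    df['Lag_' + str(steps)] = df['Hardcover'].shift(steps)
df.head()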
#$HIDE_INPUT$
fig, ax = plt.subplots()
ax = sns.regplot(x='Lag_1', y='Hardcover', data=df, ci=None, scatter_kws=dict(color='0.25'))
ax.set_aspect('equal')
ax.set_title('Lag Plot of Hardcover Sales');
Explanation: Linear regression with a lag feature produces the model:
target = weight * lag + bias
So lag features let us fit curves to lag plots where each observation in a series is plotted against the previous observation.
End of explanation
#$HIDE_INPUT$
from pathlib import Path
from warnings import simplefilter
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
simplefilter("ignore") # ignore warnings to clean up output cells
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True, figsize=(11, 4))
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
plot_params = dict(
color="0.75",
style=".-",
markeredgecolor="0.25",
markerfacecolor="0.25",
legend=False,
)
%config InlineBackend.figure_format = 'retina'
# Load Tunnel Traffic dataset
data_dir = Path("../input/ts-course-data")
tunnel = pd.read_csv(data_dir / "tunnel.csv", parse_dates=["Day"])
# Create a time series in Pandas by setting the index to a date
# column. We parsed "Day" as a date type by using `parse_dates` when
# loading the data.
tunnel = tunnel.set_index("Day")
# By default, Pandas creates a `DatetimeIndex` with dtype `Timestamp`
# (equivalent to `np.datetime64`), representing a time series as a
# sequence of measurements taken at single moments. A `PeriodIndex`,
# on the other hand, represents a time series as a sequence of
# quantities accumulated over periods of time. Periods are often
# easier to work with, so that's what we'll use in this course.
tunnel = tunnel.to_period()
tunnel.head()
Explanation: You can see from the lag plot that sales on one day (Hardcover) are correlated with sales from the previous day (Lag_1). When you see a relationship like this, you know a lag feature will be useful.
More generally, lag features let you model serial dependence. A time series has serial dependence when an observation can be predicted from previous observations. In Hardcover Sales, we can predict that high sales on one day usually mean high sales the next day.
Adapting machine learning algorithms to time series problems is largely about feature engineering with the time index and lags. For most of the course, we use linear regression for its simplicity, but these features will be useful whichever algorithm you choose for your forecasting task.
Example - Tunnel Traffic
Tunnel Traffic is a time series describing the number of vehicles traveling through the Baregg Tunnel in Switzerland each day from November 2003 to November 2005. In this example, we'll get some practice applying linear regression to time-step features and lag features.
The hidden cell sets everything up.
End of explanation
df = tunnel.copy()
df['Time'] = np.arange(len(tunnel.index))
df.head()
Explanation: Time-step feature
Provided the time series doesn't have any missing dates, we can create a time dummy by counting out the length of the series.
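If the index did have gaps, one hedged alternative (not used in this course) is to derive the dummy from elapsed time instead of row position:
# stamps = df.index.to_timestamp()        # PeriodIndex -> DatetimeIndex
# df['Time'] = (stamps - stamps[0]).days  # days since the first observation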
End of explanation
from sklearn.linear_model import LinearRegression
# Training data
X = df.loc[:, ['Time']] # features
y = df.loc[:, 'NumVehicles'] # target
# Train the model
model = LinearRegression()
model.fit(X, y)
# Store the fitted values as a time series with the same time index as
# the training data
y_pred = pd.Series(model.predict(X), index=X.index)
Explanation: The procedure for fitting a linear regression model follows the standard steps for scikit-learn.
End of explanation
#$HIDE_INPUT$
ax = y.plot(**plot_params)
ax = y_pred.plot(ax=ax, linewidth=3)
ax.set_title('Time Plot of Tunnel Traffic');
Explanation: The model actually created is (approximately): Vehicles = 22.5 * Time + 98176. Plotting the fitted values over time shows us how fitting linear regression to the time dummy creates the trend line defined by this equation.
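The fitted weight and bias can be read directly off the model if you want to check those numbers yourself (illustrative aside):
# print(model.coef_, model.intercept_)  # roughly 22.5 and 98176 for this fit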
End of explanation
df['Lag_1'] = df['NumVehicles'].shift(1)
df.head()
Explanation: Lag feature
Pandas provides us a simple method to lag a series, the shift method.
End of explanation
from sklearn.linear_model import LinearRegression
X = df.loc[:, ['Lag_1']]
X.dropna(inplace=True) # drop missing values in the feature set
y = df.loc[:, 'NumVehicles'] # create the target
y, X = y.align(X, join='inner') # drop corresponding values in target
model = LinearRegression()
model.fit(X, y)
y_pred = pd.Series(model.predict(X), index=X.index)
Explanation: When creating lag features, we need to decide what to do with the missing values produced. Filling them in is one option, maybe with 0.0 or "backfilling" with the first known value. Instead, we'll just drop the missing values, making sure to also drop values in the target from corresponding dates.
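For reference, the fill-in alternatives mentioned above would look something like this (illustrative only; the course drops the rows instead):
# df['Lag_1'] = df['Lag_1'].fillna(0.0)             # fill with a constant
# df['Lag_1'] = df['Lag_1'].fillna(method='bfill')  # backfill from the first known value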
End of explanation
#$HIDE_INPUT$
fig, ax = plt.subplots()
ax.plot(X['Lag_1'], y, '.', color='0.25')
ax.plot(X['Lag_1'], y_pred)
ax.set_aspect('equal')
ax.set_ylabel('NumVehicles')
ax.set_xlabel('Lag_1')
ax.set_title('Lag Plot of Tunnel Traffic');
Explanation: The lag plot shows us how well we were able to fit the relationship between the number of vehicles one day and the number the previous day.
End of explanation
#$HIDE_INPUT$
ax = y.plot(**plot_params)
ax = y_pred.plot()
Explanation: What does this prediction from a lag feature mean about how well we can predict the series across time? The following time plot shows us how our forecasts now respond to the behavior of the series in the recent past.
End of explanation |
10,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: RASP Diabetes Rates
Diabetes rates from CHIS surveys for 2015, 2016 and 2017, segmented by race, age, sex and poverty status
Step3: Poverty, Age and Race
Step4: Compare to CHIS
Here is the AskCHIS page for
Step5: AskCHIS, By Race, 55-64
Step6: AskCHIS, By Race, 55-64, Male
Step7: AskChis, By Race, 55-64, In Poverty | Python Code:
import seaborn as sns
import metapack as mp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
from publicdata.chis import *
%matplotlib inline
sns.set_context('notebook')
idx = pd.IndexSlice # Convenience redefinition.
#pkg = mp.jupyter.open_package()
pkg = mp.jupyter.open_source_package()
pkg
def recode(df):
    '''
    Recode to a simpler group of races. For a lot of health outcomes, the major divisions are
    * White + Asian ( whasian )
    * Black + Latino Afrotino
    * Others
    '''
from pandas.api.types import CategoricalDtype
df['race_recode'] = df.racedf_p1
df.replace({'race_recode':{
'NON-LATINO WHITE':'nhwhite',
'NON-LATINO ASIAN':'asian',
'NON-LATINO AMERICAN INDIAN/ALASKAN NATIVE': 'other',
'NON-LATINO AFR. AMER.': 'black',
'LATINO': 'hisp',
'NON-LATINO, TWO+ RACES': 'other',
'NON-LATINO OTHER, ONE RACE': 'other'
}}, inplace=True)
df.race_recode = df.race_recode.astype('category')
df['minority'] = (df['race_recode'] != 'white_asian').astype(int)
df['old'] = (df.srage_p1 < '45-49 YEARS').astype(CategoricalDtype(categories=[False, True],ordered=True))
df.old.cat.rename_categories(['OLD','YOUNG'], inplace=True)
df['poor'] = (df.povll.isin(('200-299% FPL', '300% FPL AND ABOVE')) )\
.astype(CategoricalDtype(categories=[True, False],ordered=True))
df.poor.cat.rename_categories(['NPOV','POV'], inplace=True)
return df
df17 = pkg.reference('adult_2017').dataframe()
df16 = pkg.reference('adult_2016').dataframe()
df15 = pkg.reference('adult_2015').dataframe()
df14 = pkg.reference('adult_2014').dataframe()
df13 = pkg.reference('adult_2013').dataframe()
# Rename some categories. 2016 and 2015 have "ALASKA" where the others have "ALASKAN", which
# causes the concat() operation to convert categories to strings
cats_17 = df17.racedf_p1.cat.categories
cat_map = dict(zip(df16.racedf_p1.cat.categories, df17.racedf_p1.cat.categories))
for e in [df13,df14,df15,df16,df17]:
e.racedf_p1.cat.rename_categories(cat_map, inplace=True)
for df, year in zip([df13, df14, df15, df16, df17], range(2013, 2018)):
df['year'] = year
df = recode(df)
Explanation: RASP Diabetes Rates
Diabetes rates from CHIS surveys for 2015, 2016 and 2017, segmented by race, age, sex and poverty status
End of explanation
n_years, df1517 = chis_concat([df17,df16,df15], ['diabetes', 'srsex', 'srage_p1', 'racedf_p1', 'race_recode', 'povll', 'minority', 'poor', 'old'])
def age_group_parts(v):
try:
y1, y2, _ = v.replace('-',' ').split()
return y1,y2
except ValueError:
# Probably '85+ YEARS'
return 85, 120
def age_group_to_age(v):
y1, y2 = age_group_parts(v)
if y1 == 85:
return 85
else:
return (int(y1)+int(y2))/2
# Convert to census age ranges
census_age_ranges = [
pd.Interval(18, 24, closed='both'),
pd.Interval(25, 34, closed='both'),
pd.Interval(35, 44, closed='both'),
pd.Interval(45, 54, closed='both'),
pd.Interval(55, 64, closed='both'),
pd.Interval(65, 74, closed='both'),
pd.Interval(75, 85, closed='both') # Actualy range is 75:120, but want a lower mean for prediction
]
pov_status_map = {
"0-99% FPL": 1,
"100-199% FPL": 0,
"200-299% FPL":0,
"300% FPL AND ABOVE": 0
}
dflr = pd.get_dummies(df1517, columns=['race_recode'], prefix='', prefix_sep = '')
dflr['race_recode'] = df1517.race_recode
dflr['diabetes_bool'] = (dflr.diabetes == 'YES').astype(int)
dflr['group_age_mean'] = dflr.srage_p1.apply(lambda v:age_group_to_age(v)).astype(float)
dflr['group_age_min'] = dflr.srage_p1.apply(lambda v:age_group_parts(v)[0]).astype(int)
dflr['group_age_max'] = dflr.srage_p1.apply(lambda v:age_group_parts(v)[1]).astype(int)
dflr['census_age_group'] = pd.cut(dflr.group_age_mean, pd.IntervalIndex(census_age_ranges))
#dflr['age_group_name'] = dflr.apply(lambda r: '{:02d}-{:03d}'.format(r.group_age_min, r.group_age_max), axis=1)
dflr['age_group'] = dflr.census_age_group.apply(lambda v: '{:02d}-{:03d}'.format(v.left, v.right))
dflr['pov'] = dflr.povll.apply(lambda v: pov_status_map[v])
dflr['is_male'] = (dflr.srsex == 'MALE').astype(int)
dflr.head()
raked_col = [c for c in df.columns if c.startswith('rake')]
index_cols = ['race_recode','is_male', 'age_group', 'pov' ]
value_cols = ['diabetes_bool', 'rec_count']
dflr['rec_count'] = 1
dflr['census_age_group'] = pd.cut(dflr.group_age_mean, pd.IntervalIndex(census_age_ranges))
t = dflr[index_cols+value_cols+raked_col].copy()
# Couldn't figure out the idomatic way to do this.
t['group_pop'] = t.rakedw0
for c in raked_col:
t[c] *= t.diabetes_bool
# Now the raked columns are the estimates for the number of people who have diabetes,
# so we just have to sum up everything
t2 = t.groupby(index_cols).sum().reset_index()
t2['estimate'] = t2.rakedw0
t2['repl_mean'] = t2[raked_col[1:]].mean(axis=1)
t2['repl_std'] = t2[raked_col[1:]].std(axis=1)
t2['rse'] = t2['repl_std']/t2.estimate
t2['rate'] = t2.estimate / t2.group_pop
#t2['rate_std'] = t2['rate'] * t2['rse']
rasp_d = t2.sort_values('rate',ascending=False)[list(c for c in t2.columns if c not in raked_col)]
rasp_d.rename(columns={'race_recode': 'raceeth','repl_std':'std'}, inplace=True)
rasp_d['sex'] = rasp_d.is_male
#rasp_d['pov'] = rasp_d.poor.apply(lambda v: 1 if v == 'POV' else 0)
rasp_d.head()
rasp_d.estimate.sum(), rasp_d.group_pop.sum(), rasp_d.estimate.sum()/rasp_d.group_pop.sum()
index_cols = ['raceeth','age_group', 'sex','pov' ]
val_cols = ['group_pop','estimate','std','rate']
rasp_diabetes = rasp_d[index_cols+val_cols].reset_index(drop=True).sort_values(index_cols)[index_cols+val_cols]
re_group = rasp_diabetes[rasp_diabetes.rate != 0].groupby(['raceeth', 'age_group']).mean()
# There are a lot of entries, particularly for young asians, where the group won't have any estimate
# for diabetes because the sample group is too small to have even one case, and all values are integers.
# So, we impute these values from others in the same raceeth+age_group
def impute_rate(r, re_group):
    '''Impute rates from other values in the similar group'''
if r.rate == 0:
return re_group.loc[(r.raceeth,r.age_group)].rate
else:
return r.rate
rasp_diabetes['imputed_rate'] = rasp_diabetes.apply(impute_rate,re_group=re_group, axis=1)
rasp_diabetes['imputed_estimate'] = (rasp_diabetes.imputed_rate * rasp_diabetes.group_pop).round(0).astype(int)
rasp_diabetes['group_pop'] = rasp_diabetes['group_pop'].round(0).astype(int)
rasp_diabetes['estimate'] = rasp_diabetes['estimate'].round(0).astype(int)
rasp_diabetes.head()
t = rasp_diabetes.reset_index().groupby(['raceeth','age_group']).sum()
t['rate'] = t.estimate/t.group_pop
t[['rate']].unstack(0).plot()
t = rasp_diabetes.reset_index().groupby(['raceeth','pov']).sum()
t['rate'] = t.estimate/t.group_pop
t[['rate']].unstack(0).plot(kind='bar')
Explanation: Poverty, Age and Race
End of explanation
t = rasp_diabetes.groupby('raceeth').sum().copy()
t['rate'] = (t.estimate / t.group_pop * 100).round(1)
t['rate']
Explanation: Compare to CHIS
Here is the AskCHIS page for:
* "Ever Diagnosed with diabetes"
* by OMB Race
* Pooling 2015, 2016 and 2017
Below are the same values from this dataset, which agree very closely. Note that to get the same values, you must use the estimate column, not the imputed_estimate.
End of explanation
t = rasp_diabetes[ (rasp_diabetes.age_group=='55-064') ].groupby('raceeth').sum().copy()
t['rate'] = (t.estimate / t.group_pop * 100).round(1)
assert round(t.loc['asian'].estimate,-3) == 96_000
t[['estimate','group_pop','rate']]
Explanation: AskCHIS, By Race, 55-64
End of explanation
t = rasp_diabetes[ (rasp_diabetes.age_group=='55-064') & (rasp_diabetes.sex == 1) ].groupby('raceeth').sum().copy()
t['rate'] = (t.estimate / t.group_pop * 100).round(1)
assert round(t.loc['asian'].estimate,-3) == 47_000
t[['estimate','rate']]
Explanation: AskCHIS, By Race, 55-64, Male
End of explanation
t = rasp_diabetes[ (rasp_diabetes.age_group=='55-064') & (rasp_diabetes.pov == 1) ].groupby('raceeth').sum().copy()
t['rate'] = (t.estimate / t.group_pop * 100).round(1)
assert round(t.loc['asian'].estimate,-3) == 30_000
t[['estimate','rate', 'group_pop']]
Explanation: AskChis, By Race, 55-64, In Poverty
End of explanation |
10,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Input
To do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data
Step1: Second, we know that the two files we desire are at the following location
Step2: Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.
Step3: You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by subject_id. %02d is a placeholder for an integer number and is filled out by task_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.
Now it is up to you how you want to feed the dynamic parameters into the node. You can either do this by using another node (e.g. IdentityInterface) and feed subject_id and task_id as connections to the DataGrabber node or specify them directly as node inputs.
Step4: Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1, 2 and 3.
If you specify the inputs to the DataGrabber node directly, you can do this as follows
Step5: Now let's run the DataGrabber node and let's look at the output
Step6: SelectFiles
SelectFiles is a more flexible alternative to DataGrabber. It uses the {}-based string formatting syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.
Let's focus again on the data we want to import
Step7: Let's check if we get what we wanted.
Step8: Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple times in the same string, without feeding it multiple times into the template.
Additionally, you can also select multiple files without the need of an iterable node. For example, let's assume we want to select both functional images ('run-1' and 'run-2') at once. We can do this by using the following file template
Step9: As you can see, now func contains two file paths, one for the first and one for the second run. As a side note, you could have also gotten the same thing with the wild card *
Step10: To create the FreeSurferSource node, do as follows
Step11: Let's now run it for a specific subject.
Step12: Did it work? Let's try to access multiple FreeSurfer outputs
Step13: It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input field
Step14: Let's take a look again at the inflated output. | Python Code:
from nipype import DataGrabber, Node
# Create DataGrabber node
dg = Node(DataGrabber(infields=['subject_id', 'task_id'],
outfields=['anat', 'func']),
name='datagrabber')
# Location of the dataset folder
dg.inputs.base_directory = '/data/ds102'
# Necessary default parameters
dg.inputs.template = '*'
dg.inputs.sort_filelist = True
Explanation: Data Input
To do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data:
DataFinder
DataGrabber
FreeSurferSource
JSONFileGrabber
S3DataGrabber
SSHDataGrabber
SelectFiles
XNATSource
This tutorial will only cover some of them. For the rest, see the section interfaces.io on the official homepage.
Dataset structure
To be able to import data, you first need to be aware of the structure of your dataset. The structure of the dataset for this tutorial is according to BIDS, and looks as follows:
ds102
├── CHANGES
├── dataset_description.json
├── participants.tsv
├── README
├── sub-01
│ ├── anat
│ │ └── sub-01_T1w.nii.gz
│ └── func
│ ├── sub-01_task-flanker_run-1_bold.nii.gz
│ ├── sub-01_task-flanker_run-1_events.tsv
│ ├── sub-01_task-flanker_run-2_bold.nii.gz
│ └── sub-01_task-flanker_run-2_events.tsv
├── sub-02
│ ├── anat
│ │ └── sub-02_T1w.nii.gz
│ └── func
│ ├── sub-02_task-flanker_run-1_bold.nii.gz
│ ├── sub-02_task-flanker_run-1_events.tsv
│ ├── sub-02_task-flanker_run-2_bold.nii.gz
│ └── sub-02_task-flanker_run-2_events.tsv
├── sub-03
│ ├── anat
│ │ └── sub-03_T1w.nii.gz
│ └── func
│ ├── sub-03_task-flanker_run-1_bold.nii.gz
│ ├── sub-03_task-flanker_run-1_events.tsv
│ ├── sub-03_task-flanker_run-2_bold.nii.gz
│ └── sub-03_task-flanker_run-2_events.tsv
├── ...
.
└── task-flanker_bold.json
DataGrabber
DataGrabber is a generic data grabber module that wraps around glob to select your neuroimaging data in an intelligent way. As an example, let's assume we want to grab the anatomical and functional images of a certain subject.
First, we need to create the DataGrabber node. This node needs to have some input fields for all dynamic parameters (e.g. subject identifier, task identifier), as well as the two desired output fields anat and func.
End of explanation
dg.inputs.template_args = {'anat': [['subject_id']],
'func': [['subject_id', 'task_id']]}
Explanation: Second, we know that the two files we desire are at the following location:
anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz
func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz
We see that the two files only have two dynamic parameters between subjects and conditions:
subject_id: in this case 'sub-01'
task_id: in this case 1
This means that we can rewrite the paths as follows:
anat = /data/ds102/[subject_id]/anat/[subject_id]_T1w.nii.gz
func = /data/ds102/[subject_id]/func/[subject_id]_task-flanker_run-[task_id]_bold.nii.gz
Therefore, we need the parameter subject_id for the anatomical image and the parameter subject_id and task_id for the functional image. In the context of DataGrabber, this is specified as follows:
End of explanation
dg.inputs.field_template = {'anat': '%s/anat/*_T1w.nii.gz',
'func': '%s/func/*run-%d_bold.nii.gz'}
Explanation: Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.
End of explanation
# Using the IdentityInterface
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'contrasts']),
name="infosource")
infosource.inputs.contrasts = 1
subject_list = ['sub-01',
'sub-02',
'sub-03',
'sub-04',
'sub-05']
infosource.iterables = [('subject_id', subject_list)]
Explanation: You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by subject_id. %02d is a placeholder for an integer number and is filled out by task_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.
Now it is up to you how you want to feed the dynamic parameters into the node. You can either do this by using another node (e.g. IdentityInterface) and feed subject_id and task_id as connections to the DataGrabber node or specify them directly as node inputs.
End of explanation
# Specifying the input fields of DataGrabber directly
dg.inputs.subject_id = 'sub-01'
dg.inputs.task_id = 1
Explanation: Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1, 2 and 3.
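A minimal sketch of what that connection might look like (the workflow and its name here are illustrative, not part of the tutorial):
from nipype import Workflow
wf = Workflow(name='grab_data')                          # hypothetical workflow, for illustration
wf.connect(infosource, 'subject_id', dg, 'subject_id')   # one connect call per dynamic field
dg.inputs.task_id = 1                                    # fields that aren't iterated can still be set directly
# wf.run()  # would then grab the files for each subject in the iterable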
If you specify the inputs to the DataGrabber node directly, you can do this as follows:
End of explanation
print dg.run().outputs
Explanation: Now let's run the DataGrabber node and let's look at the output:
End of explanation
from nipype import SelectFiles, Node
# String template with {}-based strings
templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',
'func': '{subject_id}/func/{subject_id}_task-flanker_run-{task_id}_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds102'
# Feed {}-based placeholder strings with values
sf.inputs.subject_id = 'sub-01'
sf.inputs.task_id = '1'
Explanation: SelectFiles
SelectFiles is a more flexible alternative to DataGrabber. It uses the {}-based string formatting syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.
Let's focus again on the data we want to import:
anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz
func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz
Now, we can replace those paths with the accoridng {}-based strings.
anat = /data/ds102/{subject_id}/anat/{subject_id}_T1w.nii.gz
func = /data/ds102/{subject_id}/func/{subject_id}_task-flanker_run-{task_id}_bold.nii.gz
How would this look like as a SelectFiles node?
End of explanation
print sf.run().outputs
Explanation: Let's check if we get what we wanted.
End of explanation
from nipype import SelectFiles, Node
from os.path import abspath as opap
# String template with {}-based strings
templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',
'func': '{subject_id}/func/{subject_id}_task-flanker_run-[1,2]_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds102'
# Feed {}-based placeholder strings with values
sf.inputs.subject_id = 'sub-01'
# Print SelectFiles output
print sf.run().outputs
Explanation: Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple times in the same string, without feeding it multiple times into the template.
Additionally, you can also select multiple files without the need of an iterable node. For example, let's assume we want to select both functional images ('run-1' and 'run-2') at once. We can do this by using the following file template:
{subject_id}_task-flanker_run-[1,2]_bold.nii.gz'
Let's see how this works:
End of explanation
from nipype.interfaces.freesurfer import FSCommand
from os.path import abspath as opap
# Path to your freesurfer output folder
fs_dir = opap('/data/ds102/freesurfer')
# Set SUBJECTS_DIR
FSCommand.set_default_subjects_dir(fs_dir)
Explanation: As you can see, now func contains two file paths, one for the first and one for the second run. As a side note, you could have also gotten the same thing with the wild card *:
{subject_id}_task-flanker_run-*_bold.nii.gz'
FreeSurferSource
Note: FreeSurfer and the recon-all output is not included in this tutorial.
FreeSurferSource is a specific case of a file grabber that facilitates the data import of outputs from the FreeSurfer recon-all algorithm. This of course requires that you've already run recon-all on your subject.
Before you can run FreeSurferSource, you first have to specify the path to the FreeSurfer output folder, i.e. you have to specify the SUBJECTS_DIR variable. This can be done as follows:
End of explanation
from nipype import Node
from nipype.interfaces.io import FreeSurferSource
# Create FreeSurferSource node
fssource = Node(FreeSurferSource(subjects_dir=fs_dir),
name='fssource')
Explanation: To create the FreeSurferSource node, do as follows:
End of explanation
fssource.inputs.subject_id = 'sub001'
result = fssource.run()
Explanation: Let's now run it for a specific subject.
End of explanation
print 'aparc_aseg: %s\n' % result.outputs.aparc_aseg
print 'brainmask: %s\n' % result.outputs.brainmask
print 'inflated: %s\n' % result.outputs.inflated
Explanation: Did it work? Let's try to access multiple FreeSurfer outputs:
End of explanation
fssource.inputs.hemi = 'lh'
result = fssource.run()
Explanation: It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input field:
End of explanation
result.outputs.inflated
Explanation: Let's take a look again at the inflated output.
End of explanation |
10,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32,[None,real_dim])
inputs_z = tf.placeholder(tf.float32,[None,z_dim])
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse) as scope: # finish this
# Hidden layer
h1 = tf.layers.dense(z,n_units)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
# Logits and tanh output
logits = tf.layers.dense(h1,out_dim)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
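As a minimal sketch, such an operation could be wrapped in a small helper (the code below simply inlines tf.maximum instead of defining one):
def leaky_relu(x, alpha=0.01):
    # elementwise max(alpha * x, x): identity for positive x, a small slope for negative x
    return tf.maximum(alpha * x, x)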
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
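That rescaling is a one-liner; the same transformation shows up again in the training loop further down:
# batch_images = batch_images * 2 - 1  # map pixel values from [0, 1] to [-1, 1]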
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse) as scope: # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
logits = tf.layers.dense(h1,1)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size,z_dim=z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, reuse=False, alpha = alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units= d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse = True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels= real_labels))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if 'generator' in var.name]
d_vars = [var for var in t_vars if 'discriminator' in var.name]
d_train_opt = tf.train.AdamOptimizer().minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer().minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
10,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import directives
Step1: Ordered dictionaries
See https | Python Code:
import collections
Explanation: Import directives
End of explanation
d = collections.OrderedDict()
d["2"] = 2
d["3"] = 3
d["1"] = 1
print(d)
print(type(d.keys()))
print(list(d.keys()))
print(type(d.values()))
print(list(d.values()))
for k, v in d.items():
print(k, v)
Explanation: Ordered dictionaries
See https://docs.python.org/3/library/collections.html#collections.OrderedDict
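Beyond remembering insertion order, OrderedDict also offers a couple of methods a plain dict lacks; a small illustrative example (not part of the original snippet):
d.move_to_end("2")            # move an existing key to the end
print(list(d.keys()))         # ['3', '1', '2']
print(d.popitem(last=False))  # ('3', 3) - pops from the front, FIFO-style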
End of explanation |
10,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2> 4ppm </h2>
Enough retcor groups, and fewer peak insertion problems than 4.5 or 5ppm.
Step1: <h2> Import the dataframe and remove any features that are all zero </h2>
Step2: <h2> Get mappings between sample names, file names, and sample classes </h2>
Step3: <h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>
Turns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUCs instead. You want to give your classifier a lower score if it is really confident and wrong than if it is uncertain and wrong.
Step4: <h2> pqn normalize your features </h2>
Step5: <h2>Random Forest & adaBoost with PQN-normalized data</h2>
Step6: <h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>
Turns out a monotonic transformation doesn't really affect any of these things.
I guess they're already close to unit variance...?
Step7: <h2> Great, you can classify things. But make null models and do a sanity check to make
sure you aren't just classifying garbage </h2>
Step8: <h2> Let's look at a slice of retention time </h2> | Python Code:
import time
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import cross_val_score
#from sklearn.model_selection import StratifiedShuffleSplit
#from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.utils import shuffle
from scipy import interp
%matplotlib inline
def remove_zero_columns(X, threshold=1e-20):
# convert zeros to nan, drop all nan columns, then replace leftover nan with zeros
X_non_zero_colum = X.replace(0, np.nan).dropna(how='all', axis=1).replace(np.nan, 0)
#.dropna(how='all', axis=0).replace(np.nan,0)
return X_non_zero_colum
def zero_fill_half_min(X, threshold=1e-20):
# Fill zeros with 1/2 the minimum value of that column
# input dataframe. Add only to zero values
# Get a vector of 1/2 minimum values
half_min = X[X > threshold].min(axis=0)*0.5
# Add the half_min values to a dataframe where everything that isn't zero is NaN.
# then convert NaN's to 0
fill_vals = (X[X < threshold] + half_min).fillna(value=0)
# Add the original dataframe to the dataframe of zeros and fill-values
X_zeros_filled = X + fill_vals
return X_zeros_filled
toy = pd.DataFrame([[1,2,3,0],
[0,0,0,0],
[0.5,1,0,0]], dtype=float)
toy_no_zeros = remove_zero_columns(toy)
toy_filled_zeros = zero_fill_half_min(toy_no_zeros)
print toy
print toy_no_zeros
print toy_filled_zeros
Explanation: <h2> 4ppm </h2>
Enough retcor groups, and fewer peak insertion problems than 4.5 or 5ppm.
End of explanation
### Subdivide the data into a feature table
data_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/processed/MTBLS315/'\
'uhplc_pos/xcms_result_4.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
df
# separate samples from xcms/camera things to make feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'npeaks', 'uhplc_pos',
]
samples_list = df.columns.difference(not_samples)
mz_rt_df = df[not_samples]
# convert to samples x features
X_df_raw = df[samples_list].T
# Remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
X_df_zero_filled = zero_fill_half_min(X_df)
print "original shape: %s \n# zeros: %f\n" % (X_df_raw.shape, (X_df_raw < 1e-20).sum().sum())
print "zero-columns repalced? shape: %s \n# zeros: %f\n" % (X_df.shape,
(X_df < 1e-20).sum().sum())
print "zeros filled shape: %s \n#zeros: %f\n" % (X_df_zero_filled.shape,
(X_df_zero_filled < 1e-20).sum().sum())
# Convert to numpy matrix to play nicely with sklearn
X = X_df.as_matrix()
print X.shape
Explanation: <h2> Import the dataframe and remove any features that are all zero </h2>
End of explanation
# Get mapping between sample name and assay names
path_sample_name_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/a_UPLC_POS_nmfi_and_bsi_diagnosis.txt'
# Index is the sample name
sample_df = pd.read_csv(path_sample_name_map,
sep='\t', index_col=0)
sample_df = sample_df['MS Assay Name']
sample_df.shape
print sample_df.head(10)
# get mapping between sample name and sample class
path_sample_class_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/s_NMFI and BSI diagnosis.txt'
class_df = pd.read_csv(path_sample_class_map,
sep='\t')
# Set index as sample name
class_df.set_index('Sample Name', inplace=True)
class_df = class_df['Factor Value[patient group]']
print class_df.head(10)
# convert all non-malarial classes into a single classes
# (collapse non-malarial febril illness and bacteremia together)
class_map_df = pd.concat([sample_df, class_df], axis=1)
class_map_df.rename(columns={'Factor Value[patient group]': 'class'}, inplace=True)
class_map_df
binary_class_map = class_map_df.replace(to_replace=['non-malarial febrile illness', 'bacterial bloodstream infection' ],
value='non-malarial fever')
binary_class_map
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(binary_class_map['class'])
y = le.transform(binary_class_map['class'])
Explanation: <h2> Get mappings between sample names, file names, and sample classes </h2>
End of explanation
def rf_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=1000):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size,
random_state=random_state)
clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
rf_violinplot(X,y)
# TODO - Switch to using caret for this bs..?
# Do multi-fold cross validation for adaboost classifier
def adaboost_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=200):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
adaboost_violinplot(X,y)
# TODO PQN normalization, and log-transformation,
# and some feature selection (above certain threshold of intensity, use principal components), et
def pqn_normalize(X, integral_first=False, plot=False):
'''
Take a feature table and run PQN normalization on it
'''
# normalize by sum of intensities in each sample first. Not necessary
if integral_first:
sample_sums = np.sum(X, axis=1)
X = (X / sample_sums[:,np.newaxis])
# Get the median value of each feature across all samples
mean_intensities = np.median(X, axis=0)
# Divide each feature by the median value of each feature -
# these are the quotients for each feature
X_quotients = (X / mean_intensities[np.newaxis,:])
if plot: # plot the distribution of quotients from one sample
for i in range(1,len(X_quotients[:,1])):
print 'allquotients reshaped!\n\n',
#all_quotients = X_quotients.reshape(np.prod(X_quotients.shape))
all_quotients = X_quotients[i,:]
print all_quotients.shape
x = np.random.normal(loc=0, scale=1, size=len(all_quotients))
sns.violinplot(all_quotients)
plt.title("median val: %f\nMax val=%f" % (np.median(all_quotients), np.max(all_quotients)))
plt.plot( title="median val: ")#%f" % np.median(all_quotients))
plt.xlim([-0.5, 5])
plt.show()
# Define a quotient for each sample as the median of the feature-specific quotients
# in that sample
sample_quotients = np.median(X_quotients, axis=1)
# Quotient normalize each samples
X_pqn = X / sample_quotients[:,np.newaxis]
return X_pqn
# Make a fake sample, with 2 samples at 1x and 2x dilutions
X_toy = np.array([[1,1,1,],
[2,2,2],
[3,6,9],
[6,12,18]], dtype=float)
print X_toy
print X_toy.reshape(1, np.prod(X_toy.shape))
X_toy_pqn_int = pqn_normalize(X_toy, integral_first=True, plot=True)
print X_toy_pqn_int
print '\n\n\n'
X_toy_pqn = pqn_normalize(X_toy)
print X_toy_pqn
Explanation: <h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>
Turns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUCs instead. You want to give your classifier a lower score if it is really confident and wrong than if it is uncertain and wrong.
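One lightweight way to do that with the helpers above would be to keep cross_val_score but switch the scoring metric (a hedged aside; the cells below build full ROC curves instead):
# auc_scores = cross_val_score(clf, X, y, cv=cross_val_skf, scoring='roc_auc')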
End of explanation
X_pqn = pqn_normalize(X)
print(X_pqn)
Explanation: <h2> pqn normalize your features </h2>
End of explanation
rf_violinplot(X_pqn, y)
# Do multi-fold cross validation for adaboost classifier
adaboost_violinplot(X_pqn, y)
Explanation: <h2>Random Forest & adaBoost with PQN-normalized data</h2>
End of explanation
X_pqn_nlog = np.log(X_pqn)
rf_violinplot(X_pqn_nlog, y)
adaboost_violinplot(X_pqn_nlog, y)
def roc_curve_cv(X, y, clf, cross_val,
path='/home/irockafe/Desktop/roc.pdf',
save=False, plot=True):
t1 = time.time()
# collect vals for the ROC curves
tpr_list = []
mean_fpr = np.linspace(0,1,100)
auc_list = []
# Get the false-positive and true-positive rate
for i, (train, test) in enumerate(cross_val):
clf.fit(X[train], y[train])
y_pred = clf.predict_proba(X[test])[:,1]
# get fpr, tpr
fpr, tpr, thresholds = roc_curve(y[test], y_pred)
roc_auc = auc(fpr, tpr)
#print 'AUC', roc_auc
#sns.plt.plot(fpr, tpr, lw=10, alpha=0.6, label='ROC - AUC = %0.2f' % roc_auc,)
#sns.plt.show()
tpr_list.append(interp(mean_fpr, fpr, tpr))
tpr_list[-1][0] = 0.0
auc_list.append(roc_auc)
if (i % 10 == 0):
            print('{perc}% done! {time}s elapsed'.format(perc=100*float(i)/cross_val.n_iter, time=(time.time() - t1)))
# get mean tpr and fpr
mean_tpr = np.mean(tpr_list, axis=0)
# make sure it ends up at 1.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(auc_list)
if plot:
# plot mean auc
plt.plot(mean_fpr, mean_tpr, label='Mean ROC - AUC = %0.2f $\pm$ %0.2f' % (mean_auc,
std_auc),
lw=5, color='b')
# plot luck-line
plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',
label='Luck', alpha=0.5)
# plot 1-std
std_tpr = np.std(tpr_list, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.2,
label=r'$\pm$ 1 stdev')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve, {iters} iterations of {cv} cross validation'.format(
iters=cross_val.n_iter, cv='{train}:{test}'.format(test=cross_val.test_size, train=(1-cross_val.test_size)))
)
plt.legend(loc="lower right")
if save:
plt.savefig(path, format='pdf')
plt.show()
return tpr_list, auc_list, mean_fpr
rf_estimators = 1000
n_iter = 3
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
rf_graph_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/rf_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=rf_estimators, cv=n_iter)
print(cross_val_rf.n_iter)
print(cross_val_rf.test_size)
tpr_vals, auc_vals, mean_fpr = roc_curve_cv(X_pqn, y, clf_rf, cross_val_rf,
path=rf_graph_path, save=False)
# For adaboosted
n_iter = 3
test_size = 0.3
random_state = 1
adaboost_estimators = 200
adaboost_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/adaboost_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=adaboost_estimators,
cv=n_iter)
cross_val_adaboost = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=adaboost_estimators, random_state=random_state)
adaboost_tpr, adaboost_auc, adaboost_fpr = roc_curve_cv(X_pqn, y, clf, cross_val_adaboost,
path=adaboost_path)
Explanation: <h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>
It turns out a monotonic transformation like the log doesn't change the results here: random forests and AdaBoost with tree-based learners split on feature thresholds, so any order-preserving transform of the features leaves the cross-validated performance essentially unchanged.
End of explanation
# Make a null model AUC curve
def make_null_model(X, y, clf, cross_val, random_state=1, num_shuffles=5, plot=True):
'''
Runs the true model, then sanity-checks by:
Shuffles class labels and then builds cross-validated ROC curves from them.
    Compares the true AUC distribution vs. the shuffled (null) AUC distribution
'''
null_aucs = []
    print(y.shape)
    print(X.shape)
tpr_true, auc_true, fpr_true = roc_curve_cv(X, y, clf, cross_val,
save=True)
# shuffle y lots of times
for i in range(0, num_shuffles):
#Iterate through the shuffled y vals and repeat with appropriate params
# Retain the auc vals for final plotting of distribution
y_shuffle = shuffle(y)
cross_val.y = y_shuffle
cross_val.y_indices = y_shuffle
        print('Number of labels unchanged between original and shuffle: %s' % (y == cross_val.y).sum())
# Get auc values for number of iterations
tpr, auc, fpr = roc_curve_cv(X, y_shuffle, clf, cross_val, plot=False,
)
null_aucs.append(auc)
#plot the outcome
if plot:
flattened_aucs = [j for i in null_aucs for j in i]
my_dict = {'true_auc': auc_true, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy)
# Plot distribution of AUC vals
plt.title("Classification is not possible when data is shuffled")
#sns.plt.ylabel('count')
plt.xlabel('True model vs. Null Model')
plt.ylabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.savefig('/home/irockafe/Desktop/auc distribution')
plt.show()
# Do a quick t-test to see if odds of randomly getting an AUC that good
return auc_true, null_aucs
# Make a null model AUC curve & compare it to null-model
# Random forest magic!
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
true_auc, all_aucs = make_null_model(X_pqn, y, clf_rf, cross_val_rf, num_shuffles=5)
# make dataframe from true and false aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
print(df_tidy.head())
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy, bw=0.7)
# Plot distribution of AUC vals
plt.title("Classification is not possible when data is shuffled")
#sns.plt.ylabel('count')
plt.xlabel('True model vs. Null Model')
plt.ylabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.savefig('/home/irockafe/Desktop/auc distribution', format='pdf')
plt.show()
Explanation: <h2> Great, you can classify things. But make null models and do a sanity check to make
sure you aren't just classifying garbage </h2>
End of explanation
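One way to put a number on this sanity check (a sketch, assuming true_auc and all_aucs are the outputs of make_null_model above): an empirical, permutation-style p-value comparing the mean true AUC against the null AUC distribution.
null_aucs_flat = np.array([a for run in all_aucs for a in run])
true_mean_auc = np.mean(true_auc)
# Fraction of null AUCs at least as large as the true mean AUC,
# with a +1 correction so the p-value is never exactly zero.
p_value = (np.sum(null_aucs_flat >= true_mean_auc) + 1.0) / (len(null_aucs_flat) + 1.0)
print('Mean true AUC: %.3f, empirical p-value vs. null: %.4f' % (true_mean_auc, p_value))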
def get_rt_slice(df, rt_bounds):
'''
    PURPOSE:
        Given a tidy feature table with 'mz' and 'rt' column headers,
        retain only the features whose rt falls within rt_bounds
    INPUT:
        df - a tidy pandas dataframe with 'mz' and 'rt' column
            headers
        rt_bounds - a (rt_left, rt_right) tuple: the boundaries of your rt slice, in seconds
'''
out_df = df.loc[ (df['rt'] > rt_bounds[0]) &
(df['rt'] < rt_bounds[1])]
return out_df
#df.head()
df_slice = get_rt_slice(df, (750, 1050))
print(df_slice.shape)
# separate samples from xcms/camera things to make feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'npeaks', 'uhplc_pos',
]
samples_list = df_slice.columns.difference(not_samples)
mz_rt_df = df_slice[not_samples]
# convert to samples x features
X_df_raw = df_slice[samples_list].T
# Remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
print(X_df.shape)
X_slice = X_df.values
# pqn-normalize the retention-time slice
X_slice_pqn = pqn_normalize(X_slice)
# Make a null model AUC curve & compare it to null-model
# For the slice of retention time
# Random forest magic!
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter,
test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators,
random_state=random_state)
true_auc, all_aucs = make_null_model(X_slice_pqn, y, clf_rf, cross_val_rf, num_shuffles=5)
# make dataframe from true and false aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
print(df_tidy.head())
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy, bw=0.7)
# Plot distribution of AUC vals
plt.title("Classification is not possible when data is shuffled")
#sns.plt.ylabel('count')
plt.xlabel('True model vs. Null Model')
plt.ylabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.savefig('/home/irockafe/Desktop/auc_distribution', format='pdf')
plt.show()
Explanation: <h2> Let's look at a slice of retention time </h2>
End of explanation |
10,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Merge overlapping striplogs
Imagine we have a Striplog with overlapping Intervals. We would like to be able to merge the Intervals in this Striplog, while following some rules about priority.
For example, imagine we have a striplog object, as shown below on the left. Note that the Intervals in the striplog have a time property, so I've 'exploded' that out along the horizontal axis so the Intervals don't all overlap in this display. The point is that they overlap in depth.
<img src="./merging.png" />
NB This other property does not have to be time, it could be any property that we can use to make comparisons.
NB again The 'mix' (blended) merges indicated in the figure have not been implemented yet.
Make some data
Let's make some striplogs that look like these...
Step2: Here's what the first Interval looks like
Step4: We'll need a Legend so we can make nice plots
Step5: Now we can make a plot. I'll make the Intervals semi-transparent so you can see how they overlap
Step6: It's not all that pretty, but we can also plot each time separately as in the figure we started with
Step7: Merge
We'd like to merge the Intervals so that everything retains depth order of course, but we want a few options about which Intervals should 'win' when there are several Intervals overlapping. For example, when looking at a particular depth, do we want to retain the Interval with the shallowest top? Or the deepest base? Or the greatest thickness? Or the one from the latest time?
Step8: So if we merge using top as a priority, we'll get whichever Interval has the greatest (deepest) top at any given depth
Step9: If we use reverse=True then we get whichever has the shallowest top
Step10: Thickness priority
What if we want to keep the thickest bed at any given depth?
Step11: Or the thinnest bed?
Step12: Using the other attribute
Remember we also have the time attribute? We could use that... we'll end up with which bed has the greatest (i.e. latest) top
Step13: ...or earliest
Step14: Real data example
Intervals are perforations. Many of them overlap. They all have datetimes. For a given depth, we want to keep the latest perforation.
Step16: Merge the perfs
Now we can merge based on date
Step17: There are none, that's good!
Export to Petrel
Step20: This isn't quite right for Petrel.
Let's make another format. | Python Code:
from striplog import Striplog
csv = """Top,Base,Comp Time,Comp Depth
100,200,2,a
110,120,1,b
150,325,3,c
210,225,1,d
300,400,2,e
350,375,3,f"""
s = Striplog.from_csv(text=csv)
Explanation: Merge overlapping striplogs
Imagine we have a Striplog with overlapping Intervals. We would like to be able to merge the Intervals in this Striplog, while following some rules about priority.
For example, imagine we have a striplog object, as shown below on the left. Note that the Intervals in the striplog have a time property, so I've 'exploded' that out along the horizontal axis so the Intervals don't all overlap in this display. The point is that they overlap in depth.
<img src="./merging.png" />
NB This other property does not have to be time, it could be any property that we can use to make comparisons.
NB again The 'mix' (blended) merges indicated in the figure have not been implemented yet.
Make some data
Let's make some striplogs that look like these...
End of explanation
s[0]
Explanation: Here's what the first Interval looks like:
End of explanation
from striplog import Legend
# Data for legend (for display)...
leg_csv = """Colour,Comp Depth
red,a
orange,b
limegreen,c
cyan,d
blue,e
magenta,f"""
legend = Legend.from_csv(text=leg_csv)
Explanation: We'll need a Legend so we can make nice plots:
End of explanation
s.plot(legend=legend, lw=1, aspect=3, alpha=0.35)
Explanation: Now we can make a plot. I'll make the Intervals semi-transparent so you can see how they overlap:
End of explanation
import matplotlib.pyplot as plt
fig, (ax0, ax1, ax2) = plt.subplots(ncols=3, sharey=True, figsize=(8, 8))
s.find('1.0').plot(legend=legend, ax=ax0)
s.find('2.0').plot(legend=legend, ax=ax1)
s.find('3.0').plot(legend=legend, ax=ax2)
plt.ylim(420, 80)
Explanation: It's not all that pretty, but we can also plot each time separately as in the figure we started with:
End of explanation
s[0]
Explanation: Merge
We'd like to merge the Intervals so that everything retains depth order of course, but we want a few options about which Intervals should 'win' when there are several Intervals overlapping. For example, when looking at a particular depth, do we want to retain the Interval with the shallowest top? Or the deepest base? Or the greatest thickness? Or the one from the latest time?
End of explanation
s.merge('top').plot(legend=legend, aspect=3)
Explanation: So if we merge using top as a priority, we'll get whichever Interval has the greatest (deepest) top at any given depth:
End of explanation
s.merge('top', reverse=True).plot(legend=legend, aspect=3)
Explanation: If we use reverse=True then we get whichever has the shallowest top:
End of explanation
s.merge('thickness').plot(legend=legend, aspect=3)
Explanation: Thickness priority
What if we want to keep the thickest bed at any given depth?
End of explanation
s.merge('thickness', reverse=True).plot(legend=legend, aspect=3)
Explanation: Or the thinnest bed?
End of explanation
s.merge('time').plot(legend=legend, aspect=3)
Explanation: Using the other attribute
Remember we also have the time attribute? We could use that... we'll end up with which bed has the greatest (i.e. latest) top:
End of explanation
s.merge('time', reverse=True).plot(legend=legend, aspect=3)
Explanation: ...or earliest:
End of explanation
from striplog import Striplog
remap = {'bottom':'base','type':'comp type'}
s = Striplog.from_csv("og815.csv", remap=remap)
from datetime import datetime
for iv in s:
iv.primary.date = datetime.fromisoformat(iv.data['date'])
s[1]
s.plot(alpha=0.25)
Explanation: Real data example
Intervals are perforations. Many of them overlap. They all have datetimes. For a given depth, we want to keep the latest perforation.
End of explanation
sm = s.merge('date')
leg = """colour,comp type
limegreen,perforation
red,squeeze"""
from striplog import Legend
legend = Legend.from_csv(text=leg)
sm.plot(legend, lw=1, aspect=5)
sm.find_overlaps()
Explanation: Merge the perfs
Now we can merge based on date:
End of explanation
print(sm.to_csv())
Explanation: There are none, that's good!
Export to Petrel
End of explanation
def _to_petrel_csv(strip, attr, null):
    """Make Petrel-ready CSV text for a striplog."""
csv = ""
gap_top = 0
for iv in strip.merge_neighbours():
if iv.top.middle != gap_top:
csv += f"{gap_top},{null}\n"
csv += f"{iv.top.middle},{getattr(iv.primary, attr)}\n"
gap_top = iv.base.middle
csv += f"{iv.base.middle},{null}\n"
return csv
def to_petrel(fname, strip, attr, null=None):
    """Make a Petrel-ready CSV file.

    Args:
        fname (str): the filename, including extension
        strip (Striplog): the striplog
        attr (str): the attribute of the primary component to export
        null (str): what to use for nulls

    Returns:
        None (writes file as side effect)
    """
if null is None:
null = "-999.25"
with open(fname, 'w') as f:
f.write(_to_petrel_csv(strip, attr, null))
return
print(_to_petrel_csv(sm, attr='type', null=-999.25))
to_petrel("well_for_petrel.csv", sm, attr='type')
Explanation: This isn't quite right for Petrel.
Let's make another format.
End of explanation |
10,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python's super()
TODO
* https
Step1: Without super()
Before the super() function existed, we would have hardwired the call with A.bonjour(self).
...
Step2: The same example with an argument
Step3: An example showing that A.bonjour() really is called on the object b
Step4: With super()
Step5: The same example with an argument
Step6: An example showing that super().bonjour() really is called on the object b
Step7: In fact, super() returns the implicit class A
Step8: Adding constraints
Step9: Which is equivalent to
Step10: The method resolution order (MRO)
Step11: The bases of a class
Step12: First use case
Step13: First use case | Python Code:
help(super)
Explanation: Python's super()
TODO
* https://docs.python.org/3/library/functions.html#super
* https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
* https://stackoverflow.com/questions/904036/chain-calling-parent-constructors-in-python
* https://stackoverflow.com/questions/2399307/how-to-invoke-the-super-constructor
super(type[, object_or_type])
returns a proxy object that delegates attribute lookups to the base class of type.
If the second argument is omitted, the super object returned is unbound.
If the second argument is an object, then isinstance(object, type) must be true.
If the second argument is a type, then issubclass(type2, type) must be true.
This is useful for accessing inherited methods that have been overridden in a class.
There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages.
The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that this method have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).
For both use cases, a typical superclass call looks like this:
class C(B):
def method(self, arg):
super().method(arg) # This does the same thing as:
# super(C, self).method(arg)
Also note that, aside from the zero argument form, super() is not limited to use inside methods. The two argument form specifies the arguments exactly and makes the appropriate references.
The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods.
End of explanation
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
A.bonjour(self)
b = B()
b.bonjour()
Explanation: Without super()
Before the super() function existed, we would have hardwired the call with A.bonjour(self).
...
End of explanation
class A:
def bonjour(self, arg):
print("Bonjour de la part de A. J'ai été appelée avec l'argument arg:", arg)
class B(A):
def bonjour(self, arg):
print("Bonjour de la part de B. J'ai été appelée avec l'argument arg:", arg)
A.bonjour(self, arg)
b = B()
b.bonjour('hey')
Explanation: The same example with an argument:
End of explanation
class A:
def __init__(self):
self.nom = "Alice"
def bonjour(self):
print("Bonjour de la part de A. Je m'appelle:", self.nom)
class B(A):
def __init__(self):
self.nom = "Bob"
def bonjour(self):
A.bonjour(self)
b = B()
b.bonjour()
Explanation: An example showing that A.bonjour() really is called on the object b:
End of explanation
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
        super().bonjour()  # instead of "A.bonjour(self)"
b = B()
b.bonjour()
Explanation: With super(): a first example
In the previous example, the line A.bonjour(self) (inside B.bonjour()) explicitly names both the class containing the method to call (here A) and the object (self) on which the method is called.
One of the two main benefits of super() is that it makes the calling class A implicit (along with the object self on which the method is called).
So the call A.bonjour(self) becomes super().bonjour().
For example, if we decide to rename class A, or decide that B should inherit from C instead of A, we don't need to update the body of B.bonjour(). The changes stay isolated.
End of explanation
class A:
def bonjour(self, arg):
print("Bonjour de la part de A. J'ai été appelée avec l'argument arg:", arg)
class B(A):
def bonjour(self, arg):
print("Bonjour de la part de B. J'ai été appelée avec l'argument arg:", arg)
super().bonjour(arg)
b = B()
b.bonjour('hey')
Explanation: The same example with an argument:
End of explanation
class A:
def __init__(self):
self.nom = "Alice"
def bonjour(self):
print("Bonjour de la part de A. Je m'appelle:", self.nom)
class B(A):
def __init__(self):
self.nom = "Bob"
def bonjour(self):
super().bonjour()
b = B()
b.bonjour()
Explanation: An example showing that super().bonjour() really is called on the object b:
End of explanation
class A:
pass
class B(A):
def bonjour(self):
print(super())
b = B()
b.bonjour()
Explanation: In fact, super() returns (a proxy for) the implicit class A:
End of explanation
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
class C(B):
def bonjour(self):
print("Bonjour de la part de C.")
super(C, self).bonjour()
c = C()
c.bonjour()
Explanation: Adding constraints: super(type, obj_or_type) [TODO]
Another syntax can be used to make the base class used for the call a little more explicit:
End of explanation
class A:
def bonjour(self):
print("Bonjour de la part de A.")
class B(A):
def bonjour(self):
print("Bonjour de la part de B.")
class C(B):
def bonjour(self):
print("Bonjour de la part de C.")
super().bonjour()
c = C()
c.bonjour()
# **TODO**
class A():
def bonjour(self):
print("Bonjour de la part de A.")
class B:
def bonjour(self):
print("Bonjour de la part de B.")
class C(A, B):
def bonjour(self):
print("Bonjour de la part de C.")
print(super(B, self))
super(B, self).bonjour()
c = C()
c.bonjour()
Explanation: Which is equivalent to:
End of explanation
class A:
pass
class B(A):
pass
class C(A):
pass
class D(B, C):
pass
print(D.__mro__)
Explanation: The method resolution order (MRO)
End of explanation
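As a small illustration (not from the original notebook) of why the MRO matters: in a diamond hierarchy where every class calls super(), each implementation runs exactly once, in MRO order.
class Base:
    def hello(self):
        print("Hello from Base.")

class Left(Base):
    def hello(self):
        super().hello()
        print("Hello from Left.")

class Right(Base):
    def hello(self):
        super().hello()
        print("Hello from Right.")

class Child(Left, Right):
    def hello(self):
        super().hello()
        print("Hello from Child.")

# MRO is Child -> Left -> Right -> Base, so this prints Base, Right, Left, Child.
Child().hello()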
class A:
pass
class B(A):
pass
class C(A):
pass
class D(B, C):
pass
print(D.__bases__)
Explanation: The bases of a class
End of explanation
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super(B, self).hello(arg)
print("Hello", arg, "from B.")
a = A()
b = B()
#a.hello('john')
b.hello('john')
#This works for class methods too:
class C(B):
@classmethod
def cmeth(cls, arg):
super().cmeth(arg)
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super(B, self).hello(arg)
print("Hello", arg, "from B.")
class C(B):
def hello(self, arg):
super(C, self).hello(arg)
print("Hello", arg, "from C.")
a = A()
b = B()
c = C()
c.hello('john')
# how can we call B.hello() on c?
# how can we call A.hello() on c?
class A(object):
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from B.")
class C(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from C.")
class D(B, C):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from D.")
a = A()
b = B()
c = C()
d = D()
a.hello('john')
print()
b.hello('john')
print()
c.hello('john')
print()
d.hello('john')
class A(object):
def __init__(self, name):
self.name = name
def hello(self, arg):
print("Hello", arg, "from A.")
class B(A):
def hello(self, arg):
super().hello(arg)
print("Hello", arg, "from B.")
a = A("foo")
b = B()
a.hello('john')
print()
b.hello('john')
Explanation: First use case: ...
"In addition to isolating changes, there is another major benefit to computed indirection, one that may not be familiar to people coming from static languages. Since the indirection is computed at runtime, we have the freedom to influence the calculation so that the indirection will point to some other class."
End of explanation
class A:
def __init__(self):
self.name = "A"
def hello(self, arg):
print("Hello from A with arg:", arg, "self beeing", self.name)
class B(A):
def __init__(self):
self.name = "B"
def hello(self, arg):
super().hello(arg)
print("Hello from B with arg:", arg, "self beeing", self.name)
a = A()
a.hello('foo')
b = B()
b.hello('foo')
class A:
def __init__(self, name):
self.name = name
def hello(self, arg):
print("Hello from A with arg:", arg, "self beeing", self.name)
class B(A):
#def __init__(self):
# self.name = "B"
def hello(self, arg):
super().hello(arg)
print("Hello from B with arg:", arg, "self beeing", self.name)
a = A("A")
a.hello('foo')
b = B()
b.hello('foo')
class A:
def __init__(self, arg):
print(arg)
class B(A):
pass
b = B()
Explanation: First use case: ...
End of explanation |
10,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPy
GPy is a framework for Gaussian process based applications. It is designed for speed and reliability. Its functionality rests on three main pillars:
Ease of use
Reproducibility
Scalability
In this tutorial we will have a look at the three main pillars, so you can use Gaussian processes with peace of mind and without the complications of cutting-edge research code.
Step1: Ease of use
GPy handles the parameters of its models through the parameterization framework built into GPy itself. The framework lets you work with parameters in an intelligent and intuitive way.
Step2: Changing parameters is as easy as assigning new values to the respective parameter
Step3: The whole model gets updated automatically, when updating a parameter, without you having to interfere at all.
Change some parameters and plot the results, using the models plot() function
What do the different parameters change in the result?
Step4: The parameters can be optimized using gradient based optimization. The optimization routines are taken over from scipy. Running the optimization in a GPy model is a call to the models own optimize method.
Step5: Reproducibility
GPy has built-in save and load functionality, allowing you to pickle a model with all its parameters and data in a single file. This is useful when transferring models to another location, or rerunning models with different initializations, etc.
Try saving a model using the model's pickle(<name>) function and load it again using GPy.load(<name>). The loaded model is fully functional and can be used as usual.
Step6: We have put a lot of effort into stability of execution, so try randomizing a model using its randomize() function, which randomizes the model's parameters. After optimization the result should be very close to the previous model optimizations.
Step7: Scalability
GPy's parameterization framework can handle as many parameters as you like, and it is memory and speed efficient when setting parameters because only one copy of the parameters is kept in memory.
There are many scalability based Gaussian process methods implemented in GPy, have a look at
Step8: We can easily run a sparse GP on above data by using the wrapper methods for running different GPy models | Python Code:
import GPy, numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: GPy
GPy is a framework for Gaussian process based applications. It is designed for speed and reliability. Its functionality rests on three main pillars:
Ease of use
Reproducibility
Scalability
In this tutorial we will have a look at the three main pillars, so you can use Gaussian processes with peace of mind and without the complications of cutting-edge research code.
End of explanation
X = np.random.uniform(0, 10, (200, 1))
f = np.sin(.3*X) + .3*np.cos(1.3*X)
f -= f.mean()
Y = f+np.random.normal(0, .1, f.shape)
plt.scatter(X, Y)
m = GPy.models.GPRegression(X, Y)
m
Explanation: Ease of use
GPy handles the parameters of its models through the parameterization framework built into GPy itself. The framework lets you work with parameters in an intelligent and intuitive way.
End of explanation
m.rbf.lengthscale = 1.5
m
Explanation: Changing parameters is as easy as assigning new values to the respective parameter:
End of explanation
# Type your code here
Explanation: The whole model gets updated automatically, when updating a parameter, without you having to interfere at all.
Change some parameters and plot the results, using the models plot() function
What do the different parameters change in the result?
End of explanation
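A possible solution sketch for the exercise above (the parameter values below are arbitrary choices): the lengthscale controls how quickly the fitted function can wiggle, the kernel variance controls its amplitude, and the noise variance controls how closely it tracks the data.
m.rbf.lengthscale = 0.5
m.rbf.variance = 2.
m.Gaussian_noise.variance = 0.05
_ = m.plot()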
m.optimize(messages=1)
_ = m.plot()
# You can use different kernels to use on the data.
# Try out three different kernels and plot the result after optimizing the GP:
# See kernels using GPy.kern.<tab>
# Type your code here
Explanation: The parameters can be optimized using gradient based optimization. The optimization routines are taken over from scipy. Running the optimization in a GPy model is a call to the models own optimize method.
End of explanation
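A possible sketch for the kernel exercise (assuming the X and Y generated above; the three kernels picked here are just examples):
for kernel in [GPy.kern.RBF(1), GPy.kern.Matern32(1), GPy.kern.Matern52(1)]:
    # Fit and plot a GP regression with each candidate kernel
    m_k = GPy.models.GPRegression(X, Y, kernel=kernel)
    m_k.optimize()
    _ = m_k.plot()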
# Type your code here
Explanation: Reproducibility
GPy has built-in save and load functionality, allowing you to pickle a model with all its parameters and data in a single file. This is useful when transferring models to another location, or rerunning models with different initializations, etc.
Try saving a model using the model's pickle(<name>) function and load it again using GPy.load(<name>). The loaded model is fully functional and can be used as usual.
End of explanation
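A minimal sketch of the save/load round trip described above (the file name is arbitrary):
m.pickle('gpy_model.pickle')           # save the model with its parameters and data
m_loaded = GPy.load('gpy_model.pickle')
print(m_loaded)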
# Type your code here
Explanation: We have put a lot of effort into stability of execution, so try randomizing a model using its randomize() function, which randomizes the model's parameters. After optimization the result should be very close to the previous model optimizations.
End of explanation
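A sketch of the randomize-and-reoptimize check described above:
m.randomize()      # scramble the parameters
m.optimize()       # re-optimize from the random starting point
print(m)           # should end up very close to the earlier optimum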
GPy.core.SparseGP?
GPy.core.SVGP?
Explanation: Scalability
GPy's parameterization framework can handle as many parameters as you like, and it is memory and speed efficient when setting parameters because only one copy of the parameters is kept in memory.
There are many scalability based Gaussian process methods implemented in GPy, have a look at
End of explanation
#Type your code here
Explanation: We can easily run a sparse GP on above data by using the wrapper methods for running different GPy models:
GPy.models.<tab>
Use the GPy.models.SparseGPRegression to run the above data using the sparse GP:
End of explanation |
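A possible solution sketch for the sparse GP exercise (assuming the X and Y from earlier; the number of inducing points is an arbitrary choice):
m_sparse = GPy.models.SparseGPRegression(X, Y, num_inducing=20)
m_sparse.optimize(messages=1)
_ = m_sparse.plot()
print(m_sparse)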
10,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EventVestor
Step1: Let's go over the columns
Step2: Now suppose we want a DataFrame of the Blaze Data Object above, want to filter it further down to the announcements only, and we only want the sid, issue_amount, and the asof_date. | Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import issue_equity
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import issue_equity_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
issue_equity.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
issue_equity.count()
# Let's see what the data looks like. We'll grab the first three rows.
issue_equity[:3]
Explanation: EventVestor: Issue Equity
In this notebook, we'll take a look at EventVestor's Issue Equity dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents events and announcements covering secondary equity issues by companies.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
Free samples and limits
One other key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.
With preamble in place, let's get started:
End of explanation
issues = issue_equity[('2014-12-31' < issue_equity['asof_date']) &
(issue_equity['asof_date'] <'2016-01-01') &
(issue_equity.issue_amount < 20)&
(issue_equity.issue_units == "$M")]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
issues.sort('asof_date')
Explanation: Let's go over the columns:
- event_id: the unique identifier for this event.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: this should always be Issue Equity.
- event_headline: a brief description of the event
- issue_amount: value of the equity issued in issue_units
- issue_units: units of the issue_amount: most commonly millions of dollars or millions of shares
- issue_stage: phase of the issue process: announcement, closing, pricing, etc. Note: currently, there appear to be unrelated entries in this column. We are speaking with the data vendor to amend this.
- event_rating: this is always 1. The meaning of this is uncertain.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all 2015 equity issues smaller than $20M.
End of explanation
df = odo(issues, pd.DataFrame)
df = df[df.issue_stage == "Announcement"]
df = df[['sid', 'issue_amount', 'asof_date']].dropna()
# When printing a pandas DataFrame, the head 30 and tail 30 rows are displayed. The middle is truncated.
df
Explanation: Now suppose we want a DataFrame of the Blaze Data Object above, want to filter it further down to the announcements only, and we only want the sid, issue_amount, and the asof_date.
End of explanation |
10,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background on projectors and projections
This tutorial provides background information on projectors and Signal Space
Projection (SSP), and covers loading and saving projectors, adding and removing
projectors from Raw objects, the difference between "applied" and "unapplied"
projectors, and at what stages MNE-Python applies projectors automatically.
We'll start by importing the Python modules we need; we'll also define a short
function to make it easier to make several plots that look similar
Step1: What is a projection?
In the most basic terms, a projection is an operation that converts one set
of points into another set of points, where repeating the projection
operation on the resulting points has no effect. To give a simple geometric
example, imagine the point $(3, 2, 5)$ in 3-dimensional space. A
projection of that point onto the $x, y$ plane looks a lot like a
shadow cast by that point if the sun were directly above it
Step2: <div class="alert alert-info"><h4>Note</h4><p>The ``@`` symbol indicates matrix multiplication on NumPy arrays, and was
introduced in Python 3.5 / NumPy 1.10. The notation ``plot(*point)`` uses
Python `argument expansion`_ to "unpack" the elements of ``point`` into
separate positional arguments to the function. In other words,
``plot(*point)`` expands to ``plot(3, 2, 5)``.</p></div>
Notice that we used matrix multiplication to compute the projection of our
point $(3, 2, 5)$ onto the $x, y$ plane
Step3: Knowing that, we can compute a plane that is orthogonal to the effect of the
trigger (using the fact that a plane through the origin has equation
$Ax + By + Cz = 0$ given a normal vector $(A, B, C)$), and
project our real measurements onto that plane.
Step4: Computing the projection matrix from the trigger_effect vector is done
using singular value decomposition (SVD); interested readers may
consult the internet or a linear algebra textbook for details on this method.
With the projection matrix in place, we can project our original vector
$(3, 2, 5)$ to remove the effect of the trigger, and then plot it
Step5: Just as before, the projection matrix will map any point in $x, y, z$
space onto that plane, and once a point has been projected onto that plane,
applying the projection again will have no effect. For that reason, it should
be clear that although the projected points vary in all three $x$,
$y$, and $z$ directions, the set of projected points have only
two effective dimensions (i.e., they are constrained to a plane).
.. sidebar
Step6: In MNE-Python, the environmental noise vectors are computed using principal
component analysis, usually abbreviated "PCA", which is why the SSP
projectors usually have names like "PCA-v1". (Incidentally, since the process
of performing PCA uses singular value decomposition under the hood,
it is also common to see phrases like "projectors were computed using SVD" in
published papers.) The projectors are stored in the projs field of
raw.info
Step7: raw.info['projs'] is an ordinary Python
Step8: The
Step9: Computing projectors
In MNE-Python, SSP vectors can be computed using general purpose functions
Step10: Additional ways of visualizing projectors are covered in the tutorial
tut-artifact-ssp.
Loading and saving projectors
SSP can be used for other types of signal cleaning besides just reduction of
environmental noise. You probably noticed two large deflections in the
magnetometer signals in the previous plot that were not removed by the
empty-room projectors — those are artifacts of the subject's heartbeat. SSP
can be used to remove those artifacts as well. The sample data includes
projectors for heartbeat noise reduction that were saved in a separate file
from the raw data, which can be loaded with the
Step11: There is a corresponding
Step12: To remove projectors, there is a corresponding method | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa
from scipy.linalg import svd
import mne
def setup_3d_axes():
ax = plt.axes(projection='3d')
ax.view_init(azim=-105, elev=20)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_xlim(-1, 5)
ax.set_ylim(-1, 5)
ax.set_zlim(0, 5)
return ax
Explanation: Background on projectors and projections
This tutorial provides background information on projectors and Signal Space
Projection (SSP), and covers loading and saving projectors, adding and removing
projectors from Raw objects, the difference between "applied" and "unapplied"
projectors, and at what stages MNE-Python applies projectors automatically.
We'll start by importing the Python modules we need; we'll also define a short
function to make it easier to make several plots that look similar:
End of explanation
ax = setup_3d_axes()
# plot the vector (3, 2, 5)
origin = np.zeros((3, 1))
point = np.array([[3, 2, 5]]).T
vector = np.hstack([origin, point])
ax.plot(*vector, color='k')
ax.plot(*point, color='k', marker='o')
# project the vector onto the x,y plane and plot it
xy_projection_matrix = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
projected_point = xy_projection_matrix @ point
projected_vector = xy_projection_matrix @ vector
ax.plot(*projected_vector, color='C0')
ax.plot(*projected_point, color='C0', marker='o')
# add dashed arrow showing projection
arrow_coords = np.concatenate([point, projected_point - point]).flatten()
ax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1, color='C1',
linewidth=1, linestyle='dashed')
Explanation: What is a projection?
In the most basic terms, a projection is an operation that converts one set
of points into another set of points, where repeating the projection
operation on the resulting points has no effect. To give a simple geometric
example, imagine the point $(3, 2, 5)$ in 3-dimensional space. A
projection of that point onto the $x, y$ plane looks a lot like a
shadow cast by that point if the sun were directly above it:
End of explanation
trigger_effect = np.array([[3, -1, 1]]).T
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The ``@`` symbol indicates matrix multiplication on NumPy arrays, and was
introduced in Python 3.5 / NumPy 1.10. The notation ``plot(*point)`` uses
Python `argument expansion`_ to "unpack" the elements of ``point`` into
separate positional arguments to the function. In other words,
``plot(*point)`` expands to ``plot(3, 2, 5)``.</p></div>
Notice that we used matrix multiplication to compute the projection of our
point $(3, 2, 5)$ onto the $x, y$ plane:
\begin{align}\left[
\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix}
\right]
\left[ \begin{matrix} 3 \\ 2 \\ 5 \end{matrix} \right] =
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right]\end{align}
...and that applying the projection again to the result just gives back the
result again:
\begin{align}\left[
\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{matrix}
\right]
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right] =
\left[ \begin{matrix} 3 \\ 2 \\ 0 \end{matrix} \right]\end{align}
From an information perspective, this projection has taken the point
$x, y, z$ and removed the information about how far in the $z$
direction our point was located; all we know now is its position in the
$x, y$ plane. Moreover, applying our projection matrix to any point
in $x, y, z$ space will reduce it to a corresponding point on the
$x, y$ plane. The term for this is a subspace: the projection matrix
projects points in the original space into a subspace of lower dimension
than the original. The reason our subspace is the $x,y$ plane (instead
of, say, the $y,z$ plane) is a direct result of the particular values
in our projection matrix.
Example: projection as noise reduction
Another way to describe this "loss of information" or "projection into a
subspace" is to say that projection reduces the rank (or "degrees of
freedom") of the measurement — here, from 3 dimensions down to 2. On the
other hand, if you know that measurement component in the $z$ direction
is just noise due to your measurement method, and all you care about are the
$x$ and $y$ components, then projecting your 3-dimensional
measurement into the $x, y$ plane could be seen as a form of noise
reduction.
Of course, it would be very lucky indeed if all the measurement noise were
concentrated in the $z$ direction; you could just discard the $z$
component without bothering to construct a projection matrix or do the matrix
multiplication. Suppose instead that in order to take that measurement you
had to pull a trigger on a measurement device, and the act of pulling the
trigger causes the device to move a little. If you measure how
trigger-pulling affects measurement device position, you could then "correct"
your real measurements to "project out" the effect of the trigger pulling.
Here we'll suppose that the average effect of the trigger is to move the
measurement device by $(3, -1, 1)$:
End of explanation
# compute the plane orthogonal to trigger_effect
x, y = np.meshgrid(np.linspace(-1, 5, 61), np.linspace(-1, 5, 61))
A, B, C = trigger_effect
z = (-A * x - B * y) / C
# cut off the plane below z=0 (just to make the plot nicer)
mask = np.where(z >= 0)
x = x[mask]
y = y[mask]
z = z[mask]
Explanation: Knowing that, we can compute a plane that is orthogonal to the effect of the
trigger (using the fact that a plane through the origin has equation
$Ax + By + Cz = 0$ given a normal vector $(A, B, C)$), and
project our real measurements onto that plane.
End of explanation
# compute the projection matrix
U, S, V = svd(trigger_effect, full_matrices=False)
trigger_projection_matrix = np.eye(3) - U @ U.T
# project the vector onto the orthogonal plane
projected_point = trigger_projection_matrix @ point
projected_vector = trigger_projection_matrix @ vector
# plot the trigger effect and its orthogonal plane
ax = setup_3d_axes()
ax.plot_trisurf(x, y, z, color='C2', shade=False, alpha=0.25)
ax.quiver3D(*np.concatenate([origin, trigger_effect]).flatten(),
arrow_length_ratio=0.1, color='C2', alpha=0.5)
# plot the original vector
ax.plot(*vector, color='k')
ax.plot(*point, color='k', marker='o')
offset = np.full((3, 1), 0.1)
ax.text(*(point + offset).flat, '({}, {}, {})'.format(*point.flat), color='k')
# plot the projected vector
ax.plot(*projected_vector, color='C0')
ax.plot(*projected_point, color='C0', marker='o')
offset = np.full((3, 1), -0.2)
ax.text(*(projected_point + offset).flat,
'({}, {}, {})'.format(*np.round(projected_point.flat, 2)),
color='C0', horizontalalignment='right')
# add dashed arrow showing projection
arrow_coords = np.concatenate([point, projected_point - point]).flatten()
ax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1,
color='C1', linewidth=1, linestyle='dashed')
Explanation: Computing the projection matrix from the trigger_effect vector is done
using singular value decomposition (SVD); interested readers may
consult the internet or a linear algebra textbook for details on this method.
With the projection matrix in place, we can project our original vector
$(3, 2, 5)$ to remove the effect of the trigger, and then plot it:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Just as before, the projection matrix will map any point in $x, y, z$
space onto that plane, and once a point has been projected onto that plane,
applying the projection again will have no effect. For that reason, it should
be clear that although the projected points vary in all three $x$,
$y$, and $z$ directions, the set of projected points have only
two effective dimensions (i.e., they are constrained to a plane).
.. sidebar:: Terminology
In MNE-Python, the matrix used to project a raw signal into a subspace is
usually called a :term:`projector` or a *projection
operator* — these terms are interchangeable with the term *projection
matrix* used above.
Projections of EEG or MEG signals work in very much the same way: the point
$x, y, z$ corresponds to the value of each sensor at a single time
point, and the projection matrix varies depending on what aspects of the
signal (i.e., what kind of noise) you are trying to project out. The only
real difference is that instead of a single 3-dimensional point $(x, y,
z)$ you're dealing with a time series of $N$-dimensional "points" (one
at each sampling time), where $N$ is usually in the tens or hundreds
(depending on how many sensors your EEG/MEG system has). Fortunately, because
projection is a matrix operation, it can be done very quickly even on signals
with hundreds of dimensions and tens of thousands of time points.
Signal-space projection (SSP)
We mentioned above that the projection matrix will vary depending on what
kind of noise you are trying to project away. Signal-space projection (SSP)
:footcite:UusitaloIlmoniemi1997 is a way of estimating what that projection
matrix should be, by
comparing measurements with and without the signal of interest. For example,
you can take additional "empty room" measurements that record activity at the
sensors when no subject is present. By looking at the spatial pattern of
activity across MEG sensors in an empty room measurement, you can create one
or more $N$-dimensional vector(s) giving the "direction(s)" of
environmental noise in sensor space (analogous to the vector for "effect of
the trigger" in our example above). SSP is also often used for removing
heartbeat and eye movement artifacts — in those cases, instead of empty room
recordings the direction of the noise is estimated by detecting the
artifacts, extracting epochs around them, and averaging. See
tut-artifact-ssp for examples.
Once you know the noise vectors, you can create a hyperplane that is
orthogonal
to them, and construct a projection matrix to project your experimental
recordings onto that hyperplane. In that way, the component of your
measurements associated with environmental noise can be removed. Again, it
should be clear that the projection reduces the dimensionality of your data —
you'll still have the same number of sensor signals, but they won't all be
linearly independent — but typically there are tens or hundreds of sensors
and the noise subspace that you are eliminating has only 3-5 dimensions, so
the loss of degrees of freedom is usually not problematic.
Projectors in MNE-Python
In our example data, SSP <ssp-tutorial> has already been performed
using empty room recordings, but the :term:projectors <projector> are
stored alongside the raw data and have not been applied yet (or,
synonymously, the projectors are not active yet). Here we'll load
the sample data <sample-dataset> and crop it to 60 seconds; you can
see the projectors in the output of :func:~mne.io.read_raw_fif below:
End of explanation
print(raw.info['projs'])
Explanation: In MNE-Python, the environmental noise vectors are computed using principal
component analysis, usually abbreviated "PCA", which is why the SSP
projectors usually have names like "PCA-v1". (Incidentally, since the process
of performing PCA uses singular value decomposition under the hood,
it is also common to see phrases like "projectors were computed using SVD" in
published papers.) The projectors are stored in the projs field of
raw.info:
End of explanation
first_projector = raw.info['projs'][0]
print(first_projector)
print(first_projector.keys())
Explanation: raw.info['projs'] is an ordinary Python :class:list of
:class:~mne.Projection objects, so you can access individual projectors by
indexing into it. The :class:~mne.Projection object itself is similar to a
Python :class:dict, so you can use its .keys() method to see what
fields it contains (normally you don't need to access its properties
directly, but you can if necessary):
End of explanation
print(raw.proj)
print(first_projector['active'])
Explanation: The :class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked
objects all have a boolean :attr:~mne.io.Raw.proj attribute that indicates
whether there are any unapplied / inactive projectors stored in the object.
In other words, the :attr:~mne.io.Raw.proj attribute is True if at
least one :term:projector is present and all of them are active. In
addition, each individual projector also has a boolean active field:
End of explanation
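As a small aside (a sketch, not part of the original tutorial): projectors can be applied permanently with the apply_proj method, after which they are marked as active and can no longer be removed from the data.
raw_applied = raw.copy().apply_proj()
print(raw_applied.proj)                        # True: all projectors are now active
print(raw_applied.info['projs'][0]['active'])  # also True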
mags = raw.copy().crop(tmax=2).pick_types(meg='mag')
for proj in (False, True):
with mne.viz.use_browser_backend('matplotlib'):
fig = mags.plot(butterfly=True, proj=proj)
fig.subplots_adjust(top=0.9)
fig.suptitle('proj={}'.format(proj), size='xx-large', weight='bold')
Explanation: Computing projectors
In MNE-Python, SSP vectors can be computed using general purpose functions
:func:mne.compute_proj_raw, :func:mne.compute_proj_epochs, and
:func:mne.compute_proj_evoked. The general assumption these functions make
is that the data passed contains raw data, epochs or averages of the artifact
you want to repair via projection. In practice this typically involves
continuous raw data of empty room recordings or averaged ECG or EOG
artifacts. A second set of high-level convenience functions is provided to
compute projection vectors for typical use cases. This includes
:func:mne.preprocessing.compute_proj_ecg and
:func:mne.preprocessing.compute_proj_eog for computing the ECG and EOG
related artifact components, respectively; see tut-artifact-ssp for
examples of these uses. For computing the EEG reference signal as a
projector, the function :func:mne.set_eeg_reference can be used; see
tut-set-eeg-ref for more information.
<div class="alert alert-danger"><h4>Warning</h4><p>It is best to compute projectors only on channels that will be
used (e.g., excluding bad channels). This ensures that
projection vectors will remain ortho-normalized and that they
properly capture the activity of interest.</p></div>
Visualizing the effect of projectors
You can see the effect the projectors are having on the measured signal by
comparing plots with and without the projectors applied. By default,
raw.plot() will apply the projectors in the background before plotting
(without modifying the :class:~mne.io.Raw object); you can control this
with the boolean proj parameter as shown below, or you can turn them on
and off interactively with the projectors interface, accessed via the
:kbd:Proj button in the lower right corner of the plot window. Here we'll
look at just the magnetometers, and a 2-second sample from the beginning of
the file.
End of explanation
ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_ecg-proj.fif')
ecg_projs = mne.read_proj(ecg_proj_file)
print(ecg_projs)
Explanation: Additional ways of visualizing projectors are covered in the tutorial
tut-artifact-ssp.
Loading and saving projectors
SSP can be used for other types of signal cleaning besides just reduction of
environmental noise. You probably noticed two large deflections in the
magnetometer signals in the previous plot that were not removed by the
empty-room projectors — those are artifacts of the subject's heartbeat. SSP
can be used to remove those artifacts as well. The sample data includes
projectors for heartbeat noise reduction that were saved in a separate file
from the raw data, which can be loaded with the :func:mne.read_proj
function:
End of explanation
raw.add_proj(ecg_projs)
Explanation: There is a corresponding :func:mne.write_proj function that can be used to
save projectors to disk in .fif format:
python3
mne.write_proj('heartbeat-proj.fif', ecg_projs)
<div class="alert alert-info"><h4>Note</h4><p>By convention, MNE-Python expects projectors to be saved with a filename
ending in ``-proj.fif`` (or ``-proj.fif.gz``), and will issue a warning
if you forgo this recommendation.</p></div>
Adding and removing projectors
Above, when we printed the ecg_projs list that we loaded from a file, it
showed two projectors for gradiometers (the first two, marked "planar"), two
for magnetometers (the middle two, marked "axial"), and two for EEG sensors
(the last two, marked "eeg"). We can add them to the :class:~mne.io.Raw
object using the :meth:~mne.io.Raw.add_proj method:
End of explanation
mags_ecg = raw.copy().crop(tmax=2).pick_types(meg='mag')
for data, title in zip([mags, mags_ecg], ['Without', 'With']):
with mne.viz.use_browser_backend('matplotlib'):
fig = data.plot(butterfly=True, proj=True)
fig.subplots_adjust(top=0.9)
fig.suptitle('{} ECG projector'.format(title), size='xx-large',
weight='bold')
Explanation: To remove projectors, there is a corresponding method
:meth:~mne.io.Raw.del_proj that will remove projectors based on their index
within the raw.info['projs'] list. For the special case of replacing the
existing projectors with new ones, use
raw.add_proj(ecg_projs, remove_existing=True).
To see how the ECG projectors affect the measured signal, we can once again
plot the data with and without the projectors applied (though remember that
the :meth:~mne.io.Raw.plot method only temporarily applies the projectors
for visualization, and does not permanently change the underlying data).
We'll compare the mags variable we created above, which had only the
empty room SSP projectors, to the data with both empty room and ECG
projectors:
End of explanation |
10,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explicit Solutions
This notebook will show how to fit explicit solutions to summary measures.
It will first fit the Rescorla-Wagner model to the trial-by-trial response rates. It will then fit Modular Theory to the same trial-by-trial response rates. While the Rescorla-Wagner model is restricted to fitting response rate data (or some transform of it), Modular Theory can fit both response rate and response timing data. So the last section fits Modular Theory to the averaged response gradient.
1.0 Setting up the code environment
1.1 Importing packages
Step1: 1.2 Processing the data
The data is stored as text files. They first need to be loaded and processed. The following command does this, and returns a single (processed) csv file that is saved to your computer. It only creates the file if one doesn't already exist.
Step2: 1.3 Loading the data into a pandas DataFrame
Step3: For this analysis, we only need a portion of the data. We'll take the FI-120 acquisition data.
Step5: 2.0 Analysis of the learning curve
The learning curve is most often represented as the response rate per trial or session, often averaged over rats. When represented over sessions, the learning curve is smoother, but represents the underlying data less precisely. When represented over trials, the learning curve is noisier, but represents single-trial behavior. And averaging over rats prevents us from estimating learning rates for any individual rat.
Our goal is to fit learning curves to individual trials for individual rats. So we need to get the response rates per trial for each rat.
2.1 Obtaining the learning curve
The learning curve will be the response rate per trial, over trials, for each individual rat.
Step8: 2.2 Fitting the Rescorla-Wagner Model
2.2.1 The Model
The Rescorla-Wagner model for a single stimulus is made up of two ideas. The first is that learning the associative value is incremental and progresses gradually at rate $\alpha$
$$V_n \leftarrow V_n + \alpha \Delta V_n$$
The second idea is that how much the associative value increases depends on the reward prediction error
Step10: 2.2.2 The fitting
To fit, we need to define the loss function. This is our "goodness-of-fit" measure that can be computed for any combination of parameter values. This is our measure of how well any particular combination of parameters fits our data.
The most standard loss function is the squared error
$$SSR(\alpha, A) = \sum_i (y_i - \hat{y_i})^2$$
where $\hat{y_i}$ is simply our prediction of $y_i$. That is, it's the output of the rescorla_wagner function, above. The sum of the squared residuals (SSR) depends on the parameter values $\alpha$ and $A$ because the prediction depends on them.
Step12: We often want to put bounds on the parameters. For example, $A$ must be positive because there's no such thing as a negative response rate. And $\alpha$ must be between $0$ and $1$ because it represents the proportion learned on each trial.
Without specifying constraints on the parameters, fitting the data becomes much harder.
In the $\textit{constraints}$ function below, we use the standard notation that the parameters of a model are a parameter vector, $\theta = \{\alpha, A\}$. This allows us to write a single function to handle constraints from any model.
Step13: In our case, the bounds for the Rescorla-Wagner model are
Step14: If the bounds are not satisfied, we want to return the highest SSR possible. This is defined as the maximum number the computer can store.
Step16: Next we want to evaluate how well any combination of parameters fits the data. For this we need to use the parts we just built up
Step18: Now we're ready to fit. We want to use the "Nelder-Mead" fit algorithm because, while it is slower than some others, it is robust to complicated data.
An important but often overlooked aspect of fitting is the idea of "getting stuck in a local minimum." That is, the parameters returned as the "best" are the best fit of all the nearby points, but not the best fit overall.
The way to account for this is to run the fitting algorithm several times, each starting from a new initial point. The best fit across all the runs of the algorithm is a better estimate of the global minimum -- the true best estimate of the parameters. The more initial points are chosen, the more likely you are to have found the best fit.
Step20: And we'll call fit_local, which is just a thin wrapper around the minimize function. But having this function will allow for generality later. It's called fit_local because it finds a local minimum (which may also be the global one).
Step22: The last function is the specific function that actually fits the Rescorla-Wagner model. It's what runs through all of the initial points (finds many local minima), and returns the minimum.
Step23: Just before we fit, we'll write the function that shows the fit.
Step24: 2.2.3 Fitting the average rat
The easiest way to fit is to average the data over rats and fit that. This reduces the variability in the data.
Step25: 2.2.1 Fitting the individual rat
Fitting the individual rats, though, gives you estimates of the parameters per rat. That is, it gives you an individualized learning rate and asymptotic response rate for each rat.
Writing the code as tiny modules above gives us generality. Fitting the individuals is now a simple pandas groupby away.
Step26: 3.0 Fitting Modular Theory to the learning curves
Modular theory is a process model whose input is a procedure and whose outputs are the times of the events. Because of this, it has many parameters that are important along the way (e.g., the parameters that generate individual responses). But many of these parameters are not important for fitting averaged data.
The explicit solution to the learning curve is
$$R(n) = A \cdot w(n) + R_0$$
where $w(n)$ is the strength memory on trial $n$. Notice that it looks almost identical to the Rescorla-Wagner model at this point, except for an additive operant rate $R_0$.
The value of $w(n)$ depends on the relationship between the learning rate during reinforcement and the learning rate during non-reinforcement. Each of these is simply a continuous-time Rescorla-Wagner equation, with learning rate $\beta_r$ for reinforcement and $\beta_e$ for non-reinforcement. The explicit solution over trials is
$$w(n) = \beta + (w_0 - \beta)(1 - \beta_rd - \beta_eT)^n$$
where $$\beta = \frac{\beta_rd}{\beta_rd + \beta_eT}$$
that is, $\beta$ is the proportional increase in strength at the end of the trial, determined by how much reinforcement was given ($\beta_rd$) relative to how much non-reinforcement ($\beta_eT$).
Step27: Now that we went through all the trouble above to make the code general, all we need to do is specify the bounds on the parameters of the model, and then call the fit function
Step30: 4.0 Fitting Modular Theory to the response gradient
4.1 Obtaining the response gradient
The response gradient is the moment-by-moment estimate of the response rate in a cycle. Most often, it is estimated in 1 second time bins, although any resolution can be used.
Step31: We'll compute the response gradient over the pooled response times from the last 10 sessions of the acquisition phase.
Step32: 4.2 Fitting the response gradient with modular theory
Modular theory's explicit solution to the response gradient is given by the operant rate ($r_0$), the ($A$)symptotic rate, and the summary statistics of the start time distribution, $f(b)$, which for simplicity is assumed to be a normal distribution.
Given this, the explicit solution to modular theory is
$$\begin{align}
R(t) &= A\int_{m_n(t)}^{\infty} f(b)db + R_0 \\
&= A \cdot \Phi(t; \mu, \sigma) + R_0
\end{align}$$
where $\Phi(\cdot)$ is the cumulative normal distribution function
Step33: 4.2.1 Fitting the average response gradient
And now we can just use the same functions as before...
Step34: 4.2.2 Fitting the individual rats | Python Code:
# System packages
import sys
# Storing and manipulating data...
import pandas as pd
import numpy as np
from scipy.optimize import curve_fit, minimize
from scipy.stats import norm, pearsonr
# Plotting...
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.rc('text', usetex=True)
# The following packages are unique to this script
from create_data_file import create_modular_theory_file
Explanation: Explicit Solutions
This notebook will show how to fit explicit solutions to summary measures.
It will first fit the Rescorla-Wagner model to the trial-by-trial response rates. It will then fit Modular Theory to the same trial-by-trial response rates. While the Rescorla-Wagner model is restricted to fitting response rate data (or some transform of it), Modular Theory can fit both response rate and response timing data. So the last section fits Modular Theory to the averaged response gradient.
1.0 Setting up the code environment
1.1 Importing packages
End of explanation
create_modular_theory_file(how="acquisition") # only process the acquisition data
Explanation: 1.2 Processing the data
The data is stored as text files. They first need to be loaded and processed. The following command does this, and returns a single (processed) csv file that is saved to your computer. It only creates the file if one doesn't already exist.
End of explanation
full_data = pd.read_csv("../data/modular_theory2007/Data_Modular_Theory2007.csv", engine="c")
Explanation: 1.3 Loading the data into a pandas DataFrame
End of explanation
interval = 120
trialname = "FI{}".format(interval)
row_idx = (full_data["trial_"+trialname] > 0) & (full_data.phase == "acquisition")
col_idx = ["subject", "session", "trial_"+trialname, "time_"+trialname, "event"]
data = full_data.loc[row_idx, col_idx]
data = data.rename(columns={"trial_"+trialname:"trial",
"time_"+trialname:"time"})
Explanation: For this analysis, we only need a portion of the data. We'll take the FI-120 acquisition data.
End of explanation
def response_rate(event, interval):
Return the response rate
computed as responses per minute
60 * n/t
return 60 * ((event == 8).sum() / interval)
# Individual response rates per trial per rat
df_response_rate = (data.groupby(["subject", "trial"])
.event
.aggregate(response_rate, interval)
.pipe(pd.DataFrame)
.reset_index()
.rename(columns={"event": "rpm"})
)
# Average over rats.
df_mean_responserate = (df_response_rate.groupby("trial")
.rpm
.agg(["mean", "sem"])
.reset_index()
)
Explanation: 2.0 Analysis of the learning curve
The learning curve is most often represented as the response rate per trial or session, often averaged over rats. When represented over sessions, the learning curve is smoother, but represents the underlying data less precisely. When represented over trials, the learning curve is noisier, but represents single-trial behavior. And averaging over rats prevents us from estimating learning rates for any individual rat.
Our goal is to fit learning curves to individual trials for individual rats. So we need to get the response rates per trial for each rat.
2.1 Obtaining the learning curve
The learning curve will be the response rate per trial, over trials, for each individual rat.
End of explanation
def rescorla_wagner(xdata, α, V0, r):
Return the predictions from the RW model
ntrials = xdata.size
V = np.zeros(ntrials)
V[0] = V0
for trial in range(1, ntrials):
ΔV = r-V[trial-1]
V[trial] = V[trial-1] + α*ΔV
return V
def rescorla_wagner(xdata, α, V0, r):
Analytical solution
r is a fitted parameter
n = np.arange(xdata.size)
return r - r*(1-α)**n + V0*(1-α)**n
Explanation: 2.2 Fitting the Rescorla-Wagner Model
2.2.1 The Model
The Rescorla-Wagner model for a single stimulus is made up of two ideas. The first is that learning the associative value is incremental and progresses gradually at rate $\alpha$
$$V_n \leftarrow V_n + \alpha \Delta V_n$$
The second idea is that how much the associative value increases depends on the reward prediction error: the difference between the reinforcement obtained and the current associative value.
$$\Delta V_n = r - V_{n-1}$$
The closed-form solution assuming a constant $r$ is
$$V_n = r - r(1-\alpha)^n + V_0(1-\alpha)^n$$
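As a quick numerical check (an illustrative sketch with arbitrary parameter values, not part of the fitting itself), the closed form agrees with the trial-by-trial recursion:
α, V0, r = 0.1, 5.0, 60.0            # arbitrary illustrative values
n = np.arange(20)
V_closed = r - r*(1-α)**n + V0*(1-α)**n
V_iter = np.zeros(20)
V_iter[0] = V0
for trial in range(1, 20):
    V_iter[trial] = V_iter[trial-1] + α*(r - V_iter[trial-1])
print(np.allclose(V_closed, V_iter))  # True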
End of explanation
def ssr(ydata, yfit):
Return the sum of squared residuals
return ((ydata-yfit)**2).sum()
Explanation: 2.2.2 The fitting
To fit, we need to define the loss function. This is our "goodness-of-fit" measure that can be computed for any combination of parameter values. This is our measure of how well any particular combination of parameters fits our data.
The most standard loss function is the squared error
$$SSR(\alpha, A) = \sum_i (y_i - \hat{y_i})^2$$
where $\hat{y_i}$ is simply our prediction of $y_i$. That is, it's the output of the rescorla_wagner function, above. The sum of the squared residuals (SSR) depends on the parameter values $\alpha$ and $A$ because the prediction depends on them.
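For instance, with a toy target and prediction (illustrative numbers only):
ssr(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2]))  # 0.01 + 0.01 + 0.04 ≈ 0.06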
End of explanation
def constraints(θ, θ_bounds):
Check bounds on parameters
if not ((θ_bounds[0] <= θ) & (θ <= θ_bounds[1])).all():
return False
return True
Explanation: We often want to put bounds on the parameters. For example, $A$ must be positive because there's no such thing as a negative response rate. And $\alpha$ must be between $0$ and $1$ because it represents the proportion learned on each trial.
Without specifying constraints on the parameters, fitting the data becomes much harder.
In the $\textit{constraints}$ function below, we use the standard notation that the parameters of a model are a parameter vector, $\theta = \{\alpha, A\}$. This allows us to write a single function to handle constraints from any model.
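For example, with an illustrative two-parameter box (the name θ_bounds_demo is hypothetical and separate from the bounds defined next):
θ_bounds_demo = list(map(np.array, [(0, 0), (1, 300)]))
constraints(np.array([0.5, 100]), θ_bounds_demo)  # True: inside the box
constraints(np.array([1.5, 100]), θ_bounds_demo)  # False: α is out of range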
End of explanation
# I've never seen a rat with higher than 150 responses per minute. Make the upper bound 300.
# 0 <= α <= 1, 0 <= A <= 300
θ_bounds = [(0, 0, 0), (1, 300, 300)]
# convert to a list of numpy arrays because its easier to work with
θ_bounds = list(map(np.array, θ_bounds))
Explanation: In our case, the bounds for the Rescorla-Wagner model are
End of explanation
MAX_INT = sys.maxsize
Explanation: If the bounds are not satisfied, we want to return the highest SSR possible. This is defined as the maximum number the computer can store.
End of explanation
def goodness_of_fit(θ, f, θ_bounds, ydata):
Return goodness-of-fit for the RW model evaluated at θ
if not constraints(θ, θ_bounds):
return MAX_INT
yfit = f(ydata, *θ)
return ssr(ydata, yfit)
Explanation: Next we want to evaluate how well any combination of parameters fits the data. For this we need to use the parts we just built up: (i) Check the bounds, (ii) Run the RW model with those parameters, (iii) compare the output to the data.
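For example, evaluating one arbitrary (illustrative) guess against the averaged learning curve, plus an out-of-bounds guess that gets rejected:
goodness_of_fit(np.array([0.05, 10.0, 40.0]), rescorla_wagner, θ_bounds, df_mean_responserate["mean"])  # an SSR value
goodness_of_fit(np.array([1.50, 10.0, 40.0]), rescorla_wagner, θ_bounds, df_mean_responserate["mean"])  # MAX_INT: α > 1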
End of explanation
def random_initial_point(θ_bounds):
Return a random parameter vector
return θ_bounds[0] + (θ_bounds[1]-θ_bounds[0]) * np.random.rand(θ_bounds[0].size)
Explanation: Now we're ready to fit. We want to use the "Nelder-Mead" fit algorithm because, while it is slower than some others, it is robust to complicated data.
An important but often overlooked aspect of fitting is the idea of "getting stuck in a local minimum." That is, the parameters returned as the "best" are the best fit of all the nearby points, but not the best fit overall.
The way to account for this is to run the fitting algorithm several times, each starting from a new initial point. The best fit across all the runs of the algorithm is a better estimate of the global minimum -- the true best estimate of the parameters. The more initial points are chosen, the more likely you are to have found the best fit.
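For instance, a handful of random starting points drawn inside the box constraints (values differ on every run; fixing the seed here is purely illustrative):
np.random.seed(0)  # illustrative only; the fitting below does not fix a seed
[random_initial_point(θ_bounds) for _ in range(3)]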
End of explanation
def fit_local(f, ydata, θ_init, θ_bounds):
Return the best estimate from an initial guess
return minimize(goodness_of_fit, θ_init, args=(f, θ_bounds, ydata),
method="Nelder-Mead", options={"maxiter": 1e4}
)
Explanation: And we'll call fit_local, which is just a thin wrapper around the minimize function. But having this function will allow for generality later. It's called fit_local because it finds a local minimum (which may also be the global one).
End of explanation
def fit(ydata, f, θ_bounds, npoints=100):
Return the best estimate over all initial guesses.
θ_init = [random_initial_point(θ_bounds) for _ in range(npoints)]
results = [fit_local(f, ydata, θ, θ_bounds)
for θ in θ_init]
successful_results = filter(lambda x: x.success, results)
return sorted(successful_results, key=lambda x: x.fun)[0].x
Explanation: The last function is the specific function that actually fits the Rescorla-Wagner model. It's what runs through all of the initial points (finds many local minima), and returns the minimum.
End of explanation
def show_fit(f, θ, xdata, ydata, ax=None, show_smooth=True):
ydata_smooth = ydata.rolling(20).mean()
yfit = f(xdata, *θ)
if ax is None:
fig, ax = plt.subplots()
if show_smooth:
ax.plot(xdata, ydata_smooth, zorder=2, linewidth=5, color="#3333ff")
ax.plot(xdata, yfit, zorder=3, linewidth=5, color="#ff6666")
ax.scatter(xdata, ydata, s=50, zorder=1, color="#33ccff")
ax.set_xlim((0, xdata.max()))
ax.set_ylim((0, 1.02*ydata.max()))
ax.tick_params(labelsize=20)
ax.set_xlabel("x", fontsize=22)
ax.set_ylabel("y", fontsize=22)
return fig, ax
Explanation: Just before we fit, we'll write the function that shows the fit.
End of explanation
θhat_mean = fit(df_mean_responserate["mean"], rescorla_wagner, θ_bounds)
fig, ax = show_fit(rescorla_wagner, θhat_mean,
df_mean_responserate["trial"], df_mean_responserate["mean"]);
ax.text(30, 35, r"$\alpha={:.2}$" "\n" r"$V_0={:2.3}$" "\n" r"$r={:2.3}$".format(*θhat_mean), fontsize=18)
ax.set_xlabel("Trial")
ax.set_ylabel("Response rate")
fig.tight_layout()
Explanation: 2.2.3 Fitting the average rat
The easiest way to fit is to average the data over rats and fit that. This reduces the variability in the data.
End of explanation
θhat_individual = (df_response_rate.groupby("subject")
.rpm
.apply(fit, rescorla_wagner, θ_bounds)
.apply(pd.Series)
.rename(columns={0:"α", 1:"V_0", 2: "r"})
.reset_index()
)
def show_rw_individuals(subject):
idx = df_response_rate.subject == subject
xdata = df_response_rate[idx].trial
ydata = df_response_rate[idx].rpm
    θ = θhat_individual.loc[θhat_individual.subject==subject, ["α", "V_0", "r"]].to_numpy()
fig, ax = show_fit(rescorla_wagner, *θ, xdata, ydata);
ax.set_xlabel("Trial")
ax.set_ylabel("Response rate")
fig.tight_layout()
show_rw_individuals(433)
#list(map(show_rw_individuals, θhat_individual.subject.unique()))
def show_paramter(θ, parameter, ax=None):
if ax is None:
fig, ax = plt.subplots()
θ_i = θ[parameter].sort_values()
ax.stem(range(1, 1+θ_i.size), θ_i, linefmt='b-', markerfmt='bo', basefmt='')
ax.set_xlim((0, 1+θ_i.size))
ax.set_xlabel("Rank", fontsize=22)
#ax.set_ylabel(parameter, fontsize=22)
xlim = ax.get_xlim()
ylim = ax.get_ylim()
if parameter == "α":
parameter = "\\alpha" # matplotlib unicode issue.
elif parameter == "βe":
parameter = "\\beta_e"
elif parameter == "βr":
        parameter = "\\beta_r"
parameter_txt = r"${}$".format(parameter)
ax.text(0.05*xlim[1], 0.9*ylim[1],
parameter_txt, fontsize=22)
ax.tick_params(labelsize=20)
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15,5))
show_paramter(θhat_individual, "α", ax[0])
show_paramter(θhat_individual, "V_0", ax[1])
show_paramter(θhat_individual, "r", ax[2])
fig.tight_layout()
def corrplot(θ):
fig, ax = plt.subplots(figsize=(10,10))
corr = θ.corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
cmap = sns.diverging_palette(220, 10, as_cmap=True)
ax = sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
annot=True, annot_kws={"size":22}, square=True)
ax.tick_params(labelsize=22)
#corrplot(θhat_individual[["α", "V_0", "r"]])
Explanation: 2.2.1 Fitting the individual rat
Fitting the individual rats, though, gives you estimates of the parameters per rat. That is, it gives you an individualized learning rate and asymptotic response rate for each rat.
Writing the code as tiny modules above gives us generality. Fitting the individuals is now a simple pandas groupby away.
End of explanation
def mt_learningcurve(ydata, βr, βe, R0, A):
d, T = 1, interval # set wayy above!
w0 = 0
w = βr*d / (βr*d + βe*T)
n = np.arange(ydata.size)
W = w + (w0-w)*((1-βr*d - βe*T)**n)
return A*W + R0
Explanation: 3.0 Fitting Modular Theory to the learning curves
Modular theory is a process model whose input is a procedure and whose outputs are the times of the events. Because of this, it has many parameters that are important along the way (e.g., the parameters that generate individual responses). But many of these parameters are not important for fitting averaged data.
The explicit solution to the learning curve is
$$R(n) = A \cdot w(n) + R_0$$
where $w(n)$ is the strength memory on trial $n$. Notice that it looks almost identical to the Rescorla-Wagner model at this point, except for an additive operant rate $R_0$.
The value of $w(n)$ depends on the relationship between the learning rate during reinforcement and the learning rate during non-reinforcement. Each of these is simply a continuous-time Rescorla-Wagner equation, with learning rate $\beta_r$ for reinforcement and $\beta_e$ for non-reinforcement. The explicit solution over trials is
$$w(n) = \beta + (w_0 - \beta)(1 - \beta_rd - \beta_eT)^n$$
where $$\beta = \frac{\beta_rd}{\beta_rd + \beta_eT}$$
that is, $\beta$ is the proportional increase in strength at the end of the trial, determined by how much reinforcement was given ($\beta_rd$) relative to how much non-reinforcement ($\beta_eT$).
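As a quick illustration with arbitrary values: when $|1 - \beta_rd - \beta_eT| < 1$, $w(n)$ converges to $\beta$, so the predicted response rate approaches $A\beta + R_0$.
βr, βe, R0, A = 0.05, 0.001, 5.0, 60.0      # illustrative values; d=1 and T=120 as above
β = (βr*1) / (βr*1 + βe*120)
print(A*β + R0)                              # predicted asymptote, ≈ 22.6
print(mt_learningcurve(np.zeros(500), βr, βe, R0, A)[-1])  # ≈ the same value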
End of explanation
θ_bounds = [(0, 0, 0, 0), (.2, .2, 50, 300)]
θ_bounds = list(map(np.array, θ_bounds))
θhat_mean = fit(df_mean_responserate["mean"], mt_learningcurve, θ_bounds)
fig, ax = show_fit(mt_learningcurve, θhat_mean,
df_mean_responserate["trial"], df_mean_responserate["mean"]);
ax.text(30, 30, r"$\beta_r={:.2}$" "\n" r"$\beta_e={:2.3}$"
"\n" r"$R_0={:2.3}$" "\n" r"$A={:2.3}$".format(*θhat_mean), fontsize=18)
ax.set_xlabel("Trial")
ax.set_ylabel("Response rate")
fig.tight_layout()
θhat_individual = (df_response_rate.groupby("subject")
.rpm
.apply(fit, mt_learningcurve, θ_bounds)
.apply(pd.Series)
.rename(columns={0:"βr", 1:"βe", 2:"R0", 3:"A"})
.reset_index()
)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15, 7.5))
show_paramter(θhat_individual, "βr", ax[0,0])
show_paramter(θhat_individual, "βe", ax[0,1])
show_paramter(θhat_individual, "R0", ax[1,0])
show_paramter(θhat_individual, "A", ax[1,1])
fig.tight_layout()
corrplot(θhat_individual[["βr", "βe", "R0", "A"]])
Explanation: Now that we went through all the trouble above to make the code general, all we need to do is specify the bounds on the parameters of the model, and then call the fit function
End of explanation
def response_gradient(t, n):
Return the response gradient
counts = np.histogram(t, range=(0, int(t.max())), bins=int(t.max()))[0]
return 60*(counts/n)
def pd_response_gradient(df):
Groupby wrapper for response_gradient
takes in a data frame, and calls response_gradient
while correctly handling the number of trials
return response_gradient(df.time, 1+(df.trial.max()-df.iloc[0].trial))
Explanation: 4.0 Fitting Modular Theory to the response gradient
4.1 Obtaining the response gradient
The response gradient is the moment-by-moment estimate of the response rate in a cycle. Most often, it is estimated in 1 second time bins, although any resolution can be used.
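For instance, five response times pooled over two trials give a three-bin gradient (illustrative numbers only):
response_gradient(np.array([0.5, 1.2, 1.7, 2.3, 3.0]), n=2)  # array([30., 60., 60.]) responses per minute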
End of explanation
idx = (data.session >= data.session.max() - 10) & (data.event == 8)
df_gradient = (data[idx].groupby("subject")
.apply(pd_response_gradient) # get the response gradient for each rat
.apply(pd.Series) # auto generate the bins
.stack() # which are then used as indices
.pipe(pd.DataFrame) # of the dataframe
.reset_index()
.rename(columns={"level_1": "bins", 0:"rpm"})
)
# Average over rats
df_mean_gradient = (df_gradient.groupby("bins")
.rpm
.mean()
.pipe(pd.DataFrame)
.reset_index()
)
Explanation: We'll compute the response gradient over the pooled response times from the last 10 sessions of the acquisition phase.
End of explanation
def mt_gradient(ydata, r, A, μ, σ):
x = np.arange(ydata.size)
return r + A*norm.cdf(x, μ, σ)
Explanation: 4.2 Fitting the response gradient with modular theory
Modular theory's explicit solution to the response gradient is given by the operant rate ($r_0$), the ($A$)symptotic rate, and the summary statistics of the start time distribution, $f(b)$, which for simplicity is assumed to be a normal distribution.
Given this, the explicit solution to modular theory is
$$\begin{align}
R(t) &= A\int_{m_n(t)}^{\infty} f(b)db + R_0 \\
&= A \cdot \Phi(t; \mu, \sigma) + R_0
\end{align}$$
where $\Phi(\cdot)$ is the cumulative normal distribution function
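For instance, at $t = \mu$ the cumulative normal equals $0.5$, so the predicted rate sits exactly halfway between $R_0$ and $R_0 + A$ (a small check with illustrative values):
r, A, μ, σ = 5.0, 60.0, 60.0, 15.0                 # illustrative values
print(r + A*norm.cdf(μ, μ, σ))                     # 5 + 60*0.5 = 35.0
print(mt_gradient(np.zeros(120), r, A, μ, σ)[60])  # ≈ the same value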
End of explanation
θ_bounds = [(0, 0, 0, 0), (100, 300, 120, 120)]
θ_bounds = list(map(np.array, θ_bounds))
θhat_mean = fit(df_mean_gradient["rpm"], mt_gradient, θ_bounds)
fig, ax = show_fit(mt_gradient, θhat_mean,
df_mean_gradient["bins"], df_mean_gradient["rpm"],
show_smooth=False);
ax.text(10, 40, r"$R_0={:.2}$" "\n" r"$A={:2.3}$"
"\n" r"$\mu={:2.3}$" "\n" r"$\sigma={:2.3}$".format(*θhat_mean), fontsize=18)
ax.set_xlabel("Time (s)")
ax.set_ylabel("Response rate")
fig.tight_layout()
Explanation: 4.2.1 Fitting the average response gradient
And now we can just use the same functions as before...
End of explanation
θhat_individual = (df_gradient.groupby("subject")
.rpm
.apply(fit, mt_gradient, θ_bounds)
.apply(pd.Series)
.rename(columns={0:"r", 1:"A", 2:"μ", 3:"σ"})
.reset_index()
)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15, 7.5))
show_paramter(θhat_individual, "r", ax[0,0])
show_paramter(θhat_individual, "A", ax[0,1])
show_paramter(θhat_individual, "μ", ax[1,0])
show_paramter(θhat_individual, "σ", ax[1,1])
fig.tight_layout()
corrplot(θhat_individual[["r", "A", "μ", "σ"]])
def show_mt_gradient_individuals(subject):
idx = df_gradient.subject == subject
xdata = df_gradient[idx].bins
ydata = df_gradient[idx].rpm
    θ = θhat_individual.loc[θhat_individual.subject==subject, ["r", "A", "μ", "σ"]].to_numpy()
fig, ax = show_fit(mt_gradient, *θ, xdata, ydata, show_smooth=False);
ax.set_xlabel("Trial")
ax.set_ylabel("Response rate")
fig.tight_layout()
#show_mt_gradient_individuals(433)
list(map(show_mt_gradient_individuals, θhat_individual.subject.unique()))
Explanation: 4.2.2 Fitting the individual rats
End of explanation |
10,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4:
Usage Tips
General
If TF-Coder finds a solution, it is guaranteed that the solution produces
the example output when run on the example inputs. However, it is not
guaranteed that the solution generalizes in the way you intend! Please
carefully review solutions produced by TF-Coder before using them in your real
project.
TF-Coder will often produce a solution that uses hardcoded constants for
shapes or lengths, e.g., tf.reshape(to_flatten, (6,)) in order to flatten an
input tensor with shape (2, 3). You may need to manually change these
constants to improve the generality of the solution, e.g., replacing 6 with
-1 in this case. Use the shape attribute to obtain dimension lengths of
input tensors, e.g., to_flatten.shape[0] would be 2.
If you want to play with TensorFlow in Colab (e.g., to understand how a
TF-Coder solution works or to test your own solution)
Step5: Supported Operations | Python Code:
#@title Run this cell after making your choices.
allow_data_collection = True #@param {type: "boolean"}
include_in_dataset = True #@param {type: "boolean"}
if allow_data_collection:
if include_in_dataset:
print('Usage data may be collected and released in a public dataset.')
else:
print('Usage data may be collected but will not be publicly released.')
else:
print('Usage data will not be collected.')
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
TensorFlow Coder (TF-Coder): A program synthesis tool for TensorFlow expressions
TensorFlow Coder is a tool that helps you manipulate tensors with TensorFlow! If you provide an example of a tensor manipulation, TF-Coder will search for TensorFlow code that matches the example.
Follow this tutorial to get familiar with TF-Coder.
Make sure to connect to a runtime (click "Connect" in the top right corner).
Step 0: Data collection request
Note from the TF-Coder team at Google:
We are excited to bring you TF-Coder, which we hope will accelerate your TensorFlow development.
We have one quick request first: we would like to log usage data for TF-Coder, so that we can identify scenarios where TF-Coder can be improved. This usage data will help us improve TF-Coder for everyone. We also believe that the usage data will be a valuable resource to the broader program synthesis research community. Please read the text below and then use the following cell to let us know whether we may log or release your usage data. Either way, you may still use the TF-Coder tool.
Collecting TF-Coder usage data will help Google improve the TF-Coder tool, and TensorFlow services more generally. This usage data includes (i) the problems you create, (ii) the settings for the TF-Coder tool, (iii) the TF-Coder tool's results for those problems, (iv) metadata relating to your session, problem and device you are using to use the TF-Coder tool, and (v) your location (determined by your IP address). The usage data does not include any other personally identifiable information. Please do not upload or provide any personal or confidential information to the TF-Coder tool.
In addition to Google’s internal use of your usage data, Google would also like to release some of such data in a public dataset to facilitate related research and to promote reproducible research publications. If your usage data is released it will be done in an open source fashion, meaning anyone with access to the data may use it for their purposes.
To opt-out of Google collecting your usage data entirely, uncheck the first box in the cell below. To opt out of your usage data being released as part of a public dataset, uncheck the second box. For the avoidance of doubt, if you only uncheck the second box, you are consenting to Google’s internal use of your usage data consistent with this disclosure. Regardless of your choice about sharing your usage data, you may still access and use the TF-Coder tool.
End of explanation
#@title Run this cell to install and import TF-Coder.
ready = True
try:
_ = (allow_data_collection, include_in_dataset)
except NameError as e:
print('Please run the cell in Step 0 first.')
ready = False
if ready:
# Import TensorFlow and NumPy in case the user wants to create the example
# programmatically.
import tensorflow as tf
import numpy as np
!pip install tensorflow-coder
from tf_coder.value_search import colab_interface
from tf_coder.value_search import value_search_settings as settings_module
if allow_data_collection:
!pip install tensorflow-coder-colab-logging
from tf_coder_colab_logging import colab_logging
from google.colab import output
output.clear()
print('Imports successful. Loading models...')
colab_interface.warm_up()
print('Done. TF-Coder is now ready to use!')
Explanation: Step 1: Installs and imports
End of explanation
# Edit this cell! Follow the format of the example below.
# A dict mapping input variable names to input tensors.
inputs = {
'rows': [10, 20, 30],
'cols': [1, 2, 3, 4],
}
# The corresponding output tensor.
output = [[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34]]
# A list of relevant scalar constants, if any.
constants = []
# An English description of the tensor manipulation.
description = 'add two vectors with broadcasting to get a matrix'
Explanation: Step 2: Describe the problem with an example
Provide an input-output example:
inputs is a dictionary containing one or more input tensors with variable names.
output is the corresponding output tensor.
Tensors can be provided as lists (possibly multidimensional) or tf.Tensor objects.
You may also specify relevant scalar constants. TF-Coder also uses heuristics to guess a few useful constants.
Finally, it often helps to provide an English description of the desired tensor manipulation. This description can help the tool decide which TensorFlow operations to prioritize.
Note: Please do not include confidential or personal information.
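For instance, the same example could equally be written with explicit tf.Tensor inputs when a particular dtype matters (an illustrative variant; the name inputs_alternative is hypothetical):
inputs_alternative = {
    'rows': tf.constant([10, 20, 30], dtype=tf.int64),
    'cols': tf.constant([1, 2, 3, 4], dtype=tf.int64),
}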
End of explanation
#@title Run this cell to invoke TF-Coder on the problem from Step 2.
ready = True
try:
_ = colab_interface
except NameError:
print('Run the cell in Step 1 first.')
ready = False
try:
_ = (inputs, output, constants, description)
except NameError:
print('Define the problem by running the cell in Step 2 first.')
ready = False
#@markdown
#@markdown #### **Settings for TF-Coder**
#@markdown How long to search for a solution, in seconds.
time_limit = 60 #@param {type: "integer"}
#@markdown How many solutions to find before stopping. If more than 1, the entire search will slow down.
number_of_solutions = 1 #@param{type: "integer"}
#@markdown Whether solutions must use all inputs, at least one input, or no such requirement.
solution_requirement = "all inputs" #@param ["all inputs", "one input", "no restriction"]
settings = settings_module.from_dict({
'timeout': time_limit,
'only_minimal_solutions': False,
'max_solutions': number_of_solutions,
'require_all_inputs_used': solution_requirement == 'all inputs',
'require_one_input_used': solution_requirement == 'one input',
})
if ready:
if allow_data_collection:
problem_id = colab_logging.get_uuid()
colab_logging.log_problem(inputs, output, constants, description, settings,
include_in_dataset=include_in_dataset,
problem_id=problem_id)
# Results will be printed to the cell's output.
results = colab_interface.run_value_search_from_colab(
inputs, output, constants, description, settings)
if allow_data_collection:
colab_logging.log_result(results,
include_in_dataset=include_in_dataset,
problem_id=problem_id)
Explanation: Step 3: Run the TF-Coder tool
End of explanation
# Real task encountered by a Googler.
inputs = {
'tensor': [[0, 1, 0, 0],
[0, 1, 1, 0],
[1, 1, 1, 1]],
}
output = [[0.0, 1.0, 0.0, 0.0],
[0.0, 0.5, 0.5, 0.0],
[0.25, 0.25, 0.25, 0.25]]
constants = []
description = 'normalize the rows of a tensor'
# Real task encountered by a Googler.
inputs = {
'elements': [0, 0, 0, 1, 3, 3],
}
output = [[0, 0], [0, 1], [0, 2], [1, 0], [3, 0], [3, 1]]
constants = []
description = 'pair each element with a counter'
# Real task encountered by a Googler.
inputs = {
'sparse': tf.SparseTensor(
indices=[[0, 0, 0], [0, 1, 1], [1, 1, 1], [1, 1, 2]],
values=[1., 1., 1., 1.],
dense_shape=[2, 2, 800]),
}
output = tf.SparseTensor(
indices=[[0, 0, 0], [0, 1, 1]],
values=[1., 1.],
dense_shape=[1, 2, 800])
constants = []
description = 'slice index 0 of the first dimension of a SparseTensor'
# Real task encountered by a Googler.
inputs = {
'lengths': [3, 4, 2, 1],
}
output = [[1, 1, 1, 0, 0],
[1, 1, 1, 1, 0],
[1, 1, 0, 0, 0],
[1, 0, 0, 0, 0]]
constants = [5]
description = 'create a mask for sequences of the given lengths'
# Real task encountered by a Googler.
inputs = {
'segments': [ 1, 1, 1, 0, 0, 2],
'data': [10, 20, 30, 14, 15, 26],
}
output = [14, 15, 10, 20, 30, 26]
constants = []
description = 'sort the segments'
# Adapted from https://stackoverflow.com/questions/53054668
inputs = {
'values': [37, 42, 42, 37, 28, 15, 42, 15],
}
output = [0, 1, 1, 0, 2, 3, 1, 3]
constants = []
description = 'group items by value and get the group indices'
# Adapted from https://stackoverflow.com/questions/47816231
inputs = {
'vector': [3, 5, 0, 2, 3, 3, 0],
}
output = [[1., 0., 0., 0., 1., 1., 0.],
[0., 1., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0.],
[1., 0., 0., 0., 1., 1., 0.],
[1., 0., 0., 0., 1., 1., 0.],
[0., 0., 1., 0., 0., 0., 1.]]
constants = []
description = 'binary tensor from vector indicating if elements are equal'
# Adapted from https://stackoverflow.com/questions/44834739
inputs = {
'scores': [[0.7, 0.2, 0.1],
[0.4, 0.5, 0.1],
[0.4, 0.4, 0.2],
[0.3, 0.4, 0.3],
[0.0, 0.0, 1.0]],
}
output = [[1, 0, 0],
[0, 1, 0],
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
constants = []
description = 'compute argmax in each tensor and set it to 1'
# Adapted from https://stackoverflow.com/questions/33769041
inputs = {
'first': [-1, 0, -3, 2, 1, 3, 5, -1, -9, 2, 10],
'second': [12, 3, 45, 6, 7, 8, 9, 87, 65, 4, 32],
}
output = [6, 8, 9, 4, 32]
constants = [1]
description = 'select the values in the second tensor where the first tensor is greater than 1'
Explanation:
Usage Tips
General
If TF-Coder finds a solution, it is guaranteed that the solution produces
the example output when run on the example inputs. However, it is not
guaranteed that the solution generalizes in the way you intend! Please
carefully review solutions produced by TF-Coder before using them in your real
project.
TF-Coder will often produce a solution that uses hardcoded constants for
shapes or lengths, e.g., tf.reshape(to_flatten, (6,)) in order to flatten an
input tensor with shape (2, 3). You may need to manually change these
constants to improve the generality of the solution, e.g., replacing 6 with
-1 in this case. Use the shape attribute to obtain dimension lengths of
input tensors, e.g., to_flatten.shape[0] would be 2.
If you want to play with TensorFlow in Colab (e.g., to understand how a
TF-Coder solution works or to test your own solution):
The TF-Coder Colab already imports TensorFlow 2 and Numpy, for your
convenience.
Use tf.constant to create a tensor from the list format:
```
>>> tf.constant([[13, 22], [17, 5]])
<tf.Tensor: id=1, shape=(2, 2), dtype=int32, numpy=
array([[13, 22],
[17, 5]], dtype=int32)>
>>> tf.constant(12.3)
<tf.Tensor: id=2, shape=(), dtype=float32, numpy=12.3>
```
A Colab notebook can only have one cell running at a time. If you want to
experiment with TensorFlow code while TF-Coder is running, consider doing so
in a separate Python shell.
TF-Coder's running time is exponential in the complexity of the solution.
Simplifying the problem, or breaking it down into multiple steps, can help
TF-Coder find solutions quickly. For instance, if you know that a reshape,
transpose, cast, or other similar operation should be applied to an input or
as the last operation to produce the output, consider applying that operation
manually to the input-output example, to help TF-Coder focus on the more
difficult parts.
Input-Output Example
Creating a good input-output example is crucial for TF-Coder to find the
solution you want. The example should be robust enough to rule out false
positive solutions, which are TensorFlow expressions that work on the given
example, but fail to generalize in the desired way.
Here are some techniques that reduce the risk of false positives:
Include more numbers in the input and output tensors. TF-Coder will only
output a solution if it works on the provided example, so having many numbers
in the output tensor means it is less likely for incorrect solutions to
produce all of the correct numbers by chance.
Use random-looking numbers in the input tensors. For example,
[18, 73, 34, 51] would be a better input tensor than [1, 2, 3, 4], since
the former is not all consecutive and not all increasing. This helps eliminate
patterns in the input tensors that false positive solutions can take advantage
of.
Remove patterns from the output other than the intended one. For example,
if the output tensor is a selection of numbers from input tensors, make sure
the selected numbers aren't all the maximum element along some axis, unless
that is the intended pattern.
Include edge cases where relevant. These could include negative numbers,
zero, or duplicate numbers, when applicable to the problem.
Distinguish between indices and non-indices. If you know a number should
not be used as an index, consider making it out of range of valid indices
(negative, too large, or even floating-point).
Follow any constraints that exist in your real program. For example, if an
input tensor only contains positive numbers, TF-Coder may produce a solution
that doesn't generalize to negative numbers. Whether this is acceptable
depends on whether that tensor could possibly contain negative numbers in your
real program. Of course, depending on the problem, a completely general
solution may be unnecessarily harder to find.
In general, false positive solutions are more common if the output tensor
contains a relatively low amount of information given the inputs. This may
happen if the output is a scalar or boolean tensor, or if the output is
constructed by selecting one or a few elements from an input. When possible, try
to include many numbers in the output so that it contains enough information to
unambiguously identify the intended transformation.
Constants
TF-Coder will print out the list of constants that it is using, including
constants chosen through heuristics. This list is ordered with highest-
priority constants at the beginning.
If the intended solution requires a constant that is not in TF-Coder's printed
list of constants, then TF-Coder will be unable to find the intended
solution. So, it is important to provide any necessary constants.
If you explicitly provide constants, they will be used with the highest
priority. Thus, even if TF-Coder's heuristics choose your desired constant, it
may be better to provide the constant explicitly so that TF-Coder is more
confident about using your constant.
Providing extraneous constants will slow down the tool.
Description
The description is optional. If provided, it is used to prioritize TensorFlow
operations that fit with the description.
If you know of a TensorFlow operation (e.g., tf.reduce_max) that is
relevant, include its name (e.g., "tf.reduce_max") anywhere in the
description. This will lead TF-Coder to prioritize that operation.
If possible, try to describe how the output should be computed, rather than
what the output conceptually represents.
A good description is less important than a good input-output example.
Other Details and Advanced Options
When running TF-Coder, you can set the time limit, the number of solutions to
find, and whether solutions are required to use inputs.
Time limit: This is the maximum amount of time, in seconds, that TF-Coder
will spend on the problem before giving up. Note that you can stop the tool
at any time by pressing the cell's stop button.
Number of solutions: TF-Coder can continue searching for more solutions
after the first solution is found. This can help you examine different ways
of solving the problem. However, enabling multiple solutions will cause the
entire search to slow down, even for the first solution.
Solution requirement: By default, solutions are required to use every input
tensor at least once. This constraint can be relaxed to allow solutions that
use only one input (if there are multiple inputs), or even solutions that
use no inputs at all.
By default, integer tensors have a DType of tf.int32, and float tensors have
a DType of tf.float32. To specify a different DType, provide a tf.Tensor
object instead of a list. For example:
If an input is given as [3, 1, 7, 4], then it will have a DType of
tf.int32.
If an input is given as tf.constant([3, 1, 7, 4], dtype=tf.int64), then it
will have a DType of tf.int64.
A primitive scalar input can be specified with a Python float or int, and a
scalar tensor can be specified with a tf.Tensor:
If an input is given as [123], then it will be a 1-dimensional tensor with
shape (1,), equivalent to tf.constant([123]).
If an input is given as 123, then it will remain a Python primitive int,
not a tf.Tensor.
If an input is given as tf.constant(123), then it will be a 0-dimensional
scalar tensor with shape ().
Input and output tensors can have at most 4 dimensions.
Example problems that TF-Coder can solve
Here are several examples of real-life problems that TF-Coder can solve.
End of explanation
# Run this cell to print all supported operations.
colab_interface.print_supported_operations()
Explanation: Supported Operations
End of explanation |
10,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explainable deep-learning -- Visualizing deep neural networks
Author
Step1: Function to load the model for generating predictions
Step2: Function to generate predictions
Step3: Function to plot prediction accuracy
Step4: Initialize parameters for generating predictions
Step5: Download model weights, configuration, classification labels and example images
Step6: Load the trained deep learning model and load model weights
Step7: Visualizing the model architecture -- Inception version 3
Step8: Specify input size for the image to generate predictions
Step9: Generating predictions and plotting classification accuracy
Test image 1
Visualize input image
Step10: Test image 2 | Python Code:
import sys
import argparse
import numpy as np
import requests
import matplotlib
matplotlib.use('Agg')
import os
import time
import json
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tqdm
from io import BytesIO
from PIL import Image
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input
from keras.models import model_from_json
from keras import backend as K
from keras.models import Model
from keras.layers import UpSampling2D, Conv2D
from keras.preprocessing import image
#from keras.applications.inception_resnet_v2 import preprocess_input
def generate_timestamp():
timestring = time.strftime("%Y_%m_%d-%H_%M_%S")
print ("Time stamp generated: "+timestring)
return timestring
def is_valid_file(parser, arg):
if not os.path.isfile(arg):
parser.error("The file %s does not exist ..." % arg)
else:
return arg
def is_valid_dir(parser, arg):
if not os.path.isdir(arg):
parser.error("The folder %s does not exist ..." % arg)
else:
return arg
Explanation: Explainable deep-learning -- Visualizing deep neural networks
Author: Dr. Rahul Remanan
CEO and Chief Imagination Officer, Moad Computer
Launch this notebook in Google CoLab
This is a skeletal framework for building better explainable deep-learning.
In this notebook, we use the Kaggle Dogs vs Cats Redux, Kernels Edition dataset.
Import dependencies
End of explanation
def load_prediction_model(args):
try:
print (args.config_file[0])
with open(args.config_file[0]) as json_file:
model_json = json_file.read()
model = model_from_json(model_json)
except:
print ("Please specify a model configuration file ...")
sys.exit(1)
try:
model.load_weights(args.weights_file[0])
print ("Loaded model weights from: " + str(args.weights_file[0]))
except:
print ("Error loading model weights ...")
sys.exit(1)
try:
print (args.labels_file[0])
with open(args.labels_file[0]) as json_file:
labels = json.load(json_file)
print ("Loaded labels from: " + str(args.labels_file[0]))
except:
print ("No labels loaded ...")
sys.exit(1)
return model, labels
Explanation: Function to load the model for generating predictions
End of explanation
def predict(model, img, target_size):
print ("Running prediction model on the image file ...")
if img.size != target_size:
img = img.resize(target_size)
_x_ = image.img_to_array(img)
_x_ = np.expand_dims(_x_, axis=0)
_x_ = preprocess_input(_x_)
preds = model.predict(_x_)
probabilities = model.predict(_x_, batch_size=1).flatten()
prediction = labels[np.argmax(probabilities)]
return preds[0], prediction
Explanation: Function to generate predictions
End of explanation
def plot_preds(image, preds, labels, timestr):
output_loc = args.output_dir[0]
output_file_preds = os.path.join(output_loc+"//preds_out_"+timestr+".png")
fig = plt.figure()
plt.axis('on')
labels = labels
plt.barh([0, 1], preds, alpha=0.5)
plt.yticks([0, 1], labels)
plt.xlabel('Probability')
plt.xlim(0,1.01)
plt.tight_layout()
fig.savefig(output_file_preds, dpi=fig.dpi)
Explanation: Function to plot prediction accuracy
End of explanation
import types
args=types.SimpleNamespace()
args.config_file = ['./trained_cats_dogs.config']
args.weights_file = ['./trained_cats_dogs_epochs30_weights.model']
args.labels_file = ['./trained_labels.json']
args.output_dir = ['./']
args.image = ['./cat_01.jpg']
args.image_url = ['https://github.com/rahulremanan/python_tutorial/raw/master/Machine_Vision/02_Object_Prediction/test_images/dog_01.jpg']
Explanation: Initialize parameters for generating predictions
End of explanation
! wget https://raw.githubusercontent.com/rahulremanan/python_tutorial/master/Machine_Vision/02_Object_Prediction/test_images/cat_01.jpg -O cat_01.jpg
! wget https://raw.githubusercontent.com/rahulremanan/python_tutorial/master/Machine_Vision/02_Object_Prediction/model/trained_cats_dogs.config -O trained_cats_dogs.config
! wget https://raw.githubusercontent.com/rahulremanan/python_tutorial/master/Machine_Vision/02_Object_Prediction/model/trained_labels.json -O trained_labels.json
! wget https://media.githubusercontent.com/media/rahulremanan/python_tutorial/master/Machine_Vision/02_Object_Prediction/model/trained_cats_dogs_epochs30_weights.model -O trained_cats_dogs_epochs30_weights.model
Explanation: Download model weights, configuration, classification labels and example images
End of explanation
model, labels = load_prediction_model(args)
Explanation: Load the trained deep learning model and load model weights
End of explanation
from keras.utils import plot_model
import pydot
import graphviz # apt-get install -y graphviz libgraphviz-dev && pip3 install pydot graphviz
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
output_dir = './'
plot_model(model, to_file= output_dir + '/model_summary_plot.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Explanation: Visualizing the model architecture -- Inception version 3
End of explanation
target_size = (299, 299)
Explanation: Specify input size for the image to generate predictions
End of explanation
from IPython.display import Image as PyImage
from IPython.core.display import HTML
PyImage(args.image[0])
if args.image is not None:
img = Image.open(args.image[0])
preds = predict(model, img, target_size)
print (preds[1] + "\t" + "\t".join(map(lambda x: "%.2f" % x, preds[0])))
print (str(preds[1]))
timestr = generate_timestamp()
plot_preds(img, preds[0], labels, timestr)
image_path_pred1 = os.path.join('./preds_out_'+timestr+'.png')
PyImage(image_path_pred1)
Explanation: Generating predictions and plotting classification accuracy
Test image 1
Visualize input image
End of explanation
PyImage(url = args.image_url[0])
if args.image_url is not None:
response = requests.get(args.image_url[0])
img = Image.open(BytesIO(response.content))
preds = predict(model, img, target_size)
print (preds[1] + "\t" + "\t".join(map(lambda x: "%.2f" % x, preds[0])))
print (str(preds[1]))
timestr = generate_timestamp()
plot_preds(img, preds[0], labels, timestr)
image_path_pred2 = os.path.join('./preds_out_'+timestr+'.png')
PyImage(image_path_pred2)
def class_activation_map(INPUT_IMG_FILE=None,
PRE_PROCESSOR=None,
LABEL_DECODER=None,
MODEL=None,
LABELS=None,
IM_WIDTH=299,
IM_HEIGHT=299,
CONV_LAYER='conv_7b',
URL_MODE=False,
FILE_MODE=False,
EVAL_STEPS=1,
HEATMAP_SHAPE=[8,8],
BENCHMARK=True):
if INPUT_IMG_FILE == None:
print ('No input file specified to generate predictions ...')
return
if URL_MODE:
response = requests.get(INPUT_IMG_FILE)
img = Image.open(BytesIO(response.content))
img = img.resize((IM_WIDTH, IM_HEIGHT))
elif FILE_MODE:
img = INPUT_IMG_FILE
else:
img = image.load_img(INPUT_IMG_FILE, target_size=(IM_WIDTH, IM_HEIGHT))
x = img
if not FILE_MODE:
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
if PRE_PROCESSOR !=None:
preprocess_input = PRE_PROCESSOR
x = preprocess_input(x)
model = MODEL
if model == None:
print ('No input model specified to generate predictions ...')
return
labels = LABELS
heatmaps = []
heatmap_sum = np.empty(HEATMAP_SHAPE, float)
last_conv_layer = model.get_layer(CONV_LAYER)
feature_size = tensor_featureSizeExtractor(last_conv_layer)
model_input = model.input
model_output = model.output
last_conv_layer_out = last_conv_layer.output
iterate_input = []
pred_labels = []
out_labels = []
probabilities = np.empty((0,len(labels)), float)
for step in (range(EVAL_STEPS)):
startTime = time.time()
preds = model.predict(x, batch_size=1)
preds_endTime = time.time()
probability = preds.flatten()
probabilities = np.append(probabilities,
np.array([probability]),
axis=0)
if labels !=None:
pred_label = labels[np.argmax(probability)]
pred_labels.append(pred_label)
out_labels.append(pred_label)
print('PREDICTION: {}'.format(pred_label))
print('ACCURACY: {}'.format(preds[0]))
del pred_label
elif LABEL_DECODER !=None:
pred_label = pd.DataFrame(LABEL_DECODER(preds, top=3)[0],columns=['col1','category','probability']).iloc[:,1:]
pred_labels.append(pred_label.loc[0,'category'])
out_labels.append(pred_label.loc[0,'category'])
print('PREDICTION:',pred_label.loc[0,'category'])
del pred_label
else:
print ('No labels will be generated ...')
pred_labels = set(pred_labels)
pred_labels = list(pred_labels)
argmax = np.argmax(probability)
heatmap_startTime = time.time()
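        # Grad-CAM-style heatmap: take the predicted class score, compute its
        # gradient with respect to the last convolutional layer's output,
        # average the gradients over the spatial dimensions to get one weight
        # per feature map, then weight and average the feature maps below.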
output = model_output[:, argmax]
model_endTime = time.time()
grads = K.gradients(output,
last_conv_layer_out)[0]
pooled_grads = K.mean(grads,
axis=(0, 1, 2))
iterate = K.function([model_input], [pooled_grads,
last_conv_layer_out[0]])
pooled_grads_value, conv_layer_output_value = iterate([x])
grad_endTime = time.time()
for i in range(feature_size):
conv_layer_output_value[:,:,i] *= pooled_grads_value[i]
iter_endTime = time.time()
heatmap = np.mean(conv_layer_output_value, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap)
heatmap_endTime = time.time()
try:
heatmap_sum = np.add(heatmap_sum, heatmap)
heatmaps.append(heatmap)
if EVAL_STEPS >1:
del probability
del heatmap
del output
del grads
del pooled_grads
del iterate
del pooled_grads_value
del conv_layer_output_value
except:
print ('Failed updating heatmaps ...')
endTime = time.time()
predsTime = preds_endTime - startTime
gradsTime = grad_endTime - model_endTime
iterTime = iter_endTime - grad_endTime
heatmapTime = heatmap_endTime - heatmap_startTime
executionTime = endTime - startTime
model_outputTime = model_endTime - heatmap_startTime
if BENCHMARK:
print ('Heatmap generation time: {} seconds ...'. format(heatmapTime))
print ('Gradient generation time: {} seconds ...'.format(gradsTime))
print ('Iteration loop execution time: {} seconds ...'.format(iterTime))
print ('Model output generation time: {} seconds'.format(model_outputTime))
print ('Prediction generation time: {} seconds ...'.format(predsTime))
print ('Completed processing {} out of {} steps in {} seconds ...'.format(int(step+1), int(EVAL_STEPS), float(executionTime)))
if EVAL_STEPS >1:
mean_heatmap = heatmap_sum/EVAL_STEPS
else:
mean_heatmap = heatmap
mean = np.matrix.mean(np.asmatrix(probabilities), axis=0)
stdev = np.matrix.std(np.asmatrix(probabilities), axis=0)
accuracy = np.matrix.tolist(mean)[0][np.argmax(mean)]
uncertainty = np.matrix.tolist(stdev)[0][np.argmax(mean)]
return [mean_heatmap, accuracy, uncertainty, pred_labels, heatmaps, out_labels, probabilities]
labels_json='./trained_labels.json'
with open(labels_json) as json_file:
labels = json.load(json_file)
print (labels)
PRE_PROCESSOR = preprocess_input
MODEL = model
INPUT_IMG_FILE = './cat_01.jpg'
LABELS= labels
%matplotlib inline
img=mpimg.imread(INPUT_IMG_FILE)
plt.imshow(img)
import tensorflow as tf
def tensor_featureSizeExtractor(last_conv_layer):
if len(last_conv_layer.output.get_shape().as_list()) == 4:
feature_size = last_conv_layer.output.get_shape().as_list()[3]
return feature_size
else:
return 'Received tensor shape: {} instead of expected shape: 4'.format(len(last_conv_layer.output.get_shape().as_list()))
def heatmap_overlay(INPUT_IMG_FILE,
HEATMAP,
THRESHOLD=0.8):
img = cv2.imread(INPUT_IMG_FILE)
heatmap = cv2.resize(HEATMAP, (img.shape[1], img.shape[0]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
hif = THRESHOLD
superimposed_img = heatmap * hif + img
return [superimposed_img, heatmap]
output = class_activation_map(INPUT_IMG_FILE=INPUT_IMG_FILE,
PRE_PROCESSOR=PRE_PROCESSOR,
MODEL=MODEL,
LABELS=LABELS,
IM_WIDTH=299,
IM_HEIGHT=299,
CONV_LAYER='mixed10')
HEATMAP = output[0]
plt.matshow(HEATMAP)
plt.show()
print (output[3])
heatmap_output = heatmap_overlay(INPUT_IMG_FILE,
HEATMAP,
THRESHOLD=0.8)
superimposed_img = heatmap_output[0]
output_file = './class_activation_map.jpeg'
cv2.imwrite(output_file, superimposed_img)
img=mpimg.imread(output_file)
plt.imshow(img)
Explanation: Test image 2
End of explanation |
10,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fine-tuning a Pretrained Network for Style Recognition
In this example, we'll explore a common approach that is particularly useful in real-world applications
Step1: 1. Setup and dataset download
Download data required for this exercise.
get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
download_model_binary.py to download the pretrained reference model
finetune_flickr_style/assemble_data.py downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise
Step2: Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
Step3: Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.
Step5: 2. Defining and running the nets
We'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes.
Step6: Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
Step7: Define a function style_net which calls caffenet on data from the Flickr style dataset.
The new network will also have the CaffeNet architecture, with differences in the input and output
Step8: Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.
Call forward on untrained_style_net to get a batch of style training data.
Step9: Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network to view its top 5 predicted classes from the 1000 ImageNet classes.
Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.)
Step10: We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.
In fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.
Step11: We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.
Step12: Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)
Step13: 3. Training the style classifier
Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!
Step15: Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command
Step16: Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.
Step17: Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
Step18: Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.
Step19: 4. End-to-end finetuning for style
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!
Step20: Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).
Step21: We'll first look back at the image we started with and check our end-to-end trained model's predictions.
Step22: Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
Step23: We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.
Step24: Of course, we can again look at the ImageNet model's predictions for the above image | Python Code:
caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
import numpy as np
from pylab import *
%matplotlib inline
import tempfile
# Helper function for deprocessing preprocessed images, e.g., for display.
def deprocess_net_image(image):
image = image.copy() # don't modify destructively
image = image[::-1] # BGR -> RGB
image = image.transpose(1, 2, 0) # CHW -> HWC
image += [123, 117, 104] # (approximately) undo mean subtraction
# clamp values in [0, 255]
image[image < 0], image[image > 255] = 0, 255
# round and cast from float32 to uint8
image = np.round(image)
image = np.require(image, dtype=np.uint8)
return image
Explanation: Fine-tuning a Pretrained Network for Style Recognition
In this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.
The advantage of this approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the "semantics" of the general visual appearance. Think of it as a very powerful generic visual feature that you can treat as a black box. On top of that, only a relatively small amount of data is needed for good performance on the target task.
First, we will need to prepare the data. This involves the following parts:
(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.
(2) Download a subset of the overall Flickr style dataset for this demo.
(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.
End of explanation
# Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
NUM_STYLE_IMAGES = 2000
NUM_STYLE_LABELS = 5
# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
import os
os.chdir(caffe_root) # run scripts from caffe root
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
--workers=-1 --seed=1701 \
--images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS
# back to examples
os.chdir('examples')
Explanation: 1. Setup and dataset download
Download data required for this exercise.
get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
download_model_binary.py to download the pretrained reference model
finetune_flickr_style/assemble_data.py downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set full_dataset = True in the cell below.)
End of explanation
import os
weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
assert os.path.exists(weights)
Explanation: Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
End of explanation
# Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])
# Load style labels to style_labels
style_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'
style_labels = list(np.loadtxt(style_label_file, str, delimiter='\n'))
if NUM_STYLE_LABELS > 0:
style_labels = style_labels[:NUM_STYLE_LABELS]
print '\nLoaded style labels:\n', ', '.join(style_labels)
Explanation: Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.
End of explanation
from caffe import layers as L
from caffe import params as P
weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]
frozen_param = [dict(lr_mult=0)] * 2
def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
param=learned_param,
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0.1)):
conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
num_output=nout, pad=pad, group=group,
param=param, weight_filler=weight_filler,
bias_filler=bias_filler)
return conv, L.ReLU(conv, in_place=True)
def fc_relu(bottom, nout, param=learned_param,
weight_filler=dict(type='gaussian', std=0.005),
bias_filler=dict(type='constant', value=0.1)):
fc = L.InnerProduct(bottom, num_output=nout, param=param,
weight_filler=weight_filler,
bias_filler=bias_filler)
return fc, L.ReLU(fc, in_place=True)
def max_pool(bottom, ks, stride=1):
return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)
def caffenet(data, label=None, train=True, num_classes=1000,
classifier_name='fc8', learn_all=False):
    """Returns a NetSpec specifying CaffeNet, following the original proto text
    specification (./models/bvlc_reference_caffenet/train_val.prototxt).
    """
n = caffe.NetSpec()
n.data = data
param = learned_param if learn_all else frozen_param
n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)
n.pool1 = max_pool(n.relu1, 3, stride=2)
n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)
n.pool2 = max_pool(n.relu2, 3, stride=2)
n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)
n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)
n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)
n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)
n.pool5 = max_pool(n.relu5, 3, stride=2)
n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)
if train:
n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)
else:
fc7input = n.relu6
n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)
if train:
n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)
else:
fc8input = n.relu7
# always learn fc8 (param=learned_param)
fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)
# give fc8 the name specified by argument `classifier_name`
n.__setattr__(classifier_name, fc8)
if not train:
n.probs = L.Softmax(fc8)
if label is not None:
n.label = label
n.loss = L.SoftmaxWithLoss(fc8, n.label)
n.acc = L.Accuracy(fc8, n.label)
# write the net to a temporary file and return its filename
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(n.to_proto()))
return f.name
Explanation: 2. Defining and running the nets
We'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes.
End of explanation
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
Explanation: Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
End of explanation
def style_net(train=True, learn_all=False, subset=None):
if subset is None:
subset = 'train' if train else 'test'
source = caffe_root + 'data/flickr_style/%s.txt' % subset
transform_param = dict(mirror=train, crop_size=227,
mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
style_data, style_label = L.ImageData(
transform_param=transform_param, source=source,
batch_size=50, new_height=256, new_width=256, ntop=2)
return caffenet(data=style_data, label=style_label, train=train,
num_classes=NUM_STYLE_LABELS,
classifier_name='fc8_flickr',
learn_all=learn_all)
Explanation: Define a function style_net which calls caffenet on data from the Flickr style dataset.
The new network will also have the CaffeNet architecture, with differences in the input and output:
the input is the Flickr style data we downloaded, provided by an ImageData layer
the output is a distribution over 20 classes rather than the original 1000 ImageNet classes
the classification layer is renamed from fc8 to fc8_flickr to tell Caffe not to load the original classifier (fc8) weights from the ImageNet-pretrained model
End of explanation
untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)
Explanation: Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.
Call forward on untrained_style_net to get a batch of style training data.
End of explanation
def disp_preds(net, image, labels, k=5, name='ImageNet'):
input_blob = net.blobs['data']
net.blobs['data'].data[0, ...] = image
probs = net.forward(start='conv1')['probs'][0]
top_k = (-probs).argsort()[:k]
print 'top %d predicted %s labels =' % (k, name)
print '\n'.join('\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])
for i, p in enumerate(top_k))
def disp_imagenet_preds(net, image):
disp_preds(net, image, imagenet_labels, name='ImageNet')
def disp_style_preds(net, image):
disp_preds(net, image, style_labels, name='style')
batch_index = 8
image = style_data_batch[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[style_label_batch[batch_index]]
disp_imagenet_preds(imagenet_net, image)
Explanation: Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network to view its top 5 predicted classes from the 1000 ImageNet classes.
Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.)
End of explanation
disp_style_preds(untrained_style_net, image)
Explanation: We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.
In fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.
End of explanation
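# Quick numeric check of the 1/N claim above: a softmax over all-zero inputs
# is uniform, so with N = 5 style labels each class gets 20%. This is a
# standalone numpy-only sketch, independent of the Caffe net.
zero_scores = np.zeros(5)
uniform_probs = np.exp(zero_scores) / np.exp(zero_scores).sum()
print(uniform_probs)   # [ 0.2  0.2  0.2  0.2  0.2]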
diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8
Explanation: We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.
End of explanation
del untrained_style_net
Explanation: Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)
End of explanation
from caffe.proto import caffe_pb2
def solver(train_net_path, test_net_path=None, base_lr=0.001):
s = caffe_pb2.SolverParameter()
# Specify locations of the train and (maybe) test networks.
s.train_net = train_net_path
if test_net_path is not None:
s.test_net.append(test_net_path)
s.test_interval = 1000 # Test after every 1000 training iterations.
s.test_iter.append(100) # Test on 100 batches each time we test.
# The number of iterations over which to average the gradient.
# Effectively boosts the training batch size by the given factor, without
# affecting memory utilization.
s.iter_size = 1
s.max_iter = 100000 # # of times to update the net (training iterations)
# Solve using the stochastic gradient descent (SGD) algorithm.
# Other choices include 'Adam' and 'RMSProp'.
s.type = 'SGD'
# Set the initial learning rate for SGD.
s.base_lr = base_lr
# Set `lr_policy` to define how the learning rate changes during training.
# Here, we 'step' the learning rate by multiplying it by a factor `gamma`
# every `stepsize` iterations.
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 20000
# Set other SGD hyperparameters. Setting a non-zero `momentum` takes a
# weighted average of the current gradient and previous gradients to make
# learning more stable. L2 weight decay regularizes learning, to help prevent
# the model from overfitting.
s.momentum = 0.9
s.weight_decay = 5e-4
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained. Here, we'll
# snapshot every 10K iterations -- ten times during training.
s.snapshot = 10000
s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'
# Train on the GPU. Using the CPU to train large networks is very slow.
s.solver_mode = caffe_pb2.SolverParameter.GPU
# Write the solver to a temporary file and return its filename.
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(s))
return f.name
Explanation: 3. Training the style classifier
Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!
End of explanation
def run_solvers(niter, solvers, disp_interval=10):
    """Run solvers for niter iterations,
    returning the loss and accuracy recorded each iteration.
    `solvers` is a list of (name, solver) tuples.
    """
blobs = ('loss', 'acc')
loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
for _ in blobs)
for it in range(niter):
for name, s in solvers:
s.step(1) # run a single SGD step in Caffe
loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()
for b in blobs)
if it % disp_interval == 0 or it + 1 == niter:
loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %
(n, loss[n][it], np.round(100*acc[n][it]))
for n, _ in solvers)
print '%3d) %s' % (it, loss_disp)
# Save the learned weights from both nets.
weight_dir = tempfile.mkdtemp()
weights = {}
for name, s in solvers:
filename = 'weights.%s.caffemodel' % name
weights[name] = os.path.join(weight_dir, filename)
s.net.save(weights[name])
return loss, acc, weights
Explanation: Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command:
<code>
build/tools/caffe train \
-solver models/finetune_flickr_style/solver.prototxt \
-weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
-gpu 0
</code>
However, we will train using Python in this example.
We'll first define run_solvers, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file.
End of explanation
niter = 200 # number of iterations to train
# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)
# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet weights.
scratch_style_solver_filename = solver(style_net(train=True))
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained', style_solver),
('scratch', scratch_style_solver)]
loss, acc, weights = run_solvers(niter, solvers)
print 'Done.'
train_loss, scratch_train_loss = loss['pretrained'], loss['scratch']
train_acc, scratch_train_acc = acc['pretrained'], acc['scratch']
style_weights, scratch_style_weights = weights['pretrained'], weights['scratch']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
Explanation: Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.
End of explanation
plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')
plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy')
Explanation: Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
End of explanation
def eval_style_net(weights, test_iters=10):
test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
accuracy = 0
for it in xrange(test_iters):
accuracy += test_net.forward()['acc']
accuracy /= test_iters
return test_net, accuracy
test_net, accuracy = eval_style_net(style_weights)
print 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)
print 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, )
Explanation: Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.
End of explanation
end_to_end_net = style_net(train=True, learn_all=True)
# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or goes to infinity/NaN), you should try
# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value
# for which learning does not diverge).
base_lr = 0.001
style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(style_weights)
scratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
scratch_style_solver.net.copy_from(scratch_style_weights)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained, end-to-end', style_solver),
('scratch, end-to-end', scratch_style_solver)]
_, _, finetuned_weights = run_solvers(niter, solvers)
print 'Done.'
style_weights_ft = finetuned_weights['pretrained, end-to-end']
scratch_style_weights_ft = finetuned_weights['scratch, end-to-end']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
Explanation: 4. End-to-end finetuning for style
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!
End of explanation
test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )
Explanation: Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).
End of explanation
plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image)
Explanation: We'll first look back at the image we started with and check our end-to-end trained model's predictions.
End of explanation
batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
disp_style_preds(test_net, image)
Explanation: Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
End of explanation
disp_style_preds(scratch_test_net, image)
Explanation: We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.
End of explanation
disp_imagenet_preds(imagenet_net, image)
Explanation: Of course, we can again look at the ImageNet model's predictions for the above image:
End of explanation |
10,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calling Functions from Functions
Step2: Iterating over a collection
Make the 12 x 12 times table
Step3: Iterating over a list of strings
Shows how to iterate over strings and that you can iterate over sub collections as well
Step4: Default Parameters
Step6: Bonus | Python Code:
def add_together(one, two):
one = one + two
return one
def multiply_and_add(one, two):
    one = add_together(one, two)
    return one * one
temporary_value = multiply_and_add(2, 3)
print(temporary_value)
print(multiply_and_add(2, 3))
Explanation: Calling Functions from Functions
End of explanation
number_1 = 10
number_2 = 30
print(len(str(number_1 * number_2)))
print(len(number_1 * number_2))
some_string = 'cow'
print(some_string)
some_string = some_string + ' town'
print(some_string)
print('\n\n\n\n')
print('meow cats'.ljust(10))
words = ['one', 'two']
words_two = words
words[0] = 'dinasour'
print(words)
print(words_two)
print(ord('z'))
print(ord('a'))
print(ord('z') - ord('a'))
import math
# Cannot range over infinity
for i in range(math.inf):
print(i)
def get_rows():
return [1,2,3,4,5,6,7,8,9,10,11,12]
def get_columns():
return [1,2,3,4,5,6,7,8,9,10,11,12]
def get_max_width(first_number, second_number):
    """Return the maximum width a given multiplication times table cell should be."""
highest_value_str = str(first_number * second_number)
return len(highest_value_str) + 2 # Add two to make it appear to be one bigger on each side
rows = get_rows()
columns = get_columns()
max_width = get_max_width(max(rows), max(columns))
output = ''
# Go over the numbers to produce each row in the times table
for row_value in rows:
# Create each column in the times table with this for loop
for col_value in columns:
product_str = str(row_value * col_value)
output = output + product_str.rjust(max_width)
# Add a new line after each set of numbers to ensure the next
# number in the times table gets its own row.
output += '\n'
print(output)
Explanation: Iterating over a collection
Make the 12 x 12 times table
End of explanation
# For each word in our list of words I want to "translate" it
from random import shuffle
new_alphabet = ['l', 'i', 'g', 'a', 'f', 'x', 'o', 'e', 'v',
'y', 'r', 'b', 'd', 'h', 'm', 'p', 'k', 'u',
'w', 'j', 's', 'q', 'c', 'z', 't', 'n']
words = ['The', 'quick', 'brown', 'fox',
'jumps', 'over', 'the', 'lazy', 'dog']
# Normalize the words
position = 0
for word in words:
words[position] = word.lower()
position += 1
# Go over each word and replace it with the 'modified' word
# Create a new list so that we do not modify the original
word_index = 0
new_words = list(words)
for word in words:
# Initialize the new word
new_words[word_index] = ''
# Go over and setup each new word replacing the old letters
for letter in word:
new_alphabet_position = ord(letter) - ord('a')
new_words[word_index] += new_alphabet[new_alphabet_position]
# Increase the word index so we populate the word in the list correctly
word_index += 1
print('Existing words: ', words)
print('New words: ', new_words)
Explanation: Iterating over a list of strings
Shows how to iterate over strings and that you can iterate over sub collections as well
End of explanation
import math
def times_x(value, x=10):
return value * x
def divide_x(value, x=None):
if x == None:
return 0
elif x == 0:
return math.inf # This is infinity
return value / x
print('one', times_x(10))
print('two', times_x(10,5))
print('two', divide_x(20, 10))
print('two', divide_x(20, 0))
print('one', divide_x(20))
def broken_times(x, y):
if y == None:
y = 10
return x * y
# print(broken_times(10))
def x_times_y(x, y):
if type(x) == int and type(y) == int:
return x * y
return None
product = x_times_y(2, 4)
print(product)
product = x_times_y(None, 2)
if type(product) == type(None):
print('I have no product')
else:
print(product)
Explanation: Default Parameters
End of explanation
def test_values(first_value, second_value):
    """Asserts that the first and second parameters have the same value.
    Raises an AssertionError with a message if they are different.
    """
failed_message = "{0} is not equal to {1}".format(first_value, second_value)
assert first_value == second_value, failed_message
def is_prime(x):
if x in [1,2,3,5,7,11,13,15,17]:
return True
return False
primes = [1,2,3,5,7,11,13,15,17,19,23]
for prime in primes:
print(prime)
test_values(is_prime(prime), True)
Explanation: Bonus: Testing your code
Here is an easy way to test that your code does what you want
End of explanation |
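# To see the failure message test_values produces, here is a small sketch that
# deliberately compares two different values and catches the AssertionError.
try:
    test_values(2 + 2, 5)
except AssertionError as error:
    print(error)   # prints: 4 is not equal to 5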
10,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Quantum Double-slit Experiment
Step2: Now define the double_slit function and make it interactive | Python Code:
%pylab inline
import numpy as np
import matplotlib.pyplot as plot
from scipy.integrate import trapz,cumtrapz
from IPython.html.widgets import interact, interactive
def distribute1D(x,prob,N):
    """takes any distribution which is directly proportional
    to the number of particles, and returns data that is
    statistically the same as the input data.
    """
CDF = cumtrapz(prob)/np.sum(prob)
xsamples = np.zeros(N,float)
for i in range(0,N):
r = np.random.ranf()
xsamples[i] = x[CDF.searchsorted(r)]
return xsamples
Explanation: Quantum Double-slit Experiment: A Monte Carlo Model
Let’s consider the double-slit experiment as an example of the "Monte Carlo" simulation technique. In the lab, we relate the intensity of the observed beam (either photons or electrons) to the sum of the two waves, one from each slit. Each slit gives us a diffraction pattern,
$$
I_{SS_{diffraction}} = \text{sinc}^2(a x / \lambda L)
$$
where $\text{sinc}(x) = \sin(\pi x)/(\pi x)$ is the normalized sinc function.
The double slit, however, is not just the sum of the two single slits, but rather includes an interference term,
$$
I_{DS_{interference}} = \cos^2(2\pi d x/\lambda L)
$$
due to the wave-nature of the photons or electrons.
The observed intensity includes both the probability that the diffraction and interference are appreciable.
$$
I_{DS_{total}} = I_{SS_{diffraction}} * I_{DS_{interference}}
$$
<table>
<tr>
<td>
Here is a diagram to illustrate the concept and define the variables.
<img src = "../img/QuantumDoubleSlit.png" width = 300>
</td>
<td>The intensity on the screen will look something like this:
<img src = "../img/DSIntensity.png" width = 400></td>
</tr>
</table>
Now, let’s do the quantum mechanics problem. What if we let just one photon or electron pass through the slit at a time? What would we see on the screen?
Instead of seeing the addition of waves, we’d see the location of the individual photon or electron. Because $E = h\nu$, the intensity plotted above is also the un-normalized probability distribution of finding a photon or electron at any single location.
To simulate this experiment, we’ll define the experimental parameters, create the probability distribution, and then throw random numbers to distribute photons based on their probability. To make it awesome, we'll set the parameters up as an interactive widget so we can explore the system in detail.
Let the initial distance between the slits $d$ = 15 $\mu$m and the slit width $a$ = 10 $\mu$m. The screen is 1 m from the plate with the slits. We will use a He-Neon laser with wavelength $\lambda$ = 632.8 nm. The screen size is (0.2 $\times$ 0.2) m.
We can set up the probability distribution for this situation and use a Monte-Carlo simulation technique to generate $N$ photons in the range (0,10000).
End of explanation
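# Rough length scales implied by the intensity expressions above, taking the
# formulas exactly as written: the cos^2(2*pi*d*x/(lam*L)) term repeats every
# lam*L/(2*d), and the sinc^2(a*x/(lam*L)) envelope has its first zero at
# lam*L/a. Using d = 15 um, a = 10 um, L = 1 m, lam = 632.8 nm:
lam_m, L_m, d_m, a_m = 632.8e-9, 1.0, 15.0e-6, 10.0e-6
print("fringe spacing ~ %.3f m" % (lam_m * L_m / (2.0 * d_m)))
print("diffraction envelope first zero ~ %.3f m" % (lam_m * L_m / a_m))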
#Quantum double-slit
#define the experimental parameters
#d = 15. # (micron) dist. between slits
#a = 10. # (micron) slit width.
#L = 1. # (m) dist. from slit to screen
#lam = 632.8 # (nm) He-Neon laser
def double_slit(d=15.,a=10.,L=3.,lam=632.8,N=0):
#convert d and a in microns to meters
dm = d*1.e-6
am = a*1.e-6
#convert wavelength from nm to m
wave=lam*1.e-9
# create the probability distribution
x = np.linspace(-0.2,0.2,10000)
#Isingle = np.sin(np.pi*am*x/wave/L)**2./(np.pi*am*x/wave/L)**2
Isingle = np.sinc(am*x/wave/L)**2.
Idouble = (np.cos(2*np.pi*dm*x/wave/L)**2)
Itot = Isingle*Idouble
#generate the random photon locations on the screen
#x according to the intensity distribution
xsamples = distribute1D(x,Itot,N)
#y randomly over the full screen height
ysamples = -0.2 + 0.4*np.random.ranf(N)
#Make subplot of the intensity and the screen distribution
fig = plt.figure(1,(10,6))
plt.subplot(2,1,1)
plt.plot(x,Itot)
plt.xlim(-0.2,0.2)
plt.ylim(0.,1.2)
plt.ylabel("Intensity",fontsize=20)
plt.subplot(2,1,2)
plt.xlim(-0.2,0.2)
plt.ylim(-0.2,0.2)
plt.scatter(xsamples,ysamples)
plt.xlabel("x (m)",fontsize=20)
plt.ylabel("y (m)",fontsize=20)
v5 = interact(double_slit,d=(1.,20.,1.), a=(5,50.,1.), L=(1.0,3.0),
lam=(435.,700.),N=(0,10000))
Explanation: Now define the double_slit function and make it interactive:
End of explanation |
10,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Cookbook-for-cantera_tools-module" data-toc-modified-id="Cookbook-for-cantera_tools-module-1"><span class="toc-item-num">1 </span>Cookbook for cantera_tools module</a></div><div class="lev2 toc-item"><a href="#better-names-for-RMG-mechanisms" data-toc-modified-id="better-names-for-RMG-mechanisms-11"><span class="toc-item-num">1.1 </span>better names for RMG mechanisms</a></div><div class="lev2 toc-item"><a href="#reducing-a-mechanism-by-reactions" data-toc-modified-id="reducing-a-mechanism-by-reactions-12"><span class="toc-item-num">1.2 </span>reducing a mechanism by reactions</a></div><div class="lev2 toc-item"><a href="#running-a-simulation" data-toc-modified-id="running-a-simulation-13"><span class="toc-item-num">1.3 </span>running a simulation</a></div><div class="lev3 toc-item"><a href="#run_simulation-example" data-toc-modified-id="run_simulation-example-131"><span class="toc-item-num">1.3.1 </span><code>run_simulation</code> example</a></div><div class="lev3 toc-item"><a href="#run_simulation_till_conversion-example" data-toc-modified-id="run_simulation_till_conversion-example-132"><span class="toc-item-num">1.3.2 </span><code>run_simulation_till_conversion</code> example</a></div><div class="lev3 toc-item"><a href="#find_ignition_delay-example" data-toc-modified-id="find_ignition_delay-example-133"><span class="toc-item-num">1.3.3 </span><code>find_ignition_delay</code> example</a></div><div class="lev3 toc-item"><a href="#set-specific-state-variables-with-time" data-toc-modified-id="set-specific-state-variables-with-time-134"><span class="toc-item-num">1.3.4 </span>set specific state variables with time</a></div><div class="lev2 toc-item"><a href="#analyzing-data" data-toc-modified-id="analyzing-data-14"><span class="toc-item-num">1.4 </span>analyzing data</a></div><div class="lev3 toc-item"><a href="#obtaining-reaction-and-species-data" data-toc-modified-id="obtaining-reaction-and-species-data-141"><span class="toc-item-num">1.4.1 </span>obtaining reaction and species data</a></div><div class="lev3 toc-item"><a href="#looking-at-a-list-of-reactions-consuming/producing-a-molecule" data-toc-modified-id="looking-at-a-list-of-reactions-consuming/producing-a-molecule-142"><span class="toc-item-num">1.4.2 </span>looking at a list of reactions consuming/producing a molecule</a></div><div class="lev3 toc-item"><a href="#view-branching-ratio" data-toc-modified-id="view-branching-ratio-143"><span class="toc-item-num">1.4.3 </span>view branching ratio</a></div><div class="lev3 toc-item"><a href="#creating-flux-diagrams" data-toc-modified-id="creating-flux-diagrams-144"><span class="toc-item-num">1.4.4 </span>creating flux diagrams</a></div>
# Cookbook for cantera_tools module
This notebook describes some of the methods in this package and how they can be used.
Step1: better names for RMG mechanisms
Many RMG models have poorly-named species, due in part to restrictions of CHEMKIN names. Cantera has fewer restrictions, so mechanisms produced with it can have more understandable names. This example converts an RMG CHEMKIN file to a Cantera file which uses SMILES to name species.
This method will place an input_nicely_named.cti and a species_dictionary_nicely_named.txt into the folder specified in the method
Step2: reducing a mechanism by reactions
The module can create a reduced mechanism given a list of desired reaction strings, written the way Cantera represents them (these can be found with solution.reaction_equations()). It will remove any unused species as well.
Step3: running a simulation
Simulations can be run in the following ways
Step4: run_simulation_till_conversion example
Step5: find_ignition_delay example
Step6: set specific state variables with time
Specific state variables (like temperature) can be set across a simulation.
To use this, change the condition_type to the string that describes the
situation (the list of acceptable strings is described in the docstring of run_simulation).
Typically you also need to supply a list of the state variable to change which corresponds with the times in the times variable.
Step7: analyzing data
obtaining reaction and species data
Step8: viewing species concentrations
species concentration with time can be accessed from the dataframe contained in outputs['species']
Step9: view reactions consuming/producing a molecule
Negative values indicate that the reaction consumes the molecule. Positive values indicate that the reaction produces the molecule.
Step10: view branching ratio
Step11: creating flux diagrams
The method save_flux_diagrams, shown below, runs a simulation saving the diagrams at various times. The method save_flux_diagram can be integrated into another simulation solver. | Python Code:
import cantera_tools as ctt
import numpy as np
from scipy import integrate
import cantera as ct
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Cookbook-for-cantera_tools-module" data-toc-modified-id="Cookbook-for-cantera_tools-module-1"><span class="toc-item-num">1 </span>Cookbook for cantera_tools module</a></div><div class="lev2 toc-item"><a href="#better-names-for-RMG-mechanisms" data-toc-modified-id="better-names-for-RMG-mechanisms-11"><span class="toc-item-num">1.1 </span>better names for RMG mechanisms</a></div><div class="lev2 toc-item"><a href="#reducing-a-mechanism-by-reactions" data-toc-modified-id="reducing-a-mechanism-by-reactions-12"><span class="toc-item-num">1.2 </span>reducing a mechanism by reactions</a></div><div class="lev2 toc-item"><a href="#running-a-simulation" data-toc-modified-id="running-a-simulation-13"><span class="toc-item-num">1.3 </span>running a simulation</a></div><div class="lev3 toc-item"><a href="#run_simulation-example" data-toc-modified-id="run_simulation-example-131"><span class="toc-item-num">1.3.1 </span><code>run_simulation</code> example</a></div><div class="lev3 toc-item"><a href="#run_simulation_till_conversion-example" data-toc-modified-id="run_simulation_till_conversion-example-132"><span class="toc-item-num">1.3.2 </span><code>run_simulation_till_conversion</code> example</a></div><div class="lev3 toc-item"><a href="#find_ignition_delay-example" data-toc-modified-id="find_ignition_delay-example-133"><span class="toc-item-num">1.3.3 </span><code>find_ignition_delay</code> example</a></div><div class="lev3 toc-item"><a href="#set-specific-state-variables-with-time" data-toc-modified-id="set-specific-state-variables-with-time-134"><span class="toc-item-num">1.3.4 </span>set specific state variables with time</a></div><div class="lev2 toc-item"><a href="#analyzing-data" data-toc-modified-id="analyzing-data-14"><span class="toc-item-num">1.4 </span>analyzing data</a></div><div class="lev3 toc-item"><a href="#obtaining-reaction-and-species-data" data-toc-modified-id="obtaining-reaction-and-species-data-141"><span class="toc-item-num">1.4.1 </span>obtaining reaction and species data</a></div><div class="lev3 toc-item"><a href="#looking-at-a-list-of-reactions-consuming/producing-a-molecule" data-toc-modified-id="looking-at-a-list-of-reactions-consuming/producing-a-molecule-142"><span class="toc-item-num">1.4.2 </span>looking at a list of reactions consuming/producing a molecule</a></div><div class="lev3 toc-item"><a href="#view-branching-ratio" data-toc-modified-id="view-branching-ratio-143"><span class="toc-item-num">1.4.3 </span>view branching ratio</a></div><div class="lev3 toc-item"><a href="#creating-flux-diagrams" data-toc-modified-id="creating-flux-diagrams-144"><span class="toc-item-num">1.4.4 </span>creating flux diagrams</a></div>
# Cookbook for cantera_tools module
This notebook describes some of the methods in this package and how they can be used.
End of explanation
ctt.obtain_cti_file_nicely_named('cookbook_files/',original_ck_file='chem.inp')
Explanation: better names for RMG mechanisms
Many RMG models have poorly-named species, due in part to restrictions of CHEMKIN names. Cantera has fewer restrictions, so mechanisms produced with it can have more understandable names. This example converts an RMG CHEMKIN file to a Cantera file which uses SMILES to name species.
This method will place an input_nicely_named.cti and a species_dictionary_nicely_named.txt into the folder specified in the method
End of explanation
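# Assuming the conversion above succeeded, the nicely named mechanism should be
# loadable like any other Cantera input file (the path below follows the output
# location described above and is illustrative).
nicely_named_solution = ct.Solution('cookbook_files/input_nicely_named.cti')
print(nicely_named_solution.species_names[:10])   # species now named by SMILES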
model_link = 'cookbook_files/model.cti'
desired_reactions = ['CH3OH + O2 <=> CH2OH(29) + HO2(12)',
'C3H8 + O2 <=> C3H7(61) + HO2(12)',
'C3H8 + O2 <=> C3H7(60) + HO2(12)',
'CH3OH + OH(10) <=> CH2OH(29) + H2O(11)',
'C3H8 + OH(10) <=> C3H7(60) + H2O(11)',
'C3H8 + OH(10) <=> C3H7(61) + H2O(11)',
'CH3OH + HO2(12) <=> CH2OH(29) + H2O2(13)',
'C3H8 + HO2(12) <=> C3H7(61) + H2O2(13)',
'C3H8 + HO2(12) <=> C3H7(60) + H2O2(13)',
'C3H7(60) + O2 <=> C3H7O2(78)',
'C3H7(61) + O2 <=> C3H7O2(80)',]
# make the reduced mechanism using the full mechanism `.cti` file.
solution_reduced = ctt.create_mechanism(model_link, kept_reaction_equations=desired_reactions)
# NOTE: this cantera Solution object can now be used like any other
Explanation: reducing a mechanism by reactions
The module can create a reduced mechanism given a list of desired reaction strings, written the way Cantera represents them (these can be found with solution.reaction_equations()). It will remove any unused species as well.
End of explanation
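# To see exactly how Cantera writes each reaction string (so that entries in
# kept_reaction_equations match), one way is to list them from the full model:
full_solution = ct.Solution('cookbook_files/model.cti')
for equation in full_solution.reaction_equations()[:10]:
    print(equation)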
model_link = 'cookbook_files/model.cti'
# creates the cantera Solution object
solution = ctt.create_mechanism(model_link)
#initial mole fractions
mole_fractions = {'N2':5, 'O2':1, 'C3H8': 0.3}
# set initial conditions of solution in kelvin pascals and mole fractions
conditions = 800, 10**6, mole_fractions
solution.TPX = conditions
# store 100 times between 10^-8s and 1s, with an initial point at t=0
times = np.logspace(-8,0,num=100)
times = np.insert(times,0,0)
# run the simulation
outputs = ctt.run_simulation(solution, times,
condition_type = 'constant-temperature-and-pressure',
output_reactions = True,
output_directional_reactions = True,
output_rop_roc=True)
# you can combine outputs how you would like with pd.concat
result = pd.concat([outputs['conditions'], outputs['species'], outputs['directional_reactions']], axis = 'columns')
# data can be saved to avoid rerunning the simulation for data analysis (in most cases). these can be loaded using pandas.from_pickle() and pandas.from_csv()
result.to_pickle('cookbook_files/{}.pic'.format('run_simulation_example'))
result.to_csv('cookbook_files/{}.csv'.format('run_simulation_example'))
Explanation: running a simulation
Simulations can be run in the following ways:
run_simulation - you give the method the times at which you want data saved, and it saves data at each time.
run_simulation_till_conversion - this method will run a simulation until the specified conversion is reached for a target species.
find_ignition_delay - you give this method the initial conditions and it outputs the ignition delay determined by the maximum of $\frac{dT}{dt}$, as well as simulation data given every so many iterator steps.
These methods currently work for constant temperature and pressure or adiabatic constant volume.
It's also possible to adapt these methods to your specific situation. If you think your adaptation will be useful for others, consider talking with the author (posting an issue or in person) or just making a pull request.
run_simulation example
End of explanation
model_link = 'cookbook_files/model.cti'
# creates the cantera Solution object
solution = ctt.create_mechanism(model_link)
# finds initial mole fraction for a fuel-air ratio
mole_fractions = ctt.get_initial_mole_fractions(stoich_ratio = 1,
fuel_mole_ratios=[1],
oxygen_per_fuel_at_stoich_list = [5],
fuels = ['C3H8'])
# set initial conditions of solution in kelvin pascals and mole fractions
conditions = 950, 10**6, mole_fractions
solution.TPX = conditions
# run simulation
output_till_conversion = ctt.run_simulation_till_conversion(solution,
species='C3H8',
conversion=0.5,
condition_type = 'constant-temperature-and-pressure',
output_species = True,
output_reactions = True,
output_directional_reactions = True,
output_rop_roc = True,
skip_data = 25)
Explanation: run_simulation_till_conversion example
End of explanation
model_link = 'cookbook_files/model.cti'
# creates the cantera Solution object
solution = ctt.create_mechanism(model_link)
# finds initial mole fraction for a fuel-air ratio of 1 with 30%/70% methanol/propane blend
# for non-combustion conditions, this can be replaced by a dictionary of values {'CH3OH': 0.3, 'C3H8':0.7}
mole_fractions = ctt.get_initial_mole_fractions(stoich_ratio = 1,
fuel_mole_ratios = [.3,.7],
oxygen_per_fuel_at_stoich_list = [1.5,5],
fuels = ['CH3OH','C3H8'])
# set initial conditions of solution in kelvin pascals and mole fractions
conditions = 750, 10**6, mole_fractions
# run simulation
outputs = ctt.find_ignition_delay(solution, conditions,
output_profile = True,
output_directional_reactions = True,
skip_data = 1000)
# obtain the ignition delays
ignition_delay = outputs['ignition_delay']
Explanation: find_ignition_delay example
End of explanation
model_link = 'cookbook_files/model.cti'
# creates the cantera Solution object
solution = ctt.create_mechanism(model_link)
#initial mole fractions
mole_fractions = {'N2':5, 'O2':1, 'C3H8': 0.3}
# set initial conditions of solution in kelvin pascals and mole fractions
conditions = 800, 10**6, mole_fractions
solution.TPX = conditions
# store 100 times between 10^-8s and 0.01s, with an initial point at t=0
times = np.logspace(-8,-2,num=100)
times = np.insert(times,0,0)
# set a linear ramp temperature from 800 to 1000 at 1e-5s followed by constant temperature
ramp_temperatures = 800 + 2000000 * times[:50]
constant_temperatures = np.ones(51) * 1000
temperatures = np.concatenate((ramp_temperatures,constant_temperatures))
# run the simulation
outputs = ctt.run_simulation(solution, times,
condition_type = 'specified-temperature-constant-volume',
output_reactions = True,
output_directional_reactions = True,
output_rop_roc= False,
temperature_values = temperatures)
Explanation: set specific state variables with time
Specific state variables (like temperature) can be set across a simulation.
To use this, change the condition_type to the string that describes the
situation (the list of acceptable strings is described in the docstring of run_simulation).
Typically you also need to supply a list of the state variable to change which corresponds with the times in the times variable.
End of explanation
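# The accepted condition_type strings are listed in run_simulation's docstring,
# which can be read directly:
help(ctt.run_simulation)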
# this outputs a dataframe of just species
species = outputs['species']
reactions = outputs['net_reactions']
forward_and_reverse_reactions = outputs['directional_reactions']
net_observables = outputs['conditions']
# obtain reactions with a specific molecule
reactions_with_propane = ctt.find_reactions(df=reactions,
solution=solution,
species = 'C3H8')
Explanation: analyzing data
obtaining reaction and species data
End of explanation
species['C3H8'].plot()
Explanation: viewing species concentrations
species concentration with time can be accessed from the dataframe contained in outputs['species']
End of explanation
propane_production = ctt.consumption_pathways(df=reactions,
solution=solution,
species = 'C3H8')
f, ax = plt.subplots()
reactions_with_propane.plot.line(ax=ax)
import plot_tools as ptt
ptt.place_legend_outside_plot(axis=ax)
ax.set_ylabel('production rate (kmol/m3s)')
Explanation: view reactions consuming/producing a molecule
Negative values indicate that the reaction consumes the molecule. Positive values indicate that the reaction produces the molecule.
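A minimal pandas-only sketch (no extra ctt calls) that splits the rates at the final stored time into producing and consuming reactions:
final_rates = reactions_with_propane.iloc[-1]
producing = final_rates[final_rates > 0].sort_values(ascending=False)
consuming = final_rates[final_rates < 0].sort_values()
print(producing.head())
print(consuming.head())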
End of explanation
# this outputs the branching ratio of propane
branching = ctt.branching_ratios(df=reactions,
solution=solution,
compound='C3H8')
f, ax = plt.subplots()
# plot only the top 6 branching ratios
branching.iloc[:,:6].plot.area(ax=ax)
import plot_tools as ptt
ptt.place_legend_outside_plot(axis=ax)
ax.set_ylabel('branching ratio')
Explanation: view branching ratio
End of explanation
model_link = 'cookbook_files/model.cti'
solution = ctt.create_mechanism(model_link)
mole_fractions = {'N2':5, 'O2':1, 'C3H8': 0.3}
conditions = 800, 10**6, mole_fractions
solution.TPX = conditions
#only specify the times you want a flux diagram at
times = np.logspace(-8,0,num=3)
# run the simulation & create flux diagrams
outputs = ctt.save_flux_diagrams(solution, times,
condition_type = 'constant-temperature-and-pressure',
path='cookbook_files/',
filename='cookbook_fluxes',
filetype = 'svg',
element='C')
Explanation: creating flux diagrams
The method save_flux_diagrams, shown below, runs a simulation and saves a flux diagram at each of the specified times. The lower-level method save_flux_diagram can be integrated into another simulation solver.
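For orientation, a hedged sketch of the kind of logic save_flux_diagram likely wraps, written against Cantera's public API rather than ctt (the reactor choice, advance time, and file name here are assumptions, not taken from the ctt source):
import cantera as ct
gas = ct.Solution('cookbook_files/model.cti')
gas.TPX = 800, 10**6, {'N2': 5, 'O2': 1, 'C3H8': 0.3}
reactor = ct.IdealGasConstPressureReactor(gas, energy='off')  # constant T and P
net = ct.ReactorNet([reactor])
net.advance(1e-4)
diagram = ct.ReactionPathDiagram(gas, 'C')  # carbon flux at this point in time
diagram.write_dot('cookbook_files/flux_at_1e-4s.dot')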
End of explanation |
10,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step7: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014.
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
def compute_saliency_maps(X, y, model):
  """
  Compute a class saliency map using the model for images X and labels y.

  Input:
  - X: Input images, of shape (N, 3, H, W)
  - y: Labels for X, of shape (N,)
  - model: A PretrainedCNN that will be used to compute the saliency map.

  Returns:
  - saliency: An array of shape (N, H, W) giving the saliency maps for the input
    images.
  """
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
N, _, H, W = X.shape
saliency = np.zeros((N, H, W))
scores, cache = model.forward(X, mode='test')
dscores = np.zeros_like(scores)
dscores[np.arange(N), y] = 1
dX, grads = model.backward(dscores, cache)
# max along the channel dimension
saliency = np.max(np.absolute(dX), axis=1)
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
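As a side note, a small matplotlib-only sketch of an alternative visualization that overlays a saliency map on its image with alpha blending (X and y are assumed to be a small batch of validation images and labels, as in the surrounding cells; this is an illustration, not part of the assignment):
saliency = compute_saliency_maps(X, y, model)
plt.imshow(deprocess_image(X[0], data['mean_image']))
plt.imshow(saliency[0], cmap=plt.cm.hot, alpha=0.5)
plt.axis('off')
plt.show()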
End of explanation
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
def make_fooling_image(X, target_y, model):
  """
  Generate a fooling image that is close to X, but that the model classifies
  as target_y.

  Inputs:
  - X: Input image, of shape (1, 3, 64, 64)
  - target_y: An integer in the range [0, 100)
  - model: A PretrainedCNN

  Returns:
  - X_fooling: An image that is close to X, but that is classified as target_y
    by the model.
  """
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
N = X.shape[0]
lr = 500
reg = 2e-5
i = 0
while True:
i += 1
scores, cache = model.forward(X_fooling)
if np.argmax(scores, axis=-1) == np.array([target_y]):
print 'Fooled image in %dth iteration' % (i)
break
if not i % 100:
print('iteration %d iteration, y_pred: %s, target_y: %s' %
(i, np.argmax(scores, axis=-1), [target_y]))
dscores = np.zeros_like(scores)
dscores[np.arange(N), [target_y]] = 1
dX, grads = model.backward(dscores, cache)
X_fooling += lr * (dX + reg * (X_fooling - X))
pass
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[0]][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
10,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 3
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() method with a lambda function.
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
Step6: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
Step10: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like
Step11: Now try a 15th degree polynomial
Step12: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Step13: Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps
Step14: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
Step15: Some questions you will be asked on your quiz
Step16: Next you should write a loop that does the following
Step17: Quiz Question
Step18: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray raised to a polynomial power up to the total degree, e.g. for degree = 3, column 1 is the SArray, column 2 is the SArray squared, and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
print tmp*tmp
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() method with a lambda function.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
End of explanation
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
name_left = 'power_' + str(power-1)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature * poly_sframe[name_left]
return poly_sframe
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
End of explanation
print polynomial_sframe(tmp, 3)
Explanation: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
sales = sales.sort(['sqft_living', 'price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features = poly3_data.column_names() # get the name of the features
poly3_data['price'] = sales['price'] # add price to the data since it's the target
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly3_data['power_1'],poly3_data['price'],'.',
poly3_data['power_1'], model3.predict(poly3_data),'-')
Explanation: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
End of explanation
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly15_data['power_1'],poly15_data['price'],'.',
poly15_data['power_1'], model15.predict(poly15_data),'-')
def fitAndPlot(data, degree):
poly_data = polynomial_sframe(data['sqft_living'], degree)
my_features = poly_data.column_names() # get the name of the features
poly_data['price'] = data['price'] # add price to the data since it's the target
model = graphlab.linear_regression.create(poly_data, target = 'price', features = my_features, validation_set = None)
plt.plot(poly_data['power_1'],poly_data['price'],'.',
poly_data['power_1'], model.predict(poly_data),'-')
model.get("coefficients").print_rows(num_rows = 16)
Explanation: Now try a 15th degree polynomial:
End of explanation
fitAndPlot(sales, 15)
Explanation: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
End of explanation
sub1, sub2 = sales.random_split(0.5, seed=0)
set_1, set_2 = sub1.random_split(0.5, seed=0)
set_3, set_4 = sub2.random_split(0.5, seed=0)
Explanation: Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps:
* First split sales into 2 subsets with .random_split(0.5, seed=0).
* Next split the resulting subsets into 2 more subsets each. Use .random_split(0.5, seed=0).
We set seed=0 in these steps so that different users get consistent results.
You should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.
End of explanation
fitAndPlot(set_1, 15)
fitAndPlot(set_2, 15)
fitAndPlot(set_3, 15)
fitAndPlot(set_4, 15)
Explanation: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
End of explanation
training_and_validation, testing = sales.random_split(0.9, seed=1)
training, validation = training_and_validation.random_split(0.5, seed=1)
Explanation: Some questions you will be asked on your quiz:
Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?
Quiz Question: (True/False) the plotted fitted lines look the same in all four plots
Selecting a Polynomial Degree
Whenever we have a "magic" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).
We split the sales dataset 3-way into training set, test set, and validation set as follows:
Split our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).
Further split our training data into two sets: training and validation. Use random_split(0.5, seed=1).
Again, we set seed=1 to obtain consistent results for different users.
End of explanation
def fitAndValidate(training, validation, degree):
poly_train = polynomial_sframe(training['sqft_living'], degree)
my_features = poly_train.column_names() # get the name of the features
poly_train['price'] = training['price'] # add price to the data since it's the target
model = graphlab.linear_regression.create(poly_train, target = 'price', verbose=False,
features = my_features, validation_set = None)
poly_validation = polynomial_sframe(validation['sqft_living'], degree)
predictions = model.predict(poly_validation)
errors = predictions - validation['price']
RSS = (errors * errors).sum()
return model, RSS
lowest_RSS_degree = 0
RSS = float("inf")
for degree in range(1, 16):
model, modelRSS = fitAndValidate(training, validation, degree)
if modelRSS < RSS:
lowest_RSS_degree = degree
RSS = modelRSS
print lowest_RSS_degree
Explanation: Next you should write a loop that does the following:
* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
* Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
* hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
* Add train_data['price'] to the polynomial SFrame
* Learn a polynomial regression model to sqft vs price with that degree on TRAIN data
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynmial SFrame using validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)
(Note you can turn off the print out of linear_regression.create() with verbose = False)
End of explanation
model, RSS = fitAndValidate(training, validation, 6)
Explanation: Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?
Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
End of explanation
poly_test = polynomial_sframe(testing['sqft_living'], 6)  # use the same degree (6) the model above was fit with, not the leftover loop variable
poly_test['price'] = testing['price']
model.evaluate(poly_test)
errors = model.predict(poly_test) - testing['price']
testRSS = (errors * errors).sum()
print testRSS
Explanation: Quiz Question: what is the RSS on TEST data for the model with the degree selected from Validation data?
End of explanation |
10,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Insights from medical posts
In this notebook, I try to find characteristics of medical posts.
What is the ratio of posts from professionals vs. those from the general public?
What are the characteristics that well-separate professional-level posts?
Length of text
Usage of offending vocabulary
Writing level
Step1: Only 10 out of 505 users are experts!
This corresponds to 2 % of users.
Step2: Length of text | Python Code:
# Set up paths/ os
import os
import sys
this_path=os.getcwd()
os.chdir("../data")
sys.path.insert(0, this_path)
# Load datasets
import pandas as pd
df = pd.read_csv("MedHelp-posts.csv",index_col=0)
df.head(2)
df_users = pd.read_csv("MedHelp-users.csv",index_col=0)
df_users.head(2)
# 1 classify users as professionals and general public:
df_users['is expert']=0
for user_id in df_users.index:
user_description=df_users.loc[user_id,['user description']].values
if ( "," in user_description[0]):
print(user_description[0])
df_users.loc[user_id,['is expert']]=1
# Save database:
df_users.to_csv("MedHelp-users-class.csv")
is_expert=df_users['is expert'] == 1
is_expert.value_counts()
Explanation: Insights from medical posts
In this notebook, I try to find characteristics of medical posts.
What is the ratio of posts from professionals vs. those from the general public?
What are the characteristics that well-separate professional-level posts?
Length of text
Usage of offending vocabulary
Writing level
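The cells below cover the first characteristic (text length). As a pointer for the "writing level" item, a hedged sketch of one way it could be quantified, assuming the third-party textstat package is installed (it is not used elsewhere in this notebook):
import textstat
df['reading ease'] = df['text'].apply(textstat.flesch_reading_ease)
expert_flag = df['user id'].map(df_users['is expert'])
print(df.groupby(expert_flag)['reading ease'].describe())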
End of explanation
# Select user_id from DB where is_professional = 1
experts_ids = df_users[df_users['is expert'] == 1 ].index.values
experts_ids
non_experts_ids = df_users[df_users['is expert'] == 0 ].index.values
# Select * where user_id in experts_ids
#df_users.loc[df_users.index.isin(experts_ids)]
df_experts=df.loc[df['user id'].isin(experts_ids)]
print('Total of posts from expert users {}'.format(len(df_experts)))
print('Total of posts {}'.format(len(df)))
print('Ratio {}'.format(len(df_experts)/len(df)))
del df_experts
Explanation: Only 10 out of 505 users are experts!
This corresponds to 2 % of users.
End of explanation
# Tokenize data
import nltk
tokenizer = nltk.RegexpTokenizer(r'\w+')
# Get the length of tokens into a columns
df_text = df['text'].str.lower()
df_token = df_text.apply(tokenizer.tokenize)
df['token length'] = df_token.apply(len)
# Get list of tokens from text in first article:
#for text in df_text.values:
# ttext = tokenizer.tokenize(text.lower())
# lenght_text=len(ttext)
# break
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.mlab as mlab
from matplotlib import gridspec
from scipy.stats import norm
import numpy as np
from scipy.optimize import curve_fit
from lognormal import lognormal, lognormal_stats,truncated_normal
from scipy.stats import truncnorm
plt.rcParams['text.usetex'] = True
plt.rcParams['text.latex.unicode'] = True
plt.rcParams.update({'font.size': 24})
nbins=100
fig = plt.figure()
#fig=plt.figure(figsize=(2,1))
#fig.set_size_inches(6.6,3.3)
gs = gridspec.GridSpec(2, 1)
#plt.subplots_adjust(left=0.1,right=1.0,bottom=0.17,top=0.9)
#plt.suptitle('Text length (words count)')
fig.text(0.04,0.5,'Distribution',va='center',rotation='vertical')
#X ticks
xmax=200
x=np.arange(0,xmax,10) #xtics
xx=np.arange(1,xmax,1)
# Panel 1
ax1=plt.subplot(gs[0])
ax1.set_xlim([0, xmax])
ax1.set_xticks(x)
ax1.tick_params(labelbottom='off')
#plt.ylabel('')
#Class 0
X=df.loc[df['user id'].isin(non_experts_ids)]['token length'].values
n,bins,patches=plt.hist(X,nbins,normed=1,facecolor='cyan',align='mid')
popt,pcov = curve_fit(truncated_normal,bins[:nbins],n)
c0,=plt.plot(xx,truncated_normal(xx,*popt),color='blue',label='non expert')
plt.legend(handles=[c0],bbox_to_anchor=(0.45, 0.95), loc=2, borderaxespad=0.)
print(popt)
mu=X.mean()
var=X.var()
print("Class 0: Mean,variance: ({},{})".format(mu,var))
# Panel 2
ax2=plt.subplot(gs[1])
ax2.set_xlim([0, xmax])
ax2.set_xticks(x)
#ax2.set_yticks(np.arange(0,8,2))
#plt.ylabel('Normal distribution')
#Class 1
X=df.loc[df['user id'].isin(experts_ids)]['token length'].values
#(mu,sigma) = norm.fit(X)
n,bins,patches=plt.hist(X,nbins,normed=1,facecolor='orange',align='mid')
popt,pcov = curve_fit(lognormal,bins[:nbins],n)
#c1,=plt.plot(xx,mlab.normpdf(xx, mu, sigma),color='darkorange',label='layered')
c1,=plt.plot(xx,lognormal(xx,*popt),color='red',label='expert')
plt.legend(handles=[c1],bbox_to_anchor=(0.45, 0.95), loc=2, borderaxespad=0.)
print("Class 1: Mean,variance:",lognormal_stats(*popt))
#plt.xlabel('Volume ratio (theor./expt.)')
plt.show()
# Cumulative percentile of expert-post lengths falling below each threshold value
X=df.loc[df['user id'].isin(experts_ids)]['token length'].values
total=len(X)
for ix in range(10,500,10):
this_sum=0
for xx in X:
if xx < ix:
this_sum = this_sum + 1
percentile = this_sum/total * 100
print("Value {} percentile {}".format(ix,percentile))
Explanation: Length of text
End of explanation |
10,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep LSTM RNNs
Step1: Dataset
Step2: Check the data real quick
Step3: Preparing the data for training
Step4: Long short-term memory (LSTM) RNNs
An LSTM block has mechanisms to enable "memorizing" information for an extended number of time steps. We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps
Step5: Attach the gradients
Step6: Softmax Activation
Step7: Cross-entropy loss function
Step8: Averaging the loss over the sequence
Step9: Optimizer
Step11: Define the model
Step12: Test and visualize predictions | Python Code:
from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd
import numpy as np
from collections import defaultdict
mx.random.seed(1)
# ctx = mx.gpu(0)
ctx = mx.cpu(0)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from datetime import datetime
# import mpld3
sns.set_style('whitegrid')
#sns.set_context('notebook')
sns.set_context('poster')
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
#set_matplotlib_formats('pdf', 'svg')
set_matplotlib_formats('pdf', 'png')
SEQ_LENGTH = 100 + 1 # needs to be at least the seq_length for training + 1 because of the time shift between inputs and labels
NUM_SAMPLES_TRAINING = 5000 + 1
NUM_SAMPLES_TESTING = 100 + 1
CREATE_DATA_SETS = False # True if you don't have the data files or re-create them
Explanation: Deep LSTM RNNs
End of explanation
def gimme_one_random_number():
return nd.random_uniform(low=0, high=1, shape=(1,1)).asnumpy()[0][0]
def create_one_time_series(seq_length=10):
freq = (gimme_one_random_number()*0.5) + 0.1 # 0.1 to 0.6
ampl = gimme_one_random_number() + 0.5 # 0.5 to 1.5
x = np.sin(np.arange(0, seq_length) * freq) * ampl
return x
def create_batch_time_series(seq_length=10, num_samples=4):
column_labels = ['t'+str(i) for i in range(0, seq_length)]
df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()
df.columns = column_labels
df.index = ['s'+str(0)]
for i in range(1, num_samples):
more_df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()
more_df.columns = column_labels
more_df.index = ['s'+str(i)]
df = pd.concat([df, more_df], axis=0)
return df # returns a dataframe of shape (num_samples, seq_length)
# Create some time-series
# uncomment below to force predictible random numbers
# mx.random.seed(1)
if CREATE_DATA_SETS:
data_train = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TRAINING)
data_test = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TESTING)
# Write data to csv
data_train.to_csv("../data/timeseries/train.csv")
data_test.to_csv("../data/timeseries/test.csv")
else:
data_train = pd.read_csv("../data/timeseries/train.csv", index_col=0)
data_test = pd.read_csv("../data/timeseries/test.csv", index_col=0)
Explanation: Dataset: "Some time-series"
End of explanation
# num_sampling_points = min(SEQ_LENGTH, 400)
# (data_train.sample(4).transpose().iloc[range(0, SEQ_LENGTH, SEQ_LENGTH//num_sampling_points)]).plot()
Explanation: Check the data real quick
End of explanation
# print(data_train.loc[:,data_train.columns[:-1]]) # inputs
# print(data_train.loc[:,data_train.columns[1:]]) # outputs (i.e. inputs shift by +1)
batch_size = 64
batch_size_test = 1
seq_length = 16
num_batches_train = data_train.shape[0] // batch_size
num_batches_test = data_test.shape[0] // batch_size_test
num_features = 1 # we do 1D time series for now, this is like vocab_size = 1 for characters
# inputs are from t0 to t_seq_length - 1. because the last point is kept for the output ("label") of the penultimate point
data_train_inputs = data_train.loc[:,data_train.columns[:-1]]
data_train_labels = data_train.loc[:,data_train.columns[1:]]
data_test_inputs = data_test.loc[:,data_test.columns[:-1]]
data_test_labels = data_test.loc[:,data_test.columns[1:]]
train_data_inputs = nd.array(data_train_inputs.values).reshape((num_batches_train, batch_size, seq_length, num_features))
train_data_labels = nd.array(data_train_labels.values).reshape((num_batches_train, batch_size, seq_length, num_features))
test_data_inputs = nd.array(data_test_inputs.values).reshape((num_batches_test, batch_size_test, seq_length, num_features))
test_data_labels = nd.array(data_test_labels.values).reshape((num_batches_test, batch_size_test, seq_length, num_features))
train_data_inputs = nd.swapaxes(train_data_inputs, 1, 2)
train_data_labels = nd.swapaxes(train_data_labels, 1, 2)
test_data_inputs = nd.swapaxes(test_data_inputs, 1, 2)
test_data_labels = nd.swapaxes(test_data_labels, 1, 2)
print('num_samples_training={0} | num_batches_train={1} | batch_size={2} | seq_length={3}'.format(NUM_SAMPLES_TRAINING, num_batches_train, batch_size, seq_length))
print('train_data_inputs shape: ', train_data_inputs.shape)
print('train_data_labels shape: ', train_data_labels.shape)
# print(data_train_inputs.values)
# print(train_data_inputs[0]) # see what one batch looks like
Explanation: Preparing the data for training
End of explanation
num_inputs = num_features # for a 1D time series, this is just a scalar equal to 1.0
num_outputs = num_features # same comment
num_hidden_units = [8, 8] # num of hidden units in each hidden LSTM layer
num_hidden_layers = len(num_hidden_units) # num of hidden LSTM layers
num_units_layers = [num_features] + num_hidden_units
########################
# Weights connecting the inputs to the hidden layer
########################
Wxg, Wxi, Wxf, Wxo, Whg, Whi, Whf, Who, bg, bi, bf, bo = {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}
for i_layer in range(1, num_hidden_layers+1):
num_inputs = num_units_layers[i_layer-1]
num_hidden_units = num_units_layers[i_layer]
Wxg[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01
Wxi[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01
Wxf[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01
Wxo[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01
########################
# Recurrent weights connecting the hidden layer across time steps
########################
Whg[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01
Whi[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01
Whf[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01
Who[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01
########################
# Bias vector for hidden layer
########################
bg[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01
bi[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01
bf[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01
bo[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01
########################
# Weights to the output nodes
########################
Why = nd.random_normal(shape=(num_units_layers[-1], num_outputs), ctx=ctx) * .01
by = nd.random_normal(shape=num_outputs, ctx=ctx) * .01
Explanation: Long short-term memory (LSTM) RNNs
An LSTM block has mechanisms to enable "memorizing" information for an extended number of time steps. We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps: $\newcommand{\xb}{\mathbf{x}} \newcommand{\RR}{\mathbb{R}}$
$$g_t = \text{tanh}(X_t W_{xg} + h_{t-1} W_{hg} + b_g),$$
$$i_t = \sigma(X_t W_{xi} + h_{t-1} W_{hi} + b_i),$$
$$f_t = \sigma(X_t W_{xf} + h_{t-1} W_{hf} + b_f),$$
$$o_t = \sigma(X_t W_{xo} + h_{t-1} W_{ho} + b_o),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$
$$h_t = o_t \odot \text{tanh}(c_t),$$
where $\odot$ is an element-wise multiplication operator, and
for all $\xb = [x_1, x_2, \ldots, x_k]^\top \in \RR^k$ the two activation functions:
$$\sigma(\xb) = \left[\frac{1}{1+\exp(-x_1)}, \ldots, \frac{1}{1+\exp(-x_k)}\right]^\top,$$
$$\text{tanh}(\xb) = \left[\frac{1-\exp(-2x_1)}{1+\exp(-2x_1)}, \ldots, \frac{1-\exp(-2x_k)}{1+\exp(-2x_k)}\right]^\top.$$
In the transformations above, the memory cell $c_t$ stores the "long-term" memory in the vector form.
In other words, the information accumulatively captured and encoded until time step $t$ is stored in $c_t$ and is only passed along the same layer over different time steps.
Given the inputs $c_t$ and $h_t$, the input gate $i_t$ and forget gate $f_t$ will help the memory cell to decide how to overwrite or keep the memory information. The output gate $o_t$ further lets the LSTM block decide how to retrieve the memory information to generate the current state $h_t$ that is passed to both the next layer of the current time step and the next time step of the current layer. Such decisions are made using the hidden-layer parameters $W$ and $b$ with different subscripts; these parameters are allocated in the following cell and learned during the training loop implemented later in this notebook.
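As a reading aid, a minimal sketch of one LSTM step for the first hidden layer, written with mxnet.ndarray exactly as in the equations above (the toy shapes are assumptions for illustration; the version actually used for training is single_lstm_unit_calcs, defined further on):
X = nd.random_normal(shape=(batch_size, num_features), ctx=ctx)   # one time step of inputs
h_prev = nd.zeros(shape=(batch_size, num_units_layers[1]), ctx=ctx)
c_prev = nd.zeros(shape=(batch_size, num_units_layers[1]), ctx=ctx)
g = nd.tanh(nd.dot(X, Wxg[1]) + nd.dot(h_prev, Whg[1]) + bg[1])
i = nd.sigmoid(nd.dot(X, Wxi[1]) + nd.dot(h_prev, Whi[1]) + bi[1])
f = nd.sigmoid(nd.dot(X, Wxf[1]) + nd.dot(h_prev, Whf[1]) + bf[1])
o = nd.sigmoid(nd.dot(X, Wxo[1]) + nd.dot(h_prev, Who[1]) + bo[1])
c = f * c_prev + i * g          # new memory cell
h = o * nd.tanh(c)              # new hidden state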
Allocate parameters
End of explanation
params = []
for i_layer in range(1, num_hidden_layers+1):
params += [Wxg[i_layer], Wxi[i_layer], Wxf[i_layer], Wxo[i_layer], Whg[i_layer], Whi[i_layer], Whf[i_layer], Who[i_layer], bg[i_layer], bi[i_layer], bf[i_layer], bo[i_layer]]
params += [Why, by] # add the output layer
for param in params:
param.attach_grad()
Explanation: Attach the gradients
End of explanation
def softmax(y_linear, temperature=1.0):
lin = (y_linear-nd.max(y_linear)) / temperature
exp = nd.exp(lin)
partition = nd.sum(exp, axis=0, exclude=True).reshape((-1,1))
return exp / partition
Explanation: Softmax Activation
End of explanation
def cross_entropy(yhat, y):
return - nd.mean(nd.sum(y * nd.log(yhat), axis=0, exclude=True))
def rmse(yhat, y):
return nd.mean(nd.sqrt(nd.sum(nd.power(y - yhat, 2), axis=0, exclude=True)))
Explanation: Cross-entropy loss function
End of explanation
def average_ce_loss(outputs, labels):
assert(len(outputs) == len(labels))
total_loss = 0.
for (output, label) in zip(outputs,labels):
total_loss = total_loss + cross_entropy(output, label)
return total_loss / len(outputs)
def average_rmse_loss(outputs, labels):
assert(len(outputs) == len(labels))
total_loss = 0.
for (output, label) in zip(outputs,labels):
total_loss = total_loss + rmse(output, label)
return total_loss / len(outputs)
Explanation: Averaging the loss over the sequence
End of explanation
def SGD(params, learning_rate):
for param in params:
# print('grrrrr: ', param.grad)
param[:] = param - learning_rate * param.grad
def adam(params, learning_rate, M , R, index_adam_call, beta1, beta2, eps):
k = -1
for param in params:
k += 1
M[k] = beta1 * M[k] + (1. - beta1) * param.grad
R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2
# bias correction since we initilized M & R to zeros, they're biased toward zero on the first few iterations
m_k_hat = M[k] / (1. - beta1**(index_adam_call))
r_k_hat = R[k] / (1. - beta2**(index_adam_call))
if((np.isnan(M[k].asnumpy())).any() or (np.isnan(R[k].asnumpy())).any()):
            raise ValueError('NaN encountered in the Adam moment estimates M or R')
# print('grrrrr: ', param.grad)
param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)
# print('m_k_hat r_k_hat', m_k_hat, r_k_hat)
return params, M, R
# def adam(params, learning_rate, M, R, index_iteration, beta1=0.9, beta2=0.999, eps=1e-8):
# for k, param in enumerate(params):
# if k==0:
# print('batch_iteration {}: {}'.format(index_iteration, param))
# M[k] = beta1 * M[k] + (1. - beta1) * param.grad
# R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2
# m_k_hat = M[k] / (1. - beta1**(index_iteration))
# r_k_hat = R[k] / (1. - beta2**(index_iteration))
# param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)
# # print(beta1, beta2, M, R)
# if k==0:
# print('batch_iteration {}: {}'.format(index_iteration, param.grad))
# for k, param in enumerate(params):
# print('batch_iteration {}: {}'.format(index_iteration, param))
# return M, R
Explanation: Optimizer
End of explanation
def single_lstm_unit_calcs(X, c, Wxg, h, Whg, bg, Wxi, Whi, bi, Wxf, Whf, bf, Wxo, Who, bo):
g = nd.tanh(nd.dot(X, Wxg) + nd.dot(h, Whg) + bg)
i = nd.sigmoid(nd.dot(X, Wxi) + nd.dot(h, Whi) + bi)
f = nd.sigmoid(nd.dot(X, Wxf) + nd.dot(h, Whf) + bf)
o = nd.sigmoid(nd.dot(X, Wxo) + nd.dot(h, Who) + bo)
#######################
c = f * c + i * g
h = o * nd.tanh(c)
return c, h
def deep_lstm_rnn(inputs, h, c, temperature=1.0):
    """
    h: dict of nd.arrays, each key is the index of a hidden layer (from 1 to whatever).
    Index 0, if any, is the input layer.
    """
outputs = []
# inputs is one BATCH of sequences so its shape is number_of_seq, seq_length, features_dim
# (latter is 1 for a time series, vocab_size for a character, n for a n different times series)
for X in inputs:
# X is batch of one time stamp. E.g. if each batch has 37 sequences, then the first value of X will be a set of the 37 first values of each of the 37 sequences
# that means each iteration on X corresponds to one time stamp, but it is done in batches of different sequences
h[0] = X # the first hidden layer takes the input X as input
for i_layer in range(1, num_hidden_layers+1):
# lstm units now have the 2 following inputs:
# i) h_t from the previous layer (equivalent to the input X for a non-deep lstm net),
# ii) h_t-1 from the current layer (same as for non-deep lstm nets)
c[i_layer], h[i_layer] = single_lstm_unit_calcs(h[i_layer-1], c[i_layer], Wxg[i_layer], h[i_layer], Whg[i_layer], bg[i_layer], Wxi[i_layer], Whi[i_layer], bi[i_layer], Wxf[i_layer], Whf[i_layer], bf[i_layer], Wxo[i_layer], Who[i_layer], bo[i_layer])
yhat_linear = nd.dot(h[num_hidden_layers], Why) + by
# yhat is a batch of several values of the same time stamp
# this is basically the prediction of the sequence, which overlaps most of the input sequence, plus one point (character or value)
# yhat = softmax(yhat_linear, temperature=temperature)
# yhat = nd.sigmoid(yhat_linear)
# yhat = nd.tanh(yhat_linear)
yhat = yhat_linear # we cant use a 1.0-bounded activation function since amplitudes can be greater than 1.0
outputs.append(yhat) # outputs has same shape as inputs, i.e. a list of batches of data points.
# print('some shapes... yhat outputs', yhat.shape, len(outputs) )
return (outputs, h, c)
Explanation: Define the model
End of explanation
def test_prediction(one_input_seq, one_label_seq, temperature=1.0):
#####################################
# Set the initial state of the hidden representation ($h_0$) to the zero vector
##################################### # some better initialization needed??
h, c = {}, {}
for i_layer in range(1, num_hidden_layers+1):
h[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)
c[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)
outputs, h, c = deep_lstm_rnn(one_input_seq, h, c, temperature=temperature)
loss = rmse(outputs[-1][0], one_label_seq)
return outputs[-1][0].asnumpy()[-1], one_label_seq.asnumpy()[-1], loss.asnumpy()[-1], outputs, one_label_seq
def check_prediction(index):
o, label, loss, outputs, labels = test_prediction(test_data_inputs[index], test_data_labels[index], temperature=1.0)
prediction = round(o, 3)
true_label = round(label, 3)
outputs = [float(i.asnumpy().flatten()) for i in outputs]
true_labels = list(test_data_labels[index].asnumpy().flatten())
# print(outputs, '\n----\n', true_labels)
df = pd.DataFrame([outputs, true_labels]).transpose()
df.columns = ['predicted', 'true']
# print(df)
rel_error = round(100. * (prediction / true_label - 1.0), 2)
# print('\nprediction = {0} | actual_value = {1} | rel_error = {2}'.format(prediction, true_label, rel_error))
return df
epochs = 48 # at some point, some nans appear in M, R matrices of Adam. TODO investigate why
moving_loss = 0.
learning_rate = 0.001 # 0.1 works for a [8, 8] after about 70 epochs of 32-sized batches
# Adam Optimizer stuff
beta1 = .9
beta2 = .999
index_adam_call = 0
# M & R arrays to keep track of momenta in adam optimizer. params is a list that contains all ndarrays of parameters
M = {k: nd.zeros_like(v) for k, v in enumerate(params)}
R = {k: nd.zeros_like(v) for k, v in enumerate(params)}
df_moving_loss = pd.DataFrame(columns=['Loss', 'Error'])
df_moving_loss.index.name = 'Epoch'
# needed to update plots on the fly
%matplotlib notebook
fig, axes_fig1 = plt.subplots(1,1, figsize=(6,3))
fig2, axes_fig2 = plt.subplots(1,1, figsize=(6,3))
for e in range(epochs):
############################
    # Attenuate the learning rate by a factor of 2 every 80 epochs
############################
if ((e+1) % 80 == 0):
learning_rate = learning_rate / 2.0 # TODO check if its ok to adjust learning_rate when using Adam Optimizer
h, c = {}, {}
for i_layer in range(1, num_hidden_layers+1):
h[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)
c[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)
for i in range(num_batches_train):
data_one_hot = train_data_inputs[i]
label_one_hot = train_data_labels[i]
with autograd.record():
outputs, h, c = deep_lstm_rnn(data_one_hot, h, c)
loss = average_rmse_loss(outputs, label_one_hot)
loss.backward()
# SGD(params, learning_rate)
index_adam_call += 1 # needed for bias correction in Adam optimizer
params, M, R = adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, 1e-8)
##########################
# Keep a moving average of the losses
##########################
if (i == 0) and (e == 0):
moving_loss = nd.mean(loss).asscalar()
else:
moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
df_moving_loss.loc[e] = round(moving_loss, 4)
############################
# Predictions and plots
############################
data_prediction_df = check_prediction(index=e)
axes_fig1.clear()
data_prediction_df.plot(ax=axes_fig1)
fig.canvas.draw()
prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1], 3)
true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3)
rel_error = round(100. * np.abs(prediction / true_label - 1.0), 2)
print("Epoch = {0} | Loss = {1} | Prediction = {2} True = {3} Error = {4}".format(e, moving_loss, prediction, true_label, rel_error ))
axes_fig2.clear()
if e == 0:
moving_rel_error = rel_error
else:
moving_rel_error = .9 * moving_rel_error + .1 * rel_error
df_moving_loss.loc[e, ['Error']] = moving_rel_error
axes_loss_plot = df_moving_loss.plot(ax=axes_fig2, secondary_y='Loss', color=['r','b'])
axes_loss_plot.right_ax.grid(False)
# axes_loss_plot.right_ax.set_yscale('log')
fig2.canvas.draw()
%matplotlib inline
Explanation: Test and visualize predictions
End of explanation |
10,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: Contents
Emission temperature and lapse rates
Solar Radiation
Terrestrial Radiation and absorption spectra
<a id='section1'></a>
1. Emission temperature and lapse rates
Planetary energy balance is the foundation for all climate modeling. So far we have expressed this through a globally averaged budget
$$C \frac{d T_s}{dt} = (1-\alpha) Q - OLR$$
and we have written the OLR in terms of an emission temperature $T_e$ where by definition
$$ OLR = \sigma T_e^4 $$
Using values from the observed planetary energy budget, we found that $T_e = 255$ K.
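A quick numerical check of this value (sigma is the Stefan-Boltzmann constant, and 239 W m$^{-2}$ is the observed OLR quoted further down):
sigma = 5.67E-8   # W m-2 K-4
OLR = 239.        # W m-2
Te = (OLR / sigma)**0.25
print(Te)   # approximately 255 K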
The emission temperature of the planet is thus about 33 K colder than the mean surface temperature (288 K).
Where in the atmosphere do we find $T = T_e = 255$ K?
That's about -18ºC.
Let's plot global, annual average observed air temperature from NCEP reanalysis data.
Step2: Let's make a better plot.
Here we're going to use a package called metpy to automate plotting this temperature profile in a way that's more familiar to meteorologists.
Step3: Note that surface temperature in global mean is indeed about 288 K or 15ºC as we keep saying.
So where do we find temperature $T_e=255$ K or -18ºC?
Actually in mid-troposphere, near 500 hPa or about 5 km height.
We can infer that much of the outgoing longwave radiation actually originates far above the surface.
Recall that our observed global energy budget diagram shows 217 out of 239 W m$^{-2}$ total OLR emitted by the atmosphere and clouds, only 22 W m$^{-2}$ directly from the surface.
This is due to the greenhouse effect.
So far we have dealt with the greenhouse in a very artificial way in our energy balance model by simply assuming
$$ \text{OLR} = \tau \sigma T_s^4 $$
i.e., the OLR is reduced by a constant factor from the value it would have if the Earth emitted as a blackbody at the surface temperature.
Now it's time to start thinking a bit more about how the radiative transfer process actually occurs in the atmosphere, and how to model it.
<a id='section2'></a>
2. Solar Radiation
Let's plot a spectrum of solar radiation.
For details, see code in notebook!
Step4: Spectrum peaks in the visible range
most energy is at these wavelengths.
No coincidence that our eyes are sensitive to this range of wavelengths!
Longer wavelengths called “infrared”, shorter wavelengths called “ultraviolet”.
The shape of the spectrum is a fundamental characteristic of radiative emissions
(think about the color of burning coals in a fire – cooler = red, hotter = white)
Theory and experiments tell us that both the total flux of emitted radiation, and the wavelength of maximum emission, depend only on the temperature of the source!
The theoretical spectrum was worked out by Max Planck and is therefore known as the “Planck” spectrum (or simply blackbody spectrum).
Step5: Going from cool to warm
Step6: There is essentially no overlap between the two spectra.
This is the fundamental reason we can discuss the solar “shortwave” and terrestrial “longwave” radiation as two distinct phenomena.
In reality all radiation exists on a continuum of different wavelengths. But in climate science we can get a long way by thinking in terms of a very simple “two-stream” approximation (short and longwave). We’ve already been doing this throughout the course so far!
Atmospheric absorption spectra
Now look at the atmospheric absorption spectra.
(fraction of radiation at each wavelength that is absorbed on a single vertical path through the atmosphere)
<img src='../images/MarshallPlumbFig2.5.png'>
Figure reproduced from Marshall and Plumb (2008) | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 6: A Brief Review of Radiation
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) is available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
## The NOAA ESRL server is shut down! January 2019
ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_air = xr.open_dataset( ncep_url + "pressure/air.mon.1981-2010.ltm.nc",
decode_times=False)
#url = 'http://apdrc.soest.hawaii.edu:80/dods/public_data/Reanalysis_Data/NCEP/NCEP/clima/pressure/air'
#air = xr.open_dataset(url)
## The name of the vertical axis is different than the NOAA ESRL version..
#ncep_air = air.rename({'lev': 'level'})
print( ncep_air)
# Take global, annual average and convert to Kelvin
coslat = np.cos(np.deg2rad(ncep_air.lat))
weight = coslat / coslat.mean(dim='lat')
Tglobal = (ncep_air.air * weight).mean(dim=('lat','lon','time'))
Tglobal
# a "quick and dirty" visualization of the data
Tglobal.plot()
Explanation: Contents
Emission temperature and lapse rates
Solar Radiation
Terrestrial Radiation and absorption spectra
<a id='section1'></a>
1. Emission temperature and lapse rates
Planetary energy balance is the foundation for all climate modeling. So far we have expressed this through a globally averaged budget
$$C \frac{d T_s}{dt} = (1-\alpha) Q - OLR$$
and we have written the OLR in terms of an emission temperature $T_e$ where by definition
$$ OLR = \sigma T_e^4 $$
Using values from the observed planetary energy budget, we found that $T_e = 255$ K
The emission temperature of the planet is thus about 33 K colder than the mean surface temperature (288 K).
Where in the atmosphere do we find $T = T_e = 255$ K?
That's about -18ºC.
Let's plot global, annual average observed air temperature from NCEP reanalysis data.
End of explanation
from metpy.plots import SkewT
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=30)
skew.plot(Tglobal.level, Tglobal, color='black', linestyle='-', linewidth=2, label='Observations')
skew.ax.set_ylim(1050, 10)
skew.ax.set_xlim(-75, 45)
# Add the relevant special lines
skew.plot_dry_adiabats(linewidth=0.5)
skew.plot_moist_adiabats(linewidth=0.5)
#skew.plot_mixing_lines()
skew.ax.legend()
skew.ax.set_title('Global, annual mean sounding from NCEP Reanalysis',
fontsize = 16)
Explanation: Let's make a better plot.
Here we're going to use a package called metpy to automate plotting this temperature profile in a way that's more familiar to meteorologists: a so-called skew-T plot.
End of explanation
# Using pre-defined code for the Planck function from the climlab package
from climlab.utils.thermo import Planck_wavelength
# approximate emission temperature of the sun in Kelvin
Tsun = 5780.
# boundaries of visible region in nanometers
UVbound = 390.
IRbound = 700.
# array of wavelengths
wavelength_nm = np.linspace(0.1, 3500, 400)
to_meters = 1E-9 # conversion factor
label_size = 16
fig, ax = plt.subplots(figsize=(14,7))
ax.plot(wavelength_nm,
Planck_wavelength(wavelength_nm * to_meters, Tsun))
ax.grid()
ax.set_xlabel('Wavelength (nm)', fontsize=label_size)
ax.set_ylabel('Spectral radiance (W sr$^{-1}$ m$^{-3}$)', fontsize=label_size)
# Mask out points outside of this range
wavelength_vis = np.ma.masked_outside(wavelength_nm, UVbound, IRbound)
# Shade the visible region
ax.fill_between(wavelength_vis, Planck_wavelength(wavelength_vis * to_meters, Tsun))
title = 'Blackbody emission curve for the sun (T = {:.0f} K)'.format(Tsun)
ax.set_title(title, fontsize=label_size);
ax.text(280, 0.8E13, 'Ultraviolet', rotation='vertical', fontsize=12)
ax.text(500, 0.8E13, 'Visible', rotation='vertical', fontsize=16, color='w')
ax.text(800, 0.8E13, 'Infrared', rotation='vertical', fontsize=12);
Explanation: Note that surface temperature in global mean is indeed about 288 K or 15ºC as we keep saying.
So where do we find temperature $T_e=255$ K or -18ºC?
Actually in mid-troposphere, near 500 hPa or about 5 km height.
We can infer that much of the outgoing longwave radiation actually originates far above the surface.
Recall that our observed global energy budget diagram shows 217 out of 239 W m$^{-2}$ total OLR emitted by the atmosphere and clouds, only 22 W m$^{-2}$ directly from the surface.
This is due to the greenhouse effect.
So far we have dealt with the greenhouse in a very artificial way in our energy balance model by simply assuming
$$ \text{OLR} = \tau \sigma T_s^4 $$
i.e., the OLR is reduced by a constant factor from the value it would have if the Earth emitted as a blackbody at the surface temperature.
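As a rough illustration (an added sketch, not in the original notes), the value of $\tau$ implied by the observed numbers quoted above is:
sigma = 5.67E-8        # W m-2 K-4
Ts = 288.              # global mean surface temperature, K
OLR_observed = 239.    # W m-2
tau = OLR_observed / (sigma * Ts**4)
print(tau)   # about 0.61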
Now it's time to start thinking a bit more about how the radiative transfer process actually occurs in the atmosphere, and how to model it.
<a id='section2'></a>
2. Solar Radiation
Let's plot a spectrum of solar radiation.
For details, see code in notebook!
End of explanation
fig, ax = plt.subplots(figsize=(14,7))
wavelength_um = wavelength_nm / 1000
for T in [24000,12000,6000,3000]:
ax.plot(wavelength_um,
(Planck_wavelength(wavelength_nm * to_meters, T) / T**4),
label=str(T) + ' K')
ax.legend(fontsize=label_size)
ax.set_xlabel('Wavelength (um)', fontsize=label_size)
ax.set_ylabel('Normalized spectral radiance (W sr$^{-1}$ m$^{-3}$ K$^{-4}$)', fontsize=label_size)
ax.set_title("Normalized blackbody emission spectra $T^{-4} B_{\lambda}$ for different temperatures");
Explanation: Spectrum peaks in the visible range
most energy is at these wavelengths.
No coincidence that our eyes are sensitive to this range of wavelengths!
Longer wavelengths called “infrared”, shorter wavelengths called “ultraviolet”.
The shape of the spectrum is a fundamental characteristic of radiative emissions
(think about the color of burning coals in a fire – cooler = red, hotter = white)
Theory and experiments tell us that both the total flux of emitted radiation, and the wavelength of maximum emission, depend only on the temperature of the source!
The theoretical spectrum was worked out by Max Planck and is therefore known as the “Planck” spectrum (or simply blackbody spectrum).
End of explanation
fig, ax = plt.subplots(figsize=(14,7))
wavelength_um = np.linspace(0.1, 200, 10000)
wavelength_meters = wavelength_um / 1E6
for T in [6000, 255]:
ax.semilogx(wavelength_um,
(Planck_wavelength(wavelength_meters, T) / T**4 * wavelength_meters),
label=str(T) + ' K')
ax.legend(fontsize=label_size)
ax.set_xlabel('Wavelength (um)', fontsize=label_size)
ax.set_ylabel('Normalized spectral radiance (W sr$^{-1}$ m$^{-2}$ K$^{-4}$)', fontsize=label_size)
ax.set_title("Normalized blackbody emission spectra $T^{-4} \lambda B_{\lambda}$ for the sun ($T_e = 6000$ K) and Earth ($T_e = 255$ K)",
fontsize=label_size);
Explanation: Going from cool to warm:
total emission increases
maximum emission occurs at shorter wavelengths.
The integral of these curves over all wavelengths gives us our familiar $\sigma T^4$
Mathematically it turns out that
$$ λ_{max} T = \text{constant} $$
(known as Wien’s displacement law).
By fitting the observed solar emission to a blackbody curve, we can deduce that the emission temperature of the sun is about 6000 K.
Knowing this, and knowing that the solar spectrum peaks at 0.6 micrometers, we can calculate the wavelength of maximum terrestrial radiation as
$$ λ_{max}^{Earth} = 0.6 ~ \mu m \frac{6000}{255} = 14 ~ \mu m $$
This is in the far-infrared part of the spectrum.
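Written out as a quick calculation (an added sketch):
lambda_max_sun = 0.6           # um, approximate peak of the solar spectrum
T_sun, T_earth = 6000., 255.   # K
# Wien's law: lambda_max * T = constant
lambda_max_earth = lambda_max_sun * T_sun / T_earth
print(lambda_max_earth)   # about 14 um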
<a id='section3'></a>
3. Terrestrial Radiation and absorption spectra
Terrestrial versus solar wavelengths
Now let's look at normalized blackbody curves for Sun and Earth:
End of explanation
%load_ext version_information
%version_information numpy, matplotlib, xarray, metpy, climlab
Explanation: There is essentially no overlap between the two spectra.
This is the fundamental reason we can discuss the solar “shortwave” and terrestrial “longwave” radiation as two distinct phenomena.
In reality all radiation exists on a continuum of different wavelengths. But in climate science we can get a long way by thinking in terms of a very simple “two-stream” approximation (short and longwave). We’ve already been doing this throughout the course so far!
Atmospheric absorption spectra
Now look at the atmospheric absorption spectra.
(fraction of radiation at each wavelength that is absorbed on a single vertical path through the atmosphere)
<img src='../images/MarshallPlumbFig2.5.png'>
Figure reproduced from Marshall and Plumb (2008): Atmosphere, Ocean, and Climate Dynamics
Atmosphere is almost completely transparent in the visible range, right at the peak of the solar spectrum
Atmosphere is very opaque in the UV
Opacity across the IR spectrum is highly variable!
Look at the gases associated with various absorption features:
Main players include H$_2$O, CO$_2$, N$_2$O, O$_2$.
Compare to major constituents of atmosphere, in decreasing order:
78% N$_2$
21% O$_2$
1% Ar
H$_2$O (variable)
The dominant constituent gases N$_2$ and O$_2$ are nearly completely transparent across the entire spectrum (there are O$_2$ absorption features in far UV, but little energy at these wavelengths).
The greenhouse effect mostly involves trace constituents:
O$_3$ = 500 ppb
N$_2$O = 310 ppb
CO$_2$ = 400 ppm (but rapidly increasing!)
CH$_4$ = 1.7 ppm
Note that most of these are tri-atomic molecules! There are fundamental reasons for this: these molecules have modes of rotational and vibration that are easily excited at IR wavelengths. See courses in radiative transfer!
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation |
10,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google Hashcode 2022
Google Hashcode is a team programming competition to solve a complex engineering problem.
In this notebook we show how Mathematical Optimization methods such as Mixed Integer Programming (MIP) are useful for solving this kind of problem, as they are really easy to implement and give optimal solutions (not only trade-off ones), as opposed to greedy approaches or heuristics. We are solving the pizza warm-up exercise.
We are using AMPL as the modeling language to formulate the problem with two different approaches (not all formulations are equal in terms of complexity); coming up with enhancements or alternative approaches is an important part of the solving process.
As an instructive example of how to tackle this kind of problem, we are using the AMPL API for Python (AMPLPY), so we can read the input of the problem, easily translate it to a data file for AMPL, and retrieve the solution to compute the score. Because we use a MIP approach, the score will be the highest possible for the problem.
Problem statement
This year's statement is about a pizzeria: the goal is to maximize the number of customers coming, and we have to pick the ingredients of the only pizza that is going to be sold.
Step1: Let's use AMPL to formulate the previous problem. The following section sets up AMPL so it also runs in the cloud (not only locally) with Google Colab.
AMPLPY Setup in the cloud
Here is some documentation and examples of the API
Step2: Solving problem with AMPL
First, we need to write the model file (.mod) containing the mathematical formulation. After that, we will write a data file (.dat) to solve the different instances of the Hashcode problem.
Step3: Translate input with Python
The input files are in the folder input_data/, but they do not have the AMPL data format. Fortunately, we can easily parse the original input files to generate AMPL data files.
Step4: The file written can be displayed with ampl
Step5: Now, solve the problem using AMPL and Gurobi (MIP solver)
Step6: So the ingredients we should pick are
Step7: You can try this with the other practice instances!
The big ones can take several hours to reach the optimal solution, as MIP problems are usually hard because of the integrality constraints on the variables. That's why it is often necessary to reformulate the problem, or to try to improve an existing formulation by adding or combining constraints / variables. In the following section, we present an alternative point of view to attack the Hashcode practice problem, hoping the solver finds a solution earlier this way.
Alternative formulation
We could exploit the relations between customers and see what we can figure out from them. Actually, the goal is to get the biggest set of independent customers that are compatible (so that the pizza can contain all of their liked ingredients and none of their disliked ones). The ingredients we pick can then be deduced from the preferences of the particular customers we want to serve.
With this idea, let's propose a graph approach where each customer is represented by a node, and two nodes are connected by an edge if and only if the two customers are compatible. This is translated to the problem as
Step8: We can still use the same data files. | Python Code:
import os
if not os.path.isdir('input_data'):
os.system('git clone https://github.com/ampl/amplpy.git')
os.chdir('amplpy/notebooks/hashcode')
if not os.path.isdir('ampl_input'):
os.mkdir('ampl_input')
Explanation: Google Hashcode 2022
Google Hashcode is a team programming competition to solve a complex engineering problem.
In this notebook we show how Mathematical Optimization methods such as Mixed Integer Programming (MIP) are useful for solving this kind of problem, as they are really easy to implement and give optimal solutions (not only trade-off ones), as opposed to greedy approaches or heuristics. We are solving the pizza warm-up exercise.
We are using AMPL as the modeling language to formulate the problem with two different approaches (not all formulations are equal in terms of complexity); coming up with enhancements or alternative approaches is an important part of the solving process.
As an instructive example of how to tackle this kind of problem, we are using the AMPL API for Python (AMPLPY), so we can read the input of the problem, easily translate it to a data file for AMPL, and retrieve the solution to compute the score. Because we use a MIP approach, the score will be the highest possible for the problem.
Problem statement
This year's statement is about a pizzeria: the goal is to maximize the number of customers coming, and we have to pick the ingredients of the only pizza that is going to be sold:
Each customer has a list of ingredients he loves, and a list of those he does not like.
A customer will come to the pizzeria if the pizza has all the ingredients he likes, and does not have any disgusting ingredient for him.
Task: choose the exact ingredients the pizza should have so it maximizes the number of customers given their lists of preferences. The score is the number of customers coming to eat the pizza.
(The statement can be found here)
First formulation
The first MIP formulation will be straightforward. We have to define the variables we are going to use, and then the objective function and constraints will be easy to figure out.
Variables
We have to decide which ingredients to pick, so
* $x_i$ = 1 if the ingredient i is in the pizza, 0 otherwise.
* $y_j$ = 1 if the customer will come to the pizzeria, 0 otherwise.
Where $i = 1, .., I$ and $j = 1, .., c$ (c = total of customers and I = total of ingredients).
Objective function
The goal is to maximize the number of customers, so this is clear:
$$maximize \ \sum \limits_{j = 1}^c y_j$$
Finally, we need to tie the variables to have the meaning we need by using constraints.
Constraints
If customer $j$ comes, all of his liked ingredients must be picked (mathematically, $y_j=1$ implies $x_i = 1$ for every $i \in Likes_j$). So, for each $j = 1, .., c$:
$$|Likes_j| \cdot y_j \leq \sum \limits_{i \in Likes_j} x_i$$
Where $Likes_j$ is the set of ingredients customer $j$ likes, and $|Likes_j|$ is the number of elements of that set.
If any of the disliked ingredients is in the pizza, customer $j$ can't come (any $x_i = 1$ implies $y_j = 0$). For each customer $j = 1, .., c$:
$$\sum \limits_{i \in Dislikes_j} x_i \leq \frac{1}{2}+(|Dislikes_j|+\frac{1}{2})\cdot(1-y_j)$$
So when customer $j$ comes, the right side is equal to
$$\frac{1}{2}+(|Dislikes_j|+\frac{1}{2})\cdot(1-1) = \frac{1}{2} + 0 = \frac{1}{2}$$
This forces the left side to be zero, because the $x_i$ variables are binary. If customer $j$ does not come, the inequality is satisfied trivially.
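As a quick numeric check of this constraint (an added Python sketch; the value 3 for $|Dislikes_j|$ is just a toy assumption, while eps = 0.5 matches the model file below):
from math import floor
eps = 0.5
n_dislikes = 3   # toy value for |Dislikes_j|
for y_j in (0, 1):
    rhs = 1 - eps + (n_dislikes + eps) * (1 - y_j)
    print(y_j, floor(rhs))   # y_j = 1 allows 0 disliked ingredients; y_j = 0 imposes no restriction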
We will need the input data files from the problem; they are available in the amplpy GitHub repository:
End of explanation
!pip install -q amplpy ampltools
MODULES=['ampl', 'gurobi']
from ampltools import cloud_platform_name, ampl_notebook
from amplpy import AMPL, register_magics
if cloud_platform_name() is None:
ampl = AMPL() # Use local installation of AMPL
else:
ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it
register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()
Explanation: Let's use AMPL to formulate the previous problem. The following section sets up AMPL so it also runs in the cloud (not only locally) with Google Colab.
AMPLPY Setup in the cloud
Here is some documentation and examples of the API: Documentation, GitHub Repository, PyPI Repository, other Jupyter Notebooks. The following cell is enough to install it. We are using ampl (modeling language) and gurobi (solver) modules.
End of explanation
%%writefile pizza.mod
# PARAMETERS AND SETS
param total_customers;
# Set of ingredients
set INGR;
# Customers lists of preferences
set Likes{1..total_customers};
set Dislikes{1..total_customers};
# VARIABLES
# Take or not to take the ingredient
var x{i in INGR}, binary;
# customer comes OR NOT
var y{j in 1..total_customers}, binary;
# OBJECTIVE FUNCTION
maximize Total_Customers: sum{j in 1..total_customers} y[j];
s.t.
Customer_Likes{j in 1..total_customers}:
card(Likes[j])*y[j] <= sum{i in Likes[j]} x[i];
param eps := 0.5;
Customer_Dislikes{j in 1..total_customers}:
sum{i in Dislikes[j]} x[i] <= 1-eps+(card(Dislikes[j])+eps)*(1-y[j]);
Explanation: Solving problem with AMPL
First, we need to write the model file (.mod) containing the mathematical formulation. After that, we will write a data file (.dat) to solve the different instances of the Hashcode problem.
End of explanation
import sys
# dict to map chars to hashcode original filenames
filename = {
'a':'input_data/a_an_example.in.txt',
'b':'input_data/b_basic.in.txt',
'c':'input_data/c_coarse.in.txt',
'd':'input_data/d_difficult.in.txt',
'e':'input_data/e_elaborate.in.txt'
}
def read(testcase):
original_stdout = sys.stdout
with open(filename[testcase]) as input_file, open('ampl_input/pizza_'+testcase+'.dat', 'w+') as output_data_file:
sys.stdout = output_data_file # Change the standard output to the file we created.
# total_customers
total_customers = int(input_file.readline())
print('param total_customers :=',total_customers,';')
# loop over customers
ingr=set()
for c in range(1, total_customers+1):
likes = input_file.readline().split()
likes.pop(0)
print('set Likes['+str(c)+'] := ',end='')
print(*likes, end = ' ')
print(';')
dislikes = input_file.readline().split()
dislikes.pop(0)
print('set Dislikes['+str(c)+'] := ',end='')
print(*dislikes, end = ' ')
print(';')
ingr = ingr.union(set(likes))
ingr = ingr.union(set(dislikes))
print('set INGR :=')
print(*sorted(ingr), end = '\n')
print(';')
sys.stdout = original_stdout
# Let's try with problem 'c' from hashcode
read('c')
Explanation: Translate input with Python
The input files are in the folder input_data/, but they do not have the AMPL data format. Fortunately, we can easily parse the original input files to generate AMPL data files.
End of explanation
%%ampl_eval
shell 'cat ampl_input/pizza_c.dat';
Explanation: The file written can be displayed with ampl:
End of explanation
os.listdir('ampl_input')
%%ampl_eval
model pizza.mod;
data ampl_input/pizza_c.dat;
option solver gurobi;
solve;
display x, y;
Explanation: Now, solve the problem using AMPL and Gurobi (MIP solver)
End of explanation
%%ampl_eval
printf "%d ", sum{i in INGR} x[i] > output_file.out;
for{i in INGR}{
if x[i] = 1 then printf "%s ", i >> output_file.out;
}
shell 'cat output_file.out';
Explanation: So the ingredients we should pick are:
* byyii, dlust, luncl, tfeej, vxglq, xdozp and xveqd.
* Customers coming are: 4, 5, 7, 8, 10. Total score: 5.
We can write an output file in the hashcode format:
End of explanation
%%writefile pizza_alternative.mod
# PARAMETERS AND SETS
param total_customers;
# Set of ingredients
set INGR;
# Customers lists of preferences
set Likes{1..total_customers};
set Dislikes{1..total_customers};
# VARIABLES
# customer comes OR NOT <=> node in the clique or not
var x{i in 1..total_customers}, binary;
# OBJECTIVE FUNCTION
maximize Total_Customers: sum{i in 1..total_customers} x[i];
s.t.
# Using the set operations to check if two nodes are not connected
Compatible{i in 1..total_customers-1, j in i+1..total_customers : card(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1}:
x[i]+x[j] <= 1;
Explanation: You can try this with the other practice instances!
The big ones can take several hours to reach the optimal solution, as MIP problems are usually hard because of the integrality constraints on the variables. That's why it is often necessary to reformulate the problem, or to try to improve an existing formulation by adding or combining constraints / variables. In the following section, we present an alternative point of view to attack the Hashcode practice problem, hoping the solver finds a solution earlier this way.
Alternative formulation
We could exploit the relations between customers and see what we can figure out from them. Actually, the goal is to get the biggest set of independent customers that are compatible (so that the pizza can contain all of their liked ingredients and none of their disliked ones). The ingredients we pick can then be deduced from the preferences of the particular customers we want to serve.
With this idea, let's propose a graph approach where each customer is represented by a node, and two nodes are connected by an edge if and only if the two customers are compatible. This is translated to the problem as:
Customer $i$'s liked ingredients are not in customer $j$'s disliked list (and vice versa).
With sets, this is:
$$Liked_i \cap Disliked_j = Liked_j \cap Disliked_i = \emptyset $$
So the problem is reduced to finding the maximal clique in the graph (a clique is a subset of nodes such that every pair of them is connected by an edge), which is an NP-Complete problem. The clique is maximal with respect to the number of nodes.
New variables
To solve the clique problem we may use the binary variables:
* $x_i$ = 1 if the node belongs to the maximal clique, 0 otherwise. For each $i = 1, .., c$.
Objective function
It is the same as in the previous approach, since a node $i$ is in the maximal clique if and only if customer $i$ comes to the pizzeria in the corresponding optimal solution of the original problem. A bigger clique induces a better solution, and a better solution implies that its customers form a bigger clique, as all of them are compatible.
$$maximize \ \sum \limits_{i = 1}^c x_i$$
New constraints
The constraints are quite simple now. Two nodes that are not connected can't be in the same clique. For each pair of nodes not connected $i$ and $j$:
$$x_i + x_j \leq 1$$
Formulation with AMPL
We are writing a new model file (very similar to the previous one). In order to reuse the data files and not get any errors, we will keep the INGR set although it is not going to be used anymore.
The most interesting feature in the model could be the condition to check that two customers are incompatible to generate a constraint. The condition is:
$$Liked_i \cap Disliked_j \neq \emptyset \ \text{ or } \ Liked_j \cap Disliked_i \neq \emptyset$$
A set is not empty if its cardinality is greater or equal to one, so in AMPL we could write:
card(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1
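For intuition, the same incompatibility test can be sketched in plain Python with sets (an illustration added here, not part of the original notebook):
def incompatible(likes_i, dislikes_i, likes_j, dislikes_j):
    # Two customers clash if either one's liked ingredients intersect the other's dislikes
    return bool(set(likes_i) & set(dislikes_j)) or bool(set(likes_j) & set(dislikes_i))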
End of explanation
%%ampl_eval
reset;
model pizza_alternative.mod;
data ampl_input/pizza_c.dat;
option solver gurobi;
solve;
display x;
%%ampl_eval
set picked_ingr default {};
for{i in 1..total_customers}{
if x[i] = 1 then let picked_ingr := picked_ingr union Likes[i];
}
printf "%d ", card(picked_ingr) > output_file.out;
for{i in picked_ingr}{
printf "%s ", i >> output_file.out;
}
shell 'cat output_file.out';
Explanation: We can still use the same data files.
End of explanation |
10,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In Python, the underscore naming rules are often quite confusing: single underscore, double underscore, double underscores both leading and trailing... What exactly are their roles and use cases?
1. Single underscore (_)
Normally, a single underscore (_) is used in the following 3 scenarios:
1.1 In the interpreter:
In this case, "_" holds the result of the last statement executed in an interactive interpreter session. This usage was first adopted by the standard CPython interpreter and later picked up by other interpreters as well.
Step1: 1.2 As a name:
This is somewhat related to the point above: here "_" is used as a throwaway name, so that anyone reading your code knows you assigned a specific name that will not be used again later. For example, in the following snippet you may not care about the actual loop counter value, so you can use "_".
Step2: 1.3 Internationalization:
You may also have seen "_" used as a function. In that case it is usually the name of the function that performs the lookup/translation between internationalized and localized strings, which seems to originate from and follow the corresponding C convention. For example, in the "Translation" section of the Django documentation you will find code like the following:
Step3: As you can see, scenarios two and three can conflict with each other, so we should avoid using "_" as a throwaway name in any block of code that also uses "_" as the internationalization lookup/translation function.
2. Single leading underscore (e.g. _shahriar)
Programmers use a single leading underscore to mark a name as "private". It is something of a convention, so that other people (or you yourself) using this code know that a name starting with "_" is for internal use only. As the Python documentation puts it:
Names prefixed with an underscore "_" (e.g. _spam) should be treated as non-public parts of the API (whether functions, methods or data members). They should be considered an implementation detail and may be changed without notice.
As said above, this really is more of a convention, although it does mean something to the interpreter: if you use a wildcard import ("from somemodule import *"), names starting with "_" will not be imported unless the module's or package's __all__ list explicitly includes them. Note, however, that if you import the module with "import a_module", you can still reach such objects as a_module._some_var.
There is another, rarely needed use of a single leading underscore: an extension library written in C is sometimes named with a leading underscore and then wrapped by a Python module without the underscore. For example, the struct module is actually a Python wrapper around the C module _struct.
3. Double leading underscore (e.g. __shahriar)
A double leading underscore (__) in a name (specifically a method name) is not just a convention; it has a specific meaning to the interpreter. Python uses it to avoid name clashes with names defined by subclasses. The Python documentation points out that any identifier of the form "__spam" (at least two leading underscores, at most one trailing underscore) is textually replaced with "_classname__spam", where "classname" is the current class name with leading underscores stripped. For example:
Step4: As expected, "_internal_use" is unchanged, while "__method_name" has been mangled into "_ClassName__method_name".
Private variables are converted to this long form (effectively made public) before code generation. The transformation inserts the class name, prefixed with an underscore character, at the front of the variable name. This is what is called private name mangling. As a consequence, if you create a subclass B of A, you will not easily be able to override A's "__method_name" method.
Whether a member starts with a single or a double underscore, the intent is the same: external developers should not use these member variables and functions directly. The double underscore simply enforces this more directly at the syntax level, although the member can still be reached as _ClassName__member. A single underscore can be more convenient for interactive debugging, so as long as everyone on the team agrees not to touch underscore-prefixed members directly, a single underscore may be the better choice.
4. Double leading and trailing underscores (e.g. __init__)
This style denotes Python's special method names. It is really just a convention that ensures the Python system does not clash with user-defined names. Typically you override these methods and implement the functionality you need inside them, so that Python can call them. For example, when defining a class you very often override the "__init__" method.
Names with double leading and trailing underscores are Python's "magic" objects, such as the class members __init__, __del__, __add__, __getitem__, or the module-level __file__, __name__, and so on. The official recommendation is to never apply this naming style to your own variables or functions and to use these names only as documented. You could invent your own special method names, but don't.
5. An aside: if __name__ == "__main__"
Step5: Once you know this, you can design a test suite for your module inside the module itself by adding this if statement. When you run the module directly, __name__ is "__main__", so the test suite executes. When you import the module, __name__ is something else, so the test suite is skipped. This makes it much easier to develop and debug a new module before integrating it into a larger program.
6. Exposing interfaces with __all__
Python can expose an interface at the module level:
Doing so is often quite useful, since it provides a convention for stating which names are the public interface.
Unlike Ruby or Java, Python has no language-native visibility control; it relies on a set of conventions that everyone is expected to follow, for example that names starting with an underscore should be treated as invisible to the outside. Likewise, __all__ is a convention about a module's public interface; compared with underscores, __all__ provides a "whitelist" of the names to expose. Names that do not start with an underscore (for example, members imported into the current module from elsewhere) can be excluded in the same way.
6.1 Controlling the behaviour of from xxx import *
Using from xxx import * is of course discouraged in real code (it makes it hard to tell where a particular function or attribute comes from, and it makes debugging and refactoring harder), but it is still common for convenience when debugging in a console. If a module spam does not define __all__, then from spam import * imports all of spam's members whose names do not start with an underscore into the current namespace, which can obviously pollute it. If __all__ is declared explicitly, import * only imports the members listed in __all__. And if __all__ is defined incorrectly, listing members that do not exist, an exception is raised explicitly rather than being silently ignored.
6.3 Things to keep in mind when defining __all__
As mentioned above, __all__ should be of type list
Do not generate __all__ dynamically, for example with a list comprehension. The whole point of __all__ is to define the public interface explicitly; if it is not written out as a literal, it loses its meaning.
Even with __all__ in place, you should not use from xxx import * in non-throwaway code, nor simulate Ruby's automatic import with metaprogramming tricks. Python is not Ruby: there is no Module member, and the module itself is what enforces namespace isolation. If you break through that layer and introduce lots of dynamic behaviour, the code running in production becomes full of uncertainty and debugging becomes very difficult.
Following the style recommended by PEP8, __all__ should be written below all import statements and above definitions of module members such as functions and constants.
If a module's exposed interface changes frequently, __all__ can be defined like this: | Python Code:
8 * 9
_ + 8
Explanation: In Python, the underscore naming rules are often quite confusing: single underscore, double underscore, double underscores both leading and trailing... What exactly are their roles and use cases?
1. Single underscore (_)
Normally, a single underscore (_) is used in the following 3 scenarios:
1.1 In the interpreter:
In this case, "_" holds the result of the last statement executed in an interactive interpreter session. This usage was first adopted by the standard CPython interpreter and later picked up by other interpreters as well.
End of explanation
for _ in range(1, 11):
print(_, end='、 ')
Explanation: 1.2 As a name:
This is somewhat related to the point above: here "_" is used as a throwaway name, so that anyone reading your code knows you assigned a specific name that will not be used again later. For example, in the following snippet you may not care about the actual loop counter value, so you can use "_".
End of explanation
from django.utils.translation import ugettext as _
from django.http import HttpResponse
def my_view(request):
output = _("Welcome to my site.")
return HttpResponse(output)
Explanation: 1.3 Internationalization:
You may also have seen "_" used as a function. In that case it is usually the name of the function that performs the lookup/translation between internationalized and localized strings, which seems to originate from and follow the corresponding C convention. For example, in the "Translation" section of the Django documentation you will find code like the following:
End of explanation
class A(object):
def _internal_use(self):
pass
def __method_name(self):
pass
print(dir(A()))
Explanation: As you can see, scenarios two and three can conflict with each other, so we should avoid using "_" as a throwaway name in any block of code that also uses "_" as the internationalization lookup/translation function.
2. Single leading underscore (e.g. _shahriar)
Programmers use a single leading underscore to mark a name as "private". It is something of a convention, so that other people (or you yourself) using this code know that a name starting with "_" is for internal use only. As the Python documentation puts it:
Names prefixed with an underscore "_" (e.g. _spam) should be treated as non-public parts of the API (whether functions, methods or data members). They should be considered an implementation detail and may be changed without notice.
As said above, this really is more of a convention, although it does mean something to the interpreter: if you use a wildcard import ("from somemodule import *"), names starting with "_" will not be imported unless the module's or package's __all__ list explicitly includes them. Note, however, that if you import the module with "import a_module", you can still reach such objects as a_module._some_var.
There is another, rarely needed use of a single leading underscore: an extension library written in C is sometimes named with a leading underscore and then wrapped by a Python module without the underscore. For example, the struct module is actually a Python wrapper around the C module _struct.
3. Double leading underscore (e.g. __shahriar)
A double leading underscore (__) in a name (specifically a method name) is not just a convention; it has a specific meaning to the interpreter. Python uses it to avoid name clashes with names defined by subclasses. The Python documentation points out that any identifier of the form "__spam" (at least two leading underscores, at most one trailing underscore) is textually replaced with "_classname__spam", where "classname" is the current class name with leading underscores stripped. For example:
End of explanation
import time
print(time.__name__)
Explanation: As expected, "_internal_use" is unchanged, while "__method_name" has been mangled into "_ClassName__method_name".
Private variables are converted to this long form (effectively made public) before code generation. The transformation inserts the class name, prefixed with an underscore character, at the front of the variable name. This is what is called private name mangling. As a consequence, if you create a subclass B of A, you will not easily be able to override A's "__method_name" method.
Whether a member starts with a single or a double underscore, the intent is the same: external developers should not use these member variables and functions directly. The double underscore simply enforces this more directly at the syntax level, although the member can still be reached as _ClassName__member. A single underscore can be more convenient for interactive debugging, so as long as everyone on the team agrees not to touch underscore-prefixed members directly, a single underscore may be the better choice.
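A minimal sketch (added here, reusing the class A defined above) of what the mangling means in practice:
class B(A):
    def __method_name(self):   # stored as _B__method_name, so A's method is NOT overridden
        pass
print('_A__method_name' in dir(B()))   # True: A's mangled method is still inherited
print('_B__method_name' in dir(B()))   # True: B's definition got its own mangled name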
4. Double leading and trailing underscores (e.g. __init__)
This style denotes Python's special method names. It is really just a convention that ensures the Python system does not clash with user-defined names. Typically you override these methods and implement the functionality you need inside them, so that Python can call them. For example, when defining a class you very often override the "__init__" method.
Names with double leading and trailing underscores are Python's "magic" objects, such as the class members __init__, __del__, __add__, __getitem__, or the module-level __file__, __name__, and so on. The official recommendation is to never apply this naming style to your own variables or functions and to use these names only as documented. You could invent your own special method names, but don't.
5. An aside: if __name__ == "__main__":
All Python modules are objects and have several useful attributes; you can use these attributes to conveniently test the modules you write.
Modules are objects, and every module has a built-in attribute __name__. The value of a module's __name__ depends on how the module is used. If you import the module, __name__ is typically the module's filename without the path or the file extension. But you can also run the module directly like a standalone program, in which case __name__ takes a special default value: "__main__".
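As an illustration (an added sketch, not from the original notebook), the usual pattern looks like this inside a module:
def main():
    print('running the module self-test')   # test suite / demo code goes here

if __name__ == "__main__":
    main()   # runs only when the module is executed directly, not when it is imported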
End of explanation
__all__ = [
"foo",
"bar",
"egg",
]
Explanation: Once you know this, you can design a test suite for your module inside the module itself by adding this if statement. When you run the module directly, __name__ is "__main__", so the test suite executes. When you import the module, __name__ is something else, so the test suite is skipped. This makes it much easier to develop and debug a new module before integrating it into a larger program.
6. Exposing interfaces with __all__
Python can expose an interface at the module level:
Doing so is often quite useful, since it provides a convention for stating which names are the public interface.
Unlike Ruby or Java, Python has no language-native visibility control; it relies on a set of conventions that everyone is expected to follow, for example that names starting with an underscore should be treated as invisible to the outside. Likewise, __all__ is a convention about a module's public interface; compared with underscores, __all__ provides a "whitelist" of the names to expose. Names that do not start with an underscore (for example, members imported into the current module from elsewhere) can be excluded in the same way.
6.1 Controlling the behaviour of from xxx import *
Using from xxx import * is of course discouraged in real code (it makes it hard to tell where a particular function or attribute comes from, and it makes debugging and refactoring harder), but it is still common for convenience when debugging in a console. If a module spam does not define __all__, then from spam import * imports all of spam's members whose names do not start with an underscore into the current namespace, which can obviously pollute it. If __all__ is declared explicitly, import * only imports the members listed in __all__. And if __all__ is defined incorrectly, listing members that do not exist, an exception is raised explicitly rather than being silently ignored.
6.3 Things to keep in mind when defining __all__
As mentioned above, __all__ should be of type list
Do not generate __all__ dynamically, for example with a list comprehension. The whole point of __all__ is to define the public interface explicitly; if it is not written out as a literal, it loses its meaning.
Even with __all__ in place, you should not use from xxx import * in non-throwaway code, nor simulate Ruby's automatic import with metaprogramming tricks. Python is not Ruby: there is no Module member, and the module itself is what enforces namespace isolation. If you break through that layer and introduce lots of dynamic behaviour, the code running in production becomes full of uncertainty and debugging becomes very difficult.
Following the style recommended by PEP8, __all__ should be written below all import statements and above definitions of module members such as functions and constants.
If a module's exposed interface changes frequently, __all__ can be defined like this:
End of explanation |
10,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A classifying extractor for OpenStreetMap OSM file objects
by [email protected], 2016-03-21.
Features:
Outputs three JSON files stored one record per line, ready to be used in Spark and read in directly with Spark's read.json().
This tool quickly classifies the contents of an osm file by tag and converts it straight into three JSON files (node/way/relation), stored line by line.
Notes:
Spark processes data line by line by default, so handling XML, and especially multi-line XML such as OpenStreetMap's OSM format, is awkward.
When the xml file is too large to fit entirely in memory, it has to be preprocessed into a line-oriented form and then loaded into Spark in a distributed way.
Future work:
1. Map the coordinates of each way's nd nodes to build the coordinate string of the geometry object.
2. Convert each geometry object to WKT format and store it in the JSON geometry field.
3. After splitting the data tables by region, convert them to GeoPandas and then further to shape files.
Fast XML-to-JSON conversion, using lxml for streaming processing.
Reading xml structured data recursively as a stream (low resource usage)
Step1: Run the osm xml-to-json conversion, extracting the three files in a single scan.
The tag argument in context = etree.iterparse(osmfile,tag=["node","way"]) must be given a value, otherwise every sub element that comes back is none.
Three open global files are used: fnode, fway, frelation
Step2: Run the conversion. | Python Code:
import os
import time
import json
from pprint import *
import lxml
from lxml import etree
import xmltodict, sys, gc
from pymongo import MongoClient
gc.enable() # Enable garbage collection
# Extract elements with the specified tags and write them to the corresponding json files.
def process_element(elem):
elem_data = etree.tostring(elem)
elem_dict = xmltodict.parse(elem_data,attr_prefix="",cdata_key="")
#print(elem_dict)
if (elem.tag == "node"):
elem_jsonStr = json.dumps(elem_dict["node"])
fnode.write(elem_jsonStr + "\n")
elif (elem.tag == "way"):
elem_jsonStr = json.dumps(elem_dict["way"])
fway.write(elem_jsonStr + "\n")
elif (elem.tag == "relation"):
elem_jsonStr = json.dumps(elem_dict["relation"])
frelation.write(elem_jsonStr + "\n")
# Iterate over all elements and process each one by calling process_element.
# Streaming iteration; func_element is the handler applied to every element.
def fast_iter(context, func_element, maxline):
placement = 0
try:
for event, elem in context:
placement += 1
            if (maxline > 0): # limit on the number of converted elements; used for spot checks when debugging large files
print(etree.tostring(elem))
if (placement >= maxline): break
            func_element(elem) # process each element by calling process_element
elem.clear()
while elem.getprevious() is not None:
del elem.getparent()[0]
except Exception as ex:
print(time.strftime(ISOTIMEFORMAT),", Error:",ex)
del context
Explanation: A classifying extractor for OpenStreetMap OSM file objects
by [email protected], 2016-03-21.
Features:
Outputs three JSON files stored one record per line, ready to be used in Spark and read in directly with Spark's read.json().
This tool quickly classifies the contents of an osm file by tag and converts it straight into three JSON files (node/way/relation), stored line by line.
Notes:
Spark processes data line by line by default, so handling XML, and especially multi-line XML such as OpenStreetMap's OSM format, is awkward.
When the xml file is too large to fit entirely in memory, it has to be preprocessed into a line-oriented form and then loaded into Spark in a distributed way.
Future work:
1. Map the coordinates of each way's nd nodes to build the coordinate string of the geometry object.
2. Convert each geometry object to WKT format and store it in the JSON geometry field.
3. After splitting the data tables by region, convert them to GeoPandas and then further to shape files.
Fast XML-to-JSON conversion, using lxml for streaming processing.
Reading xml structured data recursively as a stream (low resource usage): http://www.ibm.com/developerworks/xml/library/x-hiperfparse/
Library for converting XML strings to json objects: https://github.com/martinblech/xmltodict
xmltodict.parse() adds @ and # prefixes to the output field names, which causes problems in Spark queries and needs to be removed. The following settings take care of it:
xmltodict.parse(elem_data,attr_prefix="",cdata_key="")
Recovering from encoding problems and malformed xml files, as follows:
magical_parser = lxml.etree.XMLParser(encoding='utf-8', recover=True)
tree = etree.parse(StringIO(your_xml_string), magical_parser) #or pass in an open file object
First convert the element to a string, then build a dict from it, and then use json.dumps() to produce the JSON string.
elem_data = etree.tostring(elem)
elem_dict = xmltodict.parse(elem_data)
elem_jsonStr = json.dumps(elem_dict)
json.loads(elem_jsonStr) can then be used to create a json object that can be manipulated programmatically.
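As a small usage sketch (added here; it assumes a SparkSession named spark and the osmfile path defined later in this notebook), the generated line-delimited files can then be loaded directly:
nodes_df = spark.read.json(osmfile + "_node.json")
nodes_df.printSchema()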
End of explanation
#maxline = 0  # used for sampling while debugging: the maximum number of elements to convert; set to 0 to convert the whole file.
def transform(osmfile,maxline = 0):
ISOTIMEFORMAT="%Y-%m-%d %X"
print(time.strftime( ISOTIMEFORMAT),", Process osm XML...",osmfile," =>MaxLine:",maxline)
global fnode
global fway
global frelation
fnode = open(osmfile + "_node.json","w+")
fway = open(osmfile + "_way.json","w+")
frelation = open(osmfile + "_relation.json","w+")
context = etree.iterparse(osmfile,tag=["node","way","relation"])
fast_iter(context, process_element, maxline)
fnode.close()
fway.close()
frelation.close()
print(time.strftime( ISOTIMEFORMAT),", OSM to JSON, Finished.")
Explanation: Run the osm xml-to-json conversion, extracting the three files in a single scan.
The tag argument in context = etree.iterparse(osmfile,tag=["node","way"]) must be given a value, otherwise every sub element that comes back is none.
Three open global files are used: fnode, fway, frelation
End of explanation
# Name of the osm file to process; change as needed.
osmfile = '../data/osm/muenchen.osm'
transform(osmfile,0)
Explanation: Run the conversion.
End of explanation |
10,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Scikit-learn
Step1: Import the digits dataset (http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html) and show its attributes
Step2: Let's show what the digits look like...
Step3: Now, let's define a function that will plot a scatter with the two-dimensional points that will be obtained by a PCA transformation. Our data points will also be colored according to their classes. Recall that the target class will not be used to perform the transformation; we want to investigate if the distribution after PCA reveals the distribution of the different classes, and if they are clearly separable. We will use ten different colors for each of the digits, from 0 to 9.
Find components and plot first and second components
Step4: At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n number of components through the fit method, and can be used on new data to project it onto these components. In scikit-learn, we have various classes that implement different kinds of PCA decompositions. In our case, we will work with the PCA class from the sklearn.decomposition module. The most important parameter we can change is n_components, which allows us to specify the number of features that the obtained instances will have.
Step5: To finish, let us look at principal component transformations. We will take the principal components from the estimator by accessing the components attribute. Each of its components is a matrix that is used to transform a vector from the original space to the transformed space. In the scatter we previously plotted, we only took into account the first two components. | Python Code:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
Explanation: Learning Scikit-learn: Machine Learning in Python
IPython Notebook for Chapter 3: Unsupervised Learning - Principal Component Analysis
Principal Component Analysis (PCA) is useful for exploratory data analysis before building predictive models.
For our learning methods, PCA will allow us to reduce a high-dimensional space into a low-dimensional one while preserving as much variance as possible. We will use the handwritten digits recognition problem to show how it can be used
Start by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter. Show the versions we will be using (in case you have problems running the notebooks).
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
print digits.keys()
Explanation: Import the digits dataset (http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html) and show its attributes
End of explanation
n_row, n_col = 2, 5
def print_digits(images, y, max_n=10):
# set up the figure size in inches
fig = plt.figure(figsize=(2. * n_col, 2.26 * n_row))
i=0
while i < max_n and i < images.shape[0]:
p = fig.add_subplot(n_row, n_col, i + 1, xticks=[], yticks=[])
p.imshow(images[i], cmap=plt.cm.bone, interpolation='nearest')
# label the image with the target value
p.text(0, -1, str(y[i]))
i = i + 1
print_digits(digits.images, digits.target, max_n=10)
Explanation: Let's show what the digits look like...
End of explanation
def plot_pca_scatter():
colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray']
for i in xrange(len(colors)):
px = X_pca[:, 0][y_digits == i]
py = X_pca[:, 1][y_digits == i]
plt.scatter(px, py, c=colors[i])
plt.legend(digits.target_names)
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
Explanation: Now, let's define a function that will plot a scatter with the two-dimensional points that will be obtained by a PCA transformation. Our data points will also be colored according to their classes. Recall that the target class will not be used to perform the transformation; we want to investigate if the distribution after PCA reveals the distribution of the different classes, and if they are clearly separable. We will use ten different colors for each of the digits, from 0 to 9.
Find components and plot first and second components
End of explanation
from sklearn.decomposition import PCA
n_components = n_row * n_col # 10
estimator = PCA(n_components=n_components)
X_pca = estimator.fit_transform(X_digits)
plot_pca_scatter() # Note that we only plot the first and second principal component
Explanation: At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n number of components through the fit method, and can be used on new data to project it onto these components. In scikit-learn, we have various classes that implement different kinds of PCA decompositions. In our case, we will work with the PCA class from the sklearn.decomposition module. The most important parameter we can change is n_components, which allows us to specify the number of features that the obtained instances will have.
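A quick extra check (an added sketch, not in the original notebook): the fitted estimator's explained_variance_ratio_ attribute tells us how much of the total variance each component keeps.
print(estimator.explained_variance_ratio_)
print(estimator.explained_variance_ratio_[:2].sum())   # variance captured by the two plotted components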
End of explanation
def print_pca_components(images, n_col, n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(comp.reshape((8, 8)), interpolation='nearest')
plt.text(0, -1, str(i + 1) + '-component')
plt.xticks(())
plt.yticks(())
print_pca_components(estimator.components_[:n_components], n_col, n_row)
Explanation: To finish, let us look at principal component transformations. We will take the principal components from the estimator by accessing the components attribute. Each of its components is a matrix that is used to transform a vector from the original space to the transformed space. In the scatter we previously plotted, we only took into account the first two components.
End of explanation |
10,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Redcard Exploratory Data Analysis
This dataset is taken from a fantastic paper that looks at how analytical choices made by different data science teams, all trying to answer the same research question on the same dataset, affect the final outcome.
Many analysts, one dataset
Step1: About the Data
The dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website.
From a company for sports statistics, we obtained data and profile photos from all soccer players (N = 2,053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3,147) that these players played under in their professional career (see Figure 1). We created a dataset of player–referee dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other.
-- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit
Step2: Joining and further considerations | Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina'
from __future__ import absolute_import, division, print_function
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.pyplot import GridSpec
import seaborn as sns
import numpy as np
import pandas as pd
import os, sys
from tqdm import tqdm
import warnings
warnings.filterwarnings('ignore')
sns.set_context("poster", font_scale=1.3)
import missingno as msno
import pandas_profiling
from sklearn.datasets import make_blobs
import time
Explanation: Redcard Exploratory Data Analysis
This dataset is taken from a fantastic paper that looks at how analytical choices made by different data science teams, all trying to answer the same research question on the same dataset, affect the final outcome.
Many analysts, one dataset: Making transparent how variations in analytical choices affect results
The data can be found here.
The Task
Do an Exploratory Data Analysis on the redcard dataset, keeping in mind that the question is the following: Are soccer referees more likely to give red cards to dark-skin-toned players than light-skin-toned players?
Before plotting/joining/doing something, have a question or hypothesis that you want to investigate
Draw a plot of what you want to see on paper to sketch the idea
Write it down, then make the plan on how to get there
How do you know you aren't fooling yourself
What else can I check if this is actually true?
What evidence could there be that it's wrong?
End of explanation
# Uncomment one of the following lines and run the cell:
# df = pd.read_csv("redcard.csv.gz", compression='gzip')
# df = pd.read_csv("https://github.com/cmawer/pycon-2017-eda-tutorial/raw/master/data/redcard/redcard.csv.gz", compression='gzip')
def save_subgroup(dataframe, g_index, subgroup_name, prefix='raw_'):
save_subgroup_filename = "".join([prefix, subgroup_name, ".csv.gz"])
dataframe.to_csv(save_subgroup_filename, compression='gzip', encoding='UTF-8')
test_df = pd.read_csv(save_subgroup_filename, compression='gzip', index_col=g_index, encoding='UTF-8')
# Test that we recover what we send in
if dataframe.equals(test_df):
print("Test-passed: we recover the equivalent subgroup dataframe.")
else:
print("Warning -- equivalence test!!! Double-check.")
def load_subgroup(filename, index_col=[0]):
return pd.read_csv(filename, compression='gzip', index_col=index_col)
clean_players = load_subgroup("cleaned_players.csv.gz")
players = load_subgroup("raw_players.csv.gz", )
countries = load_subgroup("raw_countries.csv.gz")
referees = load_subgroup("raw_referees.csv.gz")
agg_dyads = pd.read_csv("raw_dyads.csv.gz", compression='gzip', index_col=[0, 1])
# tidy_dyads = load_subgroup("cleaned_dyads.csv.gz")
tidy_dyads = pd.read_csv("cleaned_dyads.csv.gz", compression='gzip', index_col=[0, 1])
Explanation: About the Data
The dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website.
From a company for sports statistics, we obtained data and profile photos from all soccer players (N = 2,053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3,147) that these players played under in their professional career (see Figure 1). We created a dataset of player–referee dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other.
-- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit
| Variable Name: | Variable Description: |
| -- | -- |
| playerShort | short player ID |
| player | player name |
| club | player club |
| leagueCountry | country of player club (England, Germany, France, and Spain) |
| height | player height (in cm) |
| weight | player weight (in kg) |
| position | player position |
| games | number of games in the player-referee dyad |
| goals | number of goals in the player-referee dyad |
| yellowCards | number of yellow cards player received from the referee |
| yellowReds | number of yellow-red cards player received from the referee |
| redCards | number of red cards player received from the referee |
| photoID | ID of player photo (if available) |
| rater1 | skin rating of photo by rater 1 |
| rater2 | skin rating of photo by rater 2 |
| refNum | unique referee ID number (referee name removed for anonymizing purposes) |
| refCountry | unique referee country ID number |
| meanIAT | mean implicit bias score (using the race IAT) for referee country |
| nIAT | sample size for race IAT in that particular country |
| seIAT | standard error for mean estimate of race IAT |
| meanExp | mean explicit bias score (using a racial thermometer task) for referee country |
| nExp | sample size for explicit bias in that particular country |
| seExp | standard error for mean estimate of explicit bias measure |
End of explanation
!conda install pivottablejs -y
from pivottablejs import pivot_ui
clean_players = load_subgroup("cleaned_players.csv.gz")
temp = tidy_dyads.reset_index().set_index('playerShort').merge(clean_players, left_index=True, right_index=True)
temp.shape
# This does not work on Azure notebooks out of the box
# pivot_ui(temp[['skintoneclass', 'position_agg', 'redcard']], )
# How many games has each player played in?
games = tidy_dyads.groupby(level=1).count()
sns.distplot(games);
(tidy_dyads.groupby(level=0)
.count()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total games refereed'})).head()
(tidy_dyads.groupby(level=0)
.sum()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total redcards given'})).head()
(tidy_dyads.groupby(level=1)
.sum()
.sort_values('redcard', ascending=False)
.rename(columns={'redcard':'total redcards received'})).head()
tidy_dyads.head()
tidy_dyads.groupby(level=0).size().sort_values(ascending=False)
total_ref_games = tidy_dyads.groupby(level=0).size().sort_values(ascending=False)
total_player_games = tidy_dyads.groupby(level=1).size().sort_values(ascending=False)
total_ref_given = tidy_dyads.groupby(level=0).sum().sort_values(ascending=False,by='redcard')
total_player_received = tidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard')
sns.distplot(total_player_received, kde=False);
sns.distplot(total_ref_given, kde=False);
tidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard').head()
tidy_dyads.sum(), tidy_dyads.count(), tidy_dyads.sum()/tidy_dyads.count()
player_ref_game = (tidy_dyads.reset_index()
.set_index('playerShort')
.merge(clean_players,
left_index=True,
right_index=True)
)
player_ref_game.head()
player_ref_game.shape
bootstrap = pd.concat([player_ref_game.sample(replace=True,
n=10000).groupby('skintone').mean()
for _ in range(100)])
ax = sns.regplot(bootstrap.index.values,
y='redcard',
data=bootstrap,
lowess=True,
scatter_kws={'alpha':0.4,},
x_jitter=(0.125 / 4.0))
ax.set_xlabel("Skintone");
Explanation: Joining and further considerations
End of explanation |
10,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
N.B., Cannot use 32-bit programmable interrupt timer (PIT) to trigger periodic DMA due to hardware bug.
See here.
The solution shown below uses the 16-bit programmable delay block (PDB).
Disadvantages to using PDB
Step1: Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
example below.
See diagram below.
Channel configuration
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
N.B., Only the trigger for the first ADC channel is an explicit
software trigger. All remaining triggers occur through minor-loop DMA
channel linking from channel $ii$ to channel $i$.
After each scan through all ADC channels is complete, the ADC readings are
scattered using the selected "scatter" DMA channel through a major-loop link
between DMA channel $ii$ and the "scatter" channel.
<img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" />
Step2: Test periodic ADC scan using PDB
Step3: Configure ADC sample rate, etc.
Step4: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
Step5: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
Step6: Allocate and initialize device arrays
SC1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
Step7: Configure DMA channel $i$
Step8: Configure DMA channel $ii$
Step9: Trigger sample scan across selected ADC channels
Step10: Set DMA channel $i$ to be triggered by PDB (when PDB enabled)
Step11: Seems to work at 100kHz..... | Python Code:
import pandas as pd
def get_pdb_divide_params(frequency, F_BUS=int(48e6)):
mult_factor = np.array([1, 10, 20, 40])
prescaler = np.arange(8)
clock_divide = (pd.DataFrame([[i, m, p, m * (1 << p)]
for i, m in enumerate(mult_factor)
for p in prescaler],
columns=['mult_', 'mult_factor',
'prescaler', 'combined'])
.drop_duplicates(subset=['combined'])
.sort_values('combined', ascending=True))
clock_divide['clock_mod'] = (F_BUS / frequency
/ clock_divide.combined).astype(int)
return clock_divide.loc[clock_divide.clock_mod <= 0xffff]
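# Illustrative usage (added sketch, not an original cell): list candidate
# prescaler/mult/modulus settings for a 150 kHz PDB trigger rate; rows are
# sorted so the first one gives the finest counter resolution.
print(get_pdb_divide_params(150e3).head())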
Explanation: N.B., Cannot use 32-bit programmable interrupt timer (PIT) to trigger periodic DMA due to hardware bug.
See here.
The solution shown below uses the 16-bit programmable delay block (PDB).
Disadvantages to using PDB:
Lower resolution counter compared to PIT (16-bit vs 32-bit).
In practice, this limits the maximum timer period to about 1.5 seconds.
There is only one PDB. Using it for ADC means it cannot be used for
another task. Note that there are four different programmable interrupt
timers.
Advantages to using PDB:
It works!
End of explanation
import arduino_helpers.hardware.teensy as teensy
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
import teensy_minimal_rpc.SIM as SIM
import teensy_minimal_rpc.PIT as PIT
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
proxy.pin_mode(teensy.LED_BUILTIN, 1)
from IPython.display import display
proxy.update_sim_SCGC6(SIM.R_SCGC6(PDB=True))
sim_scgc6 = SIM.R_SCGC6.FromString(proxy.read_sim_SCGC6().tostring())
display(resolve_field_values(sim_scgc6)[['full_name', 'value']].T)
Explanation: Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
After each scan, use DMA scatter chain to write the converted ADC values to a
separate output array for each ADC channel. The length of the output array to
allocate for each ADC channel is determined by the sample_count in the
example below.
See diagram below.
Channel configuration ##
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
loading of the next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
N.B., Only the trigger for the first ADC channel is an explicit
software trigger. All remaining triggers occur through minor-loop DMA
channel linking from channel $ii$ to channel $i$.
After each scan through all ADC channels is complete, the ADC readings are
scattered using the selected "scatter" DMA channel through a major-loop link
between DMA channel $ii$ and the "scatter" channel.
<img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" />
Device
Connect to device
End of explanation
dma_channel_scatter = 0
dma_channel_i = 1
dma_channel_ii = 2
import numpy as np
PDB0_IDLY = 0x4003600C # Interrupt Delay Register
PDB0_SC = 0x40036000 # Status and Control Register
PDB0_MOD = 0x40036004 # Modulus Register
PDB_SC_PDBEIE = 0x00020000 # Sequence Error Interrupt Enable
PDB_SC_SWTRIG = 0x00010000 # Software Trigger
PDB_SC_DMAEN = 0x00008000 # DMA Enable
PDB_SC_PDBEN = 0x00000080 # PDB Enable
PDB_SC_PDBIF = 0x00000040 # PDB Interrupt Flag
PDB_SC_PDBIE = 0x00000020 # PDB Interrupt Enable.
PDB_SC_CONT = 0x00000002 # Continuous Mode Enable
PDB_SC_LDOK = 0x00000001 # Load OK
def PDB_SC_TRGSEL(n): return (((n) & 15) << 8) # Trigger Input Source Select
def PDB_SC_PRESCALER(n): return (((n) & 7) << 12) # Prescaler Divider Select
def PDB_SC_MULT(n): return (((n) & 3) << 2) # Multiplication Factor
def PDB_SC_LDMOD(n): return (((n) & 3) << 18) # Load Mode Select
# PDB0_IDLY = 1; // the pdb interrupt happens when IDLY is equal to CNT+1
proxy.mem_cpy_host_to_device(PDB0_IDLY, np.uint32(1).tostring())
# software trigger enable PDB continuous
PDB_CONFIG = (PDB_SC_TRGSEL(15) | PDB_SC_PDBEN | PDB_SC_CONT | PDB_SC_LDMOD(0))
clock_divide = get_pdb_divide_params(1).iloc[0]
PDB0_SC_ = (PDB_CONFIG | PDB_SC_PRESCALER(clock_divide.prescaler) |
PDB_SC_MULT(clock_divide.mult_) |
PDB_SC_DMAEN | PDB_SC_LDOK) # load all new values
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
# PDB0_MOD = (uint16_t)(mod-1);
proxy.mem_cpy_host_to_device(PDB0_MOD, np.uint32(clock_divide.clock_mod).tostring())
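# Added sanity check (illustrative): the effective PDB trigger frequency implied
# by the divider settings chosen above is F_BUS / (mult * 2**prescaler * mod).
f_pdb = 48e6 / (clock_divide.combined * clock_divide.clock_mod)
print('Approximate PDB trigger frequency: %.3f Hz' % f_pdb)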
Explanation: Test periodic ADC scan using PDB
End of explanation
# Set ADC parameters
proxy.setAveraging(4, teensy.ADC_0)
proxy.setResolution(10, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
Explanation: Configure ADC sample rate, etc.
End of explanation
DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(dma_channel_ii,
DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
TRIG=False,
ENBL=True))
proxy.enableDMA(teensy.ADC_0)
proxy.DMA_registers().loc['']
Explanation: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
End of explanation
import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
# The number of samples to record for each ADC channel.
sample_count = 32
teensy_analog_channels = ['A0', 'A1', 'A0', 'A3', 'A0']
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)])
for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[teensy_analog_channels].tolist(), dtype='uint32')
Explanation: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
End of explanation
proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))
# Allocate source array
samples_addr = proxy.mem_alloc(sample_count * N)
tcds_addr = proxy.mem_aligned_alloc(32, sample_count * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(sample_count)]
hw_tcd_addrs = [hw_tcds_addr + 32 * i for i in xrange(sample_count)]
# Fill result array with zeros
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=channel_sc1as.size * 2,
SADDR=int(adc_result_addr),
SOFF=2,
SLAST=-channel_sc1as.size * 2,
DADDR=int(samples_addr),
DOFF=2 * sample_count,
DLASTSGA=int(tcd_addrs[1]),
CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))
# Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)
# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(sample_count):
tcd_i = tcd0.copy()
tcd_i['SADDR'] = adc_result_addr
tcd_i['DADDR'] = samples_addr + 2 * i
tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
tcd_i['CSR'] |= (1 << 4)
if i == (sample_count - 1): # Last sample, so trigger major loop interrupt
print 'Enable major loop interrupt for sample %d' % i
tcd_i['CSR'] |= (1 << 1) # Set `INTMAJOR` (21.3.29/426)
proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())
# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcd_addrs[dma_channel_scatter],
tcd0.tostring())
proxy.attach_dma_interrupt(dma_channel_scatter)
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) *
channel_sc1as.dtype.itemsize).view('uint32')
Explanation: Allocate and initialize device arrays
SC1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
End of explanation
ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(adc_sda1s_addr),
SOFF=4,
SLAST=-channel_sc1as.size * 4,
DADDR=int(ADC0_SC1A),
DOFF=0,
DLASTSGA=0,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(dma_channel_i, sda1_tcd_msg)
Explanation: Configure DMA channel $i$
End of explanation
ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=2,
SADDR=ADC0_RA,
SOFF=0,
SLAST=0,
DADDR=int(adc_result_addr),
DOFF=2,
DLASTSGA=-channel_sc1as.size * 2,
CSR=DMA.R_TCD_CSR(START=0, DONE=False,
MAJORELINK=True,
MAJORLINKCH=dma_channel_scatter))
proxy.update_dma_TCD(dma_channel_ii, tcd_msg)
# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=dma_channel_ii))
Explanation: Configure DMA channel $ii$
End of explanation
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
print 'ADC results:'
for i in xrange(sample_count):
proxy.update_dma_registers(DMA.Registers(SSRT=dma_channel_i))
    # Display converted ADC values (one value per channel in `channel_sc1as` list).
print ' Iteration %s:' % i, proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print ''
Explanation: Trigger sample scan across selected ADC channels
End of explanation
proxy.update_dma_mux_chcfg(dma_channel_i,
DMA.MUX_CHCFG(SOURCE=48,
TRIG=False,
ENBL=True))
proxy.update_dma_registers(DMA.Registers(SERQ=dma_channel_i))
%matplotlib inline
PDB0_SC_ = 0
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
proxy.mem_cpy_device_to_host(PDB0_SC, 4)
Explanation: Set DMA channel $i$ to be triggered by PDB (when PDB enabled)
End of explanation
# Set sampling frequency
f_sample = 150e3
# Determine timing parameters to meet specified sampling frequency.
clock_divide = get_pdb_divide_params(f_sample).iloc[0]
# Configure Programmable Delay Block (PDB) register state for sampling frequency.
PDB0_SC_ = (PDB_CONFIG | PDB_SC_PRESCALER(clock_divide.prescaler) |
PDB_SC_MULT(clock_divide.mult_) |
PDB_SC_DMAEN | PDB_SC_LDOK) # load all new values
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
PDB0_SC_ = (PDB_CONFIG | PDB_SC_PRESCALER(clock_divide.prescaler) |
PDB_SC_DMAEN | PDB_SC_MULT(clock_divide.mult_) |
PDB_SC_SWTRIG) # start the counter!
# Copy configured PDB register state to device hardware register.
proxy.mem_cpy_host_to_device(PDB0_SC, np.uint32(PDB0_SC_).tostring())
# **N.B.,** Timer will be stopped by the scatter DMA channel major loop interrupt
# handler after `sample_count` samples have been collected.
print 'Samples by channel:'
device_dst_data = proxy.mem_cpy_device_to_host(samples_addr, sample_count * N)
df_adc_results = pd.DataFrame(device_dst_data.view('uint16').reshape(-1, sample_count).T,
columns=teensy_analog_channels)
df_adc_results.plot(ylim=(-5, 1030))
# df_adc_results
proxy.last_dma_channel_done()
proxy.DMA_registers().loc['']
Explanation: Seems to work at 100kHz.....
End of explanation |
10,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Guide
Toytree is a Python tree plotting library designed for use inside
jupyter notebooks. In fact, this entire tutorial was created using notebooks, and assumes that you are following along in a notebook of your own. To begin, we will import toytree, and the plotting library it is built on, toyplot, as well as numpy for generating some numerical data.
Step1: Load and draw your first tree
The main Class object in toytree is a ToyTree, which provides plotting functionality in addition to a number of useful functions and attributes for returning values and statistics about trees. As we'll see below, you can generate a ToyTree object in many ways, but generally it is done by reading in a newick formatted string of text. The example below shows the simplest way to load a ToyTree which is to use the toytree.tree() convenience function to parse a file, URL, or string.
Step2: Parsing Newick/Nexus data
ToyTrees can be flexibly loaded from a range of text formats. Below are two newick strings in different tree_formats. The first has edge lengths and support values, the second has edge-lengths and node-labels. These are two different ways of writing tree data in a serialized format. Format 0 expects the internal node values to be integers or floats to represent support values, format 1 expects internal node values to be strings as node labels.
Step4: To parse either format you can tell toytree the format of the newick string following the tree parsing formats in ete. The default option, and most common format is 0. If you don't enter a tree_format argument the default format will usually parse it just fine. Toytree can also parse extended newick format (nhx) files, which store additional metadata, as well as mrbayes formatted files (tree_format=10) which are a variant of NHX. Any of these formats can be parsed from a NEXUS file automatically.
Step5: Accessing tree data
You can use tab-completion by typing the name of the tree variable (e.g., rtre below) followed by a dot and then pressing <tab> to see the many attributes of ToyTrees. Below I print a few of them as examples.
Step6: Tree Classes
The main Class objects in toytree exist as a nested hierarchy. The core of any tree is the TreeNode object, which stores the tree structure in memory and allows fast traversal over nodes of the tree to describe its structure. This object is wrapped inside of ToyTree objects, which provide convenient access to TreeNodes while also providing plotting and tree modification functions. And multiple ToyTrees can be grouped together into MultiTree objects, which are useful for iterating over multiple trees, or for generating plots that overlay and compare trees.
The underlying TreeNode object of Toytrees will be familiar to users of the ete3 Python library, since it is pretty much a stripped-down forked version of their TreeNode class object. This is useful since ete has great documentation. You can access the TreeNode of any ToyTree using its .treenode attribute, like below. Beginner toytree users are unlikely to need to access TreeNode objects directly, and instead will mostly access the tree structure through ToyTree objects.
Step7: Drawing trees
Step8: Drawing trees
Step9: Drawing trees
Step10: In the example above the labels on each node indicate their "idx" value, which is simply a unique identifier given to every node. We could alternatively select one of the features that you could see listed on the node when you hovered over it and toytree will display that value on the node instead. In the example below we plot the node support values. You'll notice that in this context no values were shown for the tip nodes, but instead only for internal nodes. More on this below.
Step11: You can also create plots with the nodes shown, but without node labels. This is often most useful when combined with mapping different colors to nodes to represent different classes of data. In the example below we pass a single color and size for all nodes.
Step12: You can draw values on all the nodes, or only on non-tip nodes, or only on internal nodes (not tips or root). Use the .get_node_values function of ToyTrees to build a list of values for plotting on the tree. Because the data are extracted from the same tree they will be plotted on the values will always be ordered properly.
Step13: Because .get_node_values() returns values in node plot order, it is especially useful for building lists of values for color mapping on nodes. Here we map different colors to nodes depending on whether the support value is 100 or not.
Step14: Drawing
Step15: HTML rendering is the default format. This will save the figure as a vector graphic (SVG) wrapped in HTML with additional optional javascript wrapping for interactive features. You can share the file with others and anyone can open it in a browser. You can embed it on your website, or even display it in emails!
Step16: Optional formats
Step17: Despite the advantages of working with the SVG or HTML formats (e.g., vector graphics and interactive pop-ups), if you're like me you still sometimes love to have an old-fashioned PDF. Again, you can import this from toyplot.
Step18: Drawing
Step19: The Coordinates
Toytrees drawings are designed to use a set coordinate space within the axes to make it easy to situate additional plots to align with tree drawings. Regardless of whether the tree drawing is oriented 'right' or 'down' the farthest tip of the tree (not tip label but tip) will align at the zero-axis. For right-facing trees this means at x=0, for down-facing trees this means y=0. On the other axis, tree tips will be spaced from zero to ntips with a unit of 1 between each tip. For tips on aligning additional plotting methods (barplots, scatterplots, etc.) with toytree drawings see the Cookbook gallery. Below I add a grid to overlay tree plots in both orientations to highlight the coordinate space. | Python Code:
import toytree # a tree plotting library
import toyplot # a general plotting library
import numpy as np # numerical library
print(toytree.__version__)
print(toyplot.__version__)
print(np.__version__)
Explanation: Quick Guide
Toytree is a Python tree plotting library designed for use inside
jupyter notebooks. In fact, this entire tutorial was created using notebooks, and assumes that you are following along in a notebook of your own. To begin, we will import toytree, and the plotting library it is built on, toyplot, as well as numpy for generating some numerical data.
End of explanation
# load a toytree from a newick string at a URL
tre = toytree.tree("https://eaton-lab.org/data/Cyathophora.tre")
# root and draw the tree (more details on this coming up...)
rtre = tre.root(wildcard="prz")
rtre.draw(tip_labels_align=True);
Explanation: Load and draw your first tree
The main Class object in toytree is a ToyTree, which provides plotting functionality in addition to a number of useful functions and attributes for returning values and statistics about trees. As we'll see below, you can generate a ToyTree object in many ways, but generally it is done by reading in a newick formatted string of text. The example below shows the simplest way to load a ToyTree which is to use the toytree.tree() convenience function to parse a file, URL, or string.
End of explanation
# newick with edge-lengths & support values
newick = "((a:1,b:1)90:3,(c:3,(d:1, e:1)100:2)100:1)100;"
tre0 = toytree.tree(newick, tree_format=0)
# newick with edge-lengths & string node-labels
newick = "((a:1,b:1)A:3,(c:3,(d:1, e:1)B:2)C:1)root;"
tre1 = toytree.tree(newick, tree_format=1)
Explanation: Parsing Newick/Nexus data
ToyTrees can be flexibly loaded from a range of text formats. Below are two newick strings in different tree_formats. The first has edge lengths and support values, the second has edge-lengths and node-labels. These are two different ways of writing tree data in a serialized format. Format 0 expects the internal node values to be integers or floats to represent support values, format 1 expects internal node values to be strings as node labels.
End of explanation
# parse an NHX format string with node supports and names
nhx = "((a:3[&&NHX:name=a:support=100],b:2[&&NHX:name=b:support=100]):4[&&NHX:name=ab:support=60],c:5[&&NHX:name=c:support=100]);"
ntre = toytree.tree(nhx)
# parse a mrbayes format file with NHX-like node and edge info
mb = "((a[&prob=100]:0.1[&length=0.1],b[&prob=100]:0.2[&length=0.2])[&prob=90]:0.4[&length=0.4],c[&prob=100]:0.6[&length=0.6]);"
mtre = toytree.tree(mb, tree_format=10)
# parse a NEXUS formatted file containing a tree of any supported format
nex = """
#NEXUS
begin trees;
translate
1 apple,
2 blueberry,
3 cantaloupe,
4 durian,
;
tree tree0 = [&U] ((1,2),(3,4));
end;
"""
xtre = toytree.tree(nex)
Explanation: To parse either format you can tell toytree the format of the newick string following the tree parsing formats in ete. The default option, and most common format is 0. If you don't enter a tree_format argument the default format will usually parse it just fine. Toytree can also parse extended newick format (nhx) files, which store additional metadata, as well as mrbayes formatted files (tree_format=10) which are a variant of NHX. Any of these formats can be parsed from a NEXUS file automatically.
End of explanation
rtre.ntips
rtre.nnodes
tre.is_rooted(), rtre.is_rooted()
rtre.get_tip_labels()
rtre.get_edges()
Explanation: Accessing tree data
You can use tab-completion by typing the name of the tree variable (e.g., rtre below) followed by a dot and then pressing <tab> to see the many attributes of ToyTrees. Below I print a few of them as examples.
End of explanation
# a TreeNode object is contained within every ToyTree at .tree
tre.treenode
# a ToyTree object
toytree.tree("((a, b), c);")
# a MultiTree object
toytree.mtree([tre, tre, tre])
Explanation: Tree Classes
The main Class objects in toytree exist as a nested hierarchy. The core of any tree is the TreeNode object, which stores the tree structure in memory and allows fast traversal over nodes of the tree to describe its structure. This object is wrapped inside of ToyTree objects, which provide convenient access to TreeNodes while also providing plotting and tree modification functions. And multiple ToyTrees can be grouped together into MultiTree objects, which are useful for iterating over multiple trees, or for generating plots that overlay and compare trees.
The underlying TreeNode object of Toytrees will be familiar to users of the ete3 Python library, since it is pretty much a stripped-down forked version of their TreeNode class object. This is useful since ete has great documentation. You can access the TreeNode of any ToyTree using its .treenode attribute, like below. Beginner toytree users are unlikely to need to access TreeNode objects directly, and instead will mostly access the tree structure through ToyTree objects.
End of explanation
rtre.draw()
# the semicolon hides the returned text of the Canvas and Cartesian objects
rtre.draw();
# or, we can store them as variables (this allows more editing on them later)
canvas, axes, mark = rtre.draw()
Explanation: Drawing trees: basics
When you call .draw() on a tree it returns three objects, a Canvas, a Cartesian axes object, and a Mark. This follows the design principle of the toyplot plotting library on which toytree is based. The Canvas describes the plot space, and the Cartesian coordinates define how to project points onto that space. One canvas can have multiple cartesian coordinates, and each cartesian object can have multiple Marks. This will be demonstrated more later.
As you will see below, I end many toytree drawing commands with a semicolon (;), this simply hides the printed return statement showing that the Canvas and Cartesian objects were returned. The Canvas will automatically render in the cell below the plot even if you do not save the return Canvas as a variable. Below I do not use a semicolon and so the three returned objects are shown as text (e.g., <toyplot.canvas.Canvas...>), and the plot is displayed.
End of explanation
# drawing with pre-built tree_styles
rtre.draw(tree_style='n'); # normal-style
rtre.draw(tree_style='d'); # dark-style
# 'ts' is also a shortcut for tree_style
rtre.draw(ts='o'); # umlaut-style
# define a style dictionary
mystyle = {
"layout": 'd',
"edge_type": 'p',
"edge_style": {
"stroke": toytree.colors[2],
"stroke-width": 2.5,
},
"tip_labels_align": True,
"tip_labels_colors": toytree.colors[0],
"tip_labels_style": {
"font-size": "10px"
},
"node_labels": False,
"node_sizes": 8,
"node_colors": toytree.colors[2],
}
# use your custom style dictionary in one or more tree drawings
rtre.draw(height=400, **mystyle);
Explanation: Drawing trees: styles
There are innumerous ways in which to style ToyTree drawings. We provide a number of pre-built tree_styles (normal, dark, coalescent, multitree), but users can also create their own style dictionaries that can be easily reused. Below are some examples. You can use tab-completion within the draw function to see the docstring for more details on available arguments to toggle, or you can see which styles are available on ToyTrees by accessing their .style dictionary. See the Styling chapter for more details.
End of explanation
# hover over nodes to see pop-up elements
rtre.draw(height=350, node_hover=True, node_sizes=10, tip_labels_align=True);
Explanation: Drawing trees: nodes
Plotting node values on a tree is a useful way of representing additional information about trees. Toytree tries to make this process fool-proof, in the sense that the data you plot on nodes will always be the correct data associated with that node. This is done through simple shortcut methods for plotting node features, as well as a convenience function called .get_node_values() that draws the values explicitly from the same tree structure that is being plotted (this avoids making a list of values from a tree and then plotting them on that tree only to find that a the order of tips or nodes in the tree has changed.) Finally, toytree also provides interactive features that allow you to explore many features of your data by simply hovering over nodes with your cursor. This is made possible by the HTML+JS framework in which toytrees are displayed in jupyter notebooks, or in web-pages.
End of explanation
rtre.draw(node_labels='support', node_sizes=15);
Explanation: In the example above the labels on each node indicate their "idx" value, which is simply a unique identifier given to every node. We could alternatively select one of the features that you could see listed on the node when you hovered over it and toytree will display that value on the node instead. In the example below we plot the node support values. You'll notice that in this context no values were shown for the tip nodes, but instead only for internal nodes. More on this below.
End of explanation
# You can do the same without printing the 'idx' label on nodes.
rtre.draw(
node_labels=None,
node_sizes=10,
node_colors='grey'
);
Explanation: You can also create plots with the nodes shown, but without node labels. This is often most useful when combined with mapping different colors to nodes to represent different classes of data. In the example below we pass a single color and size for all nodes.
End of explanation
tre0.get_node_values("support", show_root=1, show_tips=1)
tre0.get_node_values("support", show_root=1, show_tips=0)
tre0.get_node_values("support", show_root=0, show_tips=0)
# show support values
tre0.draw(
node_labels=tre0.get_node_values("support", 0, 0),
node_sizes=20,
);
# show support values
tre0.draw(
node_labels=tre0.get_node_values("support", 1, 1),
node_sizes=20,
);
Explanation: You can draw values on all the nodes, or only on non-tip nodes, or only on internal nodes (not tips or root). Use the .get_node_values function of ToyTrees to build a list of values for plotting on the tree. Because the data are extracted from the same tree they will be plotted on the values will always be ordered properly.
End of explanation
# build a color list in node plot order with different values based on support
colors = [
toytree.colors[0] if i==100 else toytree.colors[1]
for i in rtre.get_node_values('support', 1, 1)
]
# You can do the same without printing the 'idx' label on nodes.
rtre.draw(
node_sizes=10,
node_colors=colors
);
Explanation: Because .get_node_values() returns values in node plot order, it is especially useful for building lists of values for color mapping on nodes. Here we map different colors to nodes depending on whether the support value is 100 or not.
End of explanation
# draw a plot and store the Canvas object to a variable
canvas, axes, mark = rtre.draw(width=400, height=300);
Explanation: Drawing: saving figures
Toytree drawings can be saved to disk using the render functions of toyplot. This is where it is useful to store the Canvas object as a variable when it is returned during a toytree drawing. You can save toyplot figures in a variety of formats, including HTML (which is actually an SVG figures wrapped in HTML with addition javascript to provide interactivity); or SVG, PDF, and PNG.
End of explanation
# for sharing through web-links (or even email!) html is great!
toyplot.html.render(canvas, "/tmp/tree-plot.html")
Explanation: HTML rendering is the default format. This will save the figure as a vector graphic (SVG) wrapped in HTML with additional optional javascript wrapping for interactive features. You can share the file with others and anyone can open it in a browser. You can embed it on your website, or even display it in emails!
End of explanation
# for creating scientific figures SVG is often the most useful format
import toyplot.svg
toyplot.svg.render(canvas, "/tmp/tree-plot.svg")
Explanation: Optional formats: If you want to do additional styling of your figures in Illustrator or InkScape (recommended) then SVG is likely your best option. You can save figures in SVG by simply importing this as an additional option from toyplot.
End of explanation
import toyplot.pdf
toyplot.pdf.render(canvas, "/tmp/tree-plot.pdf")
Explanation: Despite the advantages of working with the SVG or HTML formats (e.g., vector graphics and interactive pop-ups), if you're like me you still sometimes love to have an old-fashioned PDF. Again, you can import this from toyplot.
End of explanation
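# Added example: a raster (PNG) copy can also be rendered with toyplot; note
# that the toyplot.png backend may require the ghostscript dependency.
import toyplot.png
toyplot.png.render(canvas, "/tmp/tree-plot.png")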
# set dimensions of the canvas
canvas = toyplot.Canvas(width=700, height=250)
# dissect canvas into multiple cartesian areas (x1, x2, y1, y2)
ax0 = canvas.cartesian(bounds=('10%', '45%', '10%', '90%'))
ax1 = canvas.cartesian(bounds=('55%', '90%', '10%', '90%'))
# call draw with the 'axes' argument to pass it to a specific cartesian area
style = {
"tip_labels_align": True,
"tip_labels_style": {
"font-size": "9px"
},
}
rtre.draw(axes=ax0, **style);
rtre.draw(axes=ax1, tip_labels_colors='indigo', **style);
# hide the axes (e.g, ticks and splines)
ax0.show=False
ax1.show=False
Explanation: Drawing: The Canvas, Axes, and coordinates
When you call the toytree.draw() function it returns two Toyplot objects which are used to display the figure. The first is the Canvas, which is the HTML element that holds the figure, and the second is a Cartesian axes object, which represent the coordinates for the plot. You can store these objects when they are returned by the draw() function to further manipulate the plot. Storing the Canvas is necessary in order to save the plot.
The Canvas and Axes
If you wish to combine multiple toytree figures into a single figure then it is easiest to first create instances of the toyplot Canvas and Axes objects and then to add the toytree drawing to this plot by using the .draw(axes=axes) argument. In the example below we first define the Canvas size, then define two coordinate axes inside of this Canvas, and then we pass these coordinate axes objects to two separate toytree drawings.
End of explanation
# store the returned Canvas and Axes objects
canvas, axes, mark = rtre.draw(
width=300,
height=300,
tip_labels_align=True,
tip_labels=False,
)
# show the axes coordinates
axes.show = True
axes.x.ticks.show = True
axes.y.ticks.show = True
# overlay a grid
axes.hlines(np.arange(0, 13, 2), style={"stroke": "red", "stroke-dasharray": "2,4"})
axes.vlines(0, style={"stroke": "blue", "stroke-dasharray": "2,4"});
# store the returned Canvas and Axes objects
canvas, axes, mark = rtre.draw(
width=300,
height=300,
tip_labels=False,
tip_labels_align=True,
layout='d',
)
# show the axes coordinates
axes.show = True
axes.x.ticks.show = True
axes.y.ticks.show = True
# overlay a grid
axes.vlines(np.arange(0, 13, 2), style={"stroke": "red", "stroke-dasharray": "2,4"})
axes.hlines(0, style={"stroke": "blue", "stroke-dasharray": "2,4"});
Explanation: The Coordinates
Toytrees drawings are designed to use a set coordinate space within the axes to make it easy to situate additional plots to align with tree drawings. Regardless of whether the tree drawing is oriented 'right' or 'down' the farthest tip of the tree (not tip label but tip) will align at the zero-axis. For right-facing trees this means at x=0, for down-facing trees this means y=0. On the other axis, tree tips will be spaced from zero to ntips with a unit of 1 between each tip. For tips on aligning additional plotting methods (barplots, scatterplots, etc.) with toytree drawings see the Cookbook gallery. Below I add a grid to overlay tree plots in both orientations to highlight the coordinate space.
End of explanation |
10,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='teal'> Introduction to Neural Networks and Pytorch </font>
Notebook version: 0.2 (Nov 5, 2021)
Step1: <font color='teal'> 1. Introduction and purpose of this Notebook </font>
<font color='teal'> 1.1. About Neural Networks </font>
Neural Networks (NN) have become the state of the art for many machine learning problems
Natural Language Processing
Computer Vision
Image Recognition
They are in widespread use for many applications, e.g.,
Language translation, automatic speech recognition, autonomous navigation, and automatic plate recognition
Step3: <font color='olive'>Dogs vs Cats data set</font>
Dataset is taken from <a href="https://www.kaggle.com/c/dogs-vs-cats">Kaggle</a>: 25000 pictures of dogs and cats (binary problem)
Step8: <font color='teal'> 2.2. Logistic Regression as a Simple Neural Network </font>
We can consider logistic regression as an extremely simple (1 layer) neural network
<center><img src="figures/LR_network.png" width="600"/></center>
In this context, $\text{NLL}({\bf w})$ is normally referred to as cross-entropy loss
We need to find parameters $\bf w$ and $b$ to minimize the loss $\rightarrow$ GD / SGD
Gradient computation can be simplified using the <font color='navy'>chain rule</font>
<br>
\begin{align}
\frac{\partial \text{NLL}}{\partial {\bf w}} & = \frac{\partial \text{NLL}}{\partial {\hat y}} \cdot \frac{\partial \hat y}{\partial o} \cdot \frac{\partial o}{\partial {\bf w}} \\
& = \sum_{k=0}^{K-1} \left[\frac{1 - y_k}{1 - \hat y_k} - \frac{y_k}{\hat y_k}\right]\hat y_k (1-\hat y_k) {\bf x}_k \\
& = \sum_{k=0}^{K-1} \left[\hat y_k - y_k\right] {\bf x}_k \\
\frac{\partial \text{NLL}}{\partial b} & = \sum_{k=0}^{K-1} \left[\hat y_k - y_k \right]
\end{align}
Gradient Descent Optimization
<br>
$${\bf w}_{n+1} = {\bf w}_n + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k){\bf x}_k$$
$$b_{n+1} = b_n + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k)$$
Step10: <font color='olive'>Exercise</font>
Study the behavior of the algorithm changing the number of epochs and the learning rate
Repeat the analysis for the other dataset, trying to obtain as large an accuracy value as possible
What do you believe are the reasons for the very different performance for both datasets?
Linear logistic regression allowed us to review a few concepts that are key for Neural Networks
Step11: <font color='olive'>Exercise</font>
Study the behavior of the algorithm changing the number of iterations and the learning rate
Obtain the confusion matrix, and study which classes are more difficult to classify
Think about the differences between using this 10-class network, vs training 10 binary classifiers, one for each class
As in linear logistic regression note that we covered the following aspects of neural network design, implementation, and training
Step12: <font color='olive'>Results in Dogs vs Cats dataset ($epochs = 1000$ and $\rho = 0.05$)</font>
Step13: <font color='olive'>Results in Binary Sign Digits Dataset ($epochs = 10000$ and $\rho = 0.001$)</font>
Step14: <font color='olive'>Exercises</font>
Train the network using other settings for
Step15: <font color='teal'> 3. Implementing Deep Networks with PyTorch </font>
PyTorch is a Python library that provides different levels of abstraction for implementing deep neural networks
The main features of PyTorch are tensor computation with strong GPU acceleration, and automatic differentiation (autograd) for building and training networks
Step16: Tensors can be converted back to numpy arrays
Note that in this case, a tensor and its corresponding numpy array will share memory
Operations and slicing use a syntax similar to numpy
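The PyTorch cells for this part appear later in the notebook; as a minimal illustrative sketch (not one of the original cells), the NumPy interplay described above looks like this:

import numpy as np
import torch

a = np.ones(5)
t = torch.from_numpy(a)    # tensor and numpy array share the same memory
b = t.numpy()              # back to a numpy array (also shared memory)
t[0] = 7.0                 # the change is visible in a and b as well
print(a[0], b[0])          # -> 7.0 7.0
print(t[1:4], t * 2)       # slicing and arithmetic mirror numpy syntax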
Step17: Adding underscore performs operations "in place", e.g., x.add_(y)
If a GPU is available, tensors can be moved to and from the GPU device
Operations on tensors stored in a GPU will be carried out using GPU resources and will typically be highly parallelized
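A short sketch of these two points (illustrative; 'cuda' and 'cpu' are the standard device names):

import torch

x = torch.ones(3)
y = torch.arange(3, dtype=torch.float32)
x.add_(y)                           # trailing underscore -> in-place update of x

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x_dev = x.to(device)                # move the tensor to the GPU, if one is present
z = (x_dev * 2).cpu().numpy()       # bring the result back to the CPU / numpy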
Step18: <font color='teal'> 3.3. Automatic gradient calculation </font>
PyTorch tensors have a property requires_grad. When true, PyTorch automatic gradient calculation will be activated for that variable
In order to compute these derivatives numerically, PyTorch keeps track of all operations carried out on these variables, organizing them in a forward computation graph.
When executing the backward() method, derivatives will be calculated
However, this should only be activated when necessary, to save computation
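An illustrative autograd sketch (not an original cell): gradients land in .grad after backward(), and tracking can be switched off with torch.no_grad() when it is not needed:

import torch

x = torch.randn(3, requires_grad=True)
z = (x ** 2).sum()         # the forward computation graph is recorded
z.backward()               # fills x.grad with dz/dx = 2 * x
print(x.grad)

with torch.no_grad():      # no graph is built inside this block
    y = 2 * x
print(y.requires_grad)     # -> False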
Step20: <font color='olive'>Exercise</font>
Initialize a tensor x with the upper right $5 \times 10$ submatrix of flattened digits
Compute output vector y applying a function of your choice to x
Compute scalar value z as the sum of all elements in y squared
Check that x.grad calculation is correct using the backward method
Try to run your cell multiple times to see if the calculation is still correct. If not, implement the necessary modifications so that you can run the cell multiple times, but the gradient does not change from run to run
Note
Step21: Syntax is a bit different because input variables are tensors, not arrays
This time we did not need to implement the backward function
Step22: It is important to deactivate gradient updates after the network has been evaluated on training data, and gradients of the loss function have been computed
Step25: <font color='olive'> 3.4.2. Using torch nn module </font>
PyTorch nn module provides many attributes and methods that make the implementation and training of Neural Networks simpler
nn.Module and nn.Parameter allow to implement a more concise training loop
nn.Module is a PyTorch class that will be used to encapsulate and design a specific neural network, thus, it is central to the implementation of deep neural nets using PyTorch
nn.Parameter allow the definition of trainable network parameters. In this way, we will simplify the implementation of the training loop.
All parameters defined with nn.Parameter will have requires_grad = True
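As a minimal sketch (the class name and the 64*64 input size, borrowed from the flattened sign-digit images, are illustrative choices):

import torch
from torch import nn

class LogisticRegressionNet(nn.Module):
    def __init__(self, n_inputs=64 * 64):
        super().__init__()
        # nn.Parameter marks these tensors as trainable (requires_grad=True)
        self.w = nn.Parameter(0.1 * torch.randn(n_inputs, 1))
        self.b = nn.Parameter(0.1 * torch.randn(1))

    def forward(self, x):
        return torch.sigmoid(x @ self.w + self.b)

model = LogisticRegressionNet()
print(sum(p.numel() for p in model.parameters()))   # number of trainable weights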
Step27: nn.Module comes with several kinds of pre-defined layers, thus making it even simpler to implement neural networks
We can also import the Cross Entropy Loss from nn.Module. When doing so, the network output should be the raw logits (no softmax layer), and the labels are passed as integer class indices rather than one-hot vectors
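For instance (an illustrative sketch; the 64*64 inputs and 10 classes match the sign-digits problem):

import torch
from torch import nn

model = nn.Linear(64 * 64, 10)       # pre-defined layer: weight matrix + bias
criterion = nn.CrossEntropyLoss()    # applies log-softmax internally

x = torch.randn(8, 64 * 64)          # a mini-batch of 8 flattened images
y = torch.randint(0, 10, (8,))       # integer class labels, not one-hot vectors
loss = criterion(model(x), y)        # model(x) are the raw logits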
Step28: Note faster convergence is observed in this case. It is actually due to a more convenient initialization of the hidden layer
<font color='olive'> 3.4.3. Network Optimization </font>
We cover in this subsection two different aspects of network training using PyTorch
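The typical torch.optim training loop looks as follows (a hedged sketch: model, X_train, y_train, X_val and y_val are assumed to be tensors defined elsewhere, and SGD with lr=0.1 is just an illustrative choice):

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()                      # clear previously accumulated gradients
    loss = criterion(model(X_train), y_train)  # forward pass on the training data
    loss.backward()                            # back-propagate the loss
    optimizer.step()                           # update the network parameters

    with torch.no_grad():                      # evaluation does not need the graph
        acc_val = (model(X_val).argmax(dim=1) == y_val).float().mean()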
Step29: Note network optimization is carried out outside torch.no_grad() but network evaluation (other than forward output calculation for the training patterns) still need to deactivate gradient updates
Step30: <font color='olive'> Exercise </font>
Implement network training with other optimization methods. You can refer to the <a href="https://pytorch.org/docs/stable/optim.html">torch.optim documentation</a>
Step31: <font color='olive'> 3.4.4. Multi Layer networks using nn.Sequential </font>
PyTorch simplifies considerably the implementation of neural network training, since we do not need to implement derivatives ourselves
We can also make a simpler implementation of multilayer networks using nn.Sequential function
It returns directly a network with the requested topology, including parameters and forward evaluation method
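For example (a sketch; the hidden size of 64 units is an arbitrary illustrative choice):

from torch import nn

model = nn.Sequential(
    nn.Linear(64 * 64, 64),   # input layer -> hidden layer
    nn.ReLU(),                # non-linear activation
    nn.Linear(64, 10),        # hidden layer -> 10 output logits
)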
Step32: <font color='teal'> 3.5. Generalization</font>
For complex network topologies (i.e., many parameters), network training can incur in over-fitting issues
Some common strategies to avoid this are early stopping based on a validation set, weight decay ($\ell_2$ regularization), and dropout
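Two of these are one-liners in PyTorch (illustrative values; remember to switch between model.train() and model.eval() so that dropout is only active while training):

import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(64 * 64, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly zeroes hidden activations during training
    nn.Linear(64, 10),
)
# weight_decay adds an L2 penalty on the parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

model.train()   # dropout active
# ... training loop ...
model.eval()    # dropout disabled for validation / test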
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
size = 18
params = {'legend.fontsize': 'Large',
'axes.labelsize': size,
'axes.titlesize': size,
'xtick.labelsize': size*0.75,
'ytick.labelsize': size*0.75}
plt.rcParams.update(params)
Explanation: <font color='teal'> Introduction to Neural Networks and Pytorch </font>
Notebook version: 0.2. (Nov 5, 2021)
Authors: Jerónimo Arenas García ([email protected])
Jesús Cid-Sueiro ([email protected])
Changes: v.0.1. (Nov 14, 2020) - First version
v.0.2. (Nov 5, 2021) - Structuring code, revisiting formulation
Pending changes:
Use epochs instead of iters in first part of notebook
Add an example with dropout
Add theory about CNNs
Define some functions to simplify code cells
End of explanation
digitsX = np.load('./data/Sign-language-digits-dataset/X.npy')
digitsY = np.load('./data/Sign-language-digits-dataset/Y.npy')
K = digitsX.shape[0]
img_size = digitsX.shape[1]
digitsX_flatten = digitsX.reshape(K,img_size*img_size)
print('Size of Input Data Matrix:', digitsX.shape)
print('Size of Flattened Input Data Matrix:', digitsX_flatten.shape)
print('Size of label Data Matrix:', digitsY.shape)
selected = [260, 1400]
plt.subplot(1, 2, 1), plt.imshow(digitsX[selected[0]].reshape(img_size, img_size)), plt.axis('off')
plt.subplot(1, 2, 2), plt.imshow(digitsX[selected[1]].reshape(img_size, img_size)), plt.axis('off')
plt.show()
print('Labels corresponding to figures:', digitsY[selected,])
Explanation: <font color='teal'> 1. Introduction and purpose of this Notebook </font>
<font color='teal'> 1.1. About Neural Networks </font>
Neural Networks (NN) have become the state of the art for many machine learning problems
Natural Language Processing
Computer Vision
Image Recognition
They are in widespread use for many applications, e.g.,
Language translation (<a href="https://arxiv.org/pdf/1609.08144.pdf">Google Neural Machine Translation System</a>)
Automatic speech recognition (<a href="https://machinelearning.apple.com/research/hey-siri">Hey Siri!</a> DNN overview)
Autonomous navigation (<a href="https://venturebeat.com/2020/04/13/facebooks-ai-teaches-robots-to-navigate-environments-using-less-data/">Facebook Robot Autonomous 3D Navigation</a>)
Automatic plate recognition
<center><img src="figures/ComputerVision.png" /></center>
Feed Forward Neural Networks have been around since 1960 but only recently (last 10-12 years) have they met their expectations, and improve other machine learning algorithms
Computation resources are now available at large scale
Cloud Computing (AWS, Azure)
From MultiLayer Perceptrons to Deep Learning
Big Data sets
This has also made possible an intense research effort resulting in
Topologies better suited to particular problems (CNNs, RNNs)
New training strategies providing better generalization
In parallel, Deep Learning Platforms have emerged that make design, implementation, training, and production of DNNs feasible for everyone
<font color='teal'> 1.2. Scope</font>
To provide just an overview of most important NNs and DNNs concepts
Connecting with already studied methods as starting point
Introduction to PyTorch
Providing links to external sources for further study
<font color='teal'> 1.3. Outline</font>
Introduction and purpose of this Notebook
Introduction to Neural Networks
Implementing Deep Networks with PyTorch
<font color='teal'> 1.4. Other resources </font>
We point here to external resources and tutorials that are excellent material for further study of the topic
Most of them include examples and exercises using numpy and PyTorch
This notebook uses examples and other material from some of these sources
|Tutorial|Description|
|-----|---------------------|
|<a href="https://www.simplilearn.com/tutorials/deep-learning-tutorial"> <img src="figures/simplilearn.png" width="100"/> </a>|Very general tutorial including videos and an overview of top deep learning platforms|
|<a href="http://d2l.ai/"> <img src="figures/dl2ai.png" width="100"/> </a>|Very complete book with a lot of theory and examples for MxNET, PyTorch, and TensorFlow|
|<a href="https://pytorch.org/tutorials/"> <img src="figures/PyTorch.png" width="100"/> </a>|Official tutorials from the PyTorch project. Contains a 60 min overview, and a very practical learning PyTorch with examples tutorial|
|<a href="https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners"> <img src="figures/kaggle.png" width="100"/> </a>|Kaggle tutorials covering an introduction to Neural Networks using Numpy, and a second one offering a PyTorch tutorial|
In addition to this, PyTorch MOOCs can be followed for free in main sites: edX, Coursera, Udacity
<font color='teal'> 2. Introduction to Neural Networks </font>
In this section, we will implement neural networks from scratch using Numpy arrays
No need to learn any new Python libraries
But we need to deal with complexity of multilayer networks
Low-level implementation will be useful to grasp the most important concepts concerning DNNs
Back-propagation
Activation functions
Loss functions
Optimization methods
Generalization
Special layers and configurations
<font color='teal'> 2.1. Data preparation </font>
We start by loading some data sets that will be used to carry out the exercises
<font color='olive'>Sign language digits data set</font>
Dataset is taken from <a href="https://www.kaggle.com/ardamavi/sign-language-digits-dataset"> Kaggle</a> and used in the above referred tutorial
2062 digits in sign language. $64 \times 64$ images
Problem with 10 classes. One hot encoding for the label matrix
Input data are images, we create also a flattened version
End of explanation
# Preprocessing of original Dogs and Cats Pictures
# Adapted from https://medium.com/@mrgarg.rajat/kaggle-dogs-vs-cats-challenge-complete-step-by-step-guide-part-1-a347194e55b1
# RGB channels are collapsed in GRAYSCALE
# Images are resampled to 64x64
import os, cv2 # cv2 -- OpenCV
train_dir = './data/DogsCats/train/'
rows = 64
cols = 64
train_images = sorted([train_dir+i for i in os.listdir(train_dir)])
def read_image(file_path):
image = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE)
return cv2.resize(image, (rows, cols),interpolation=cv2.INTER_CUBIC)
def prep_data(images):
m = len(images)
X = np.ndarray((m, rows, cols), dtype=np.uint8)
y = np.zeros((m,))
print("X.shape is {}".format(X.shape))
for i,image_file in enumerate(images) :
image = read_image(image_file)
X[i,] = np.squeeze(image.reshape((rows, cols)))
if 'dog' in image_file.split('/')[-1].lower():
y[i] = 1
elif 'cat' in image_file.split('/')[-1].lower():
y[i] = 0
if i%5000 == 0 :
print("Proceed {} of {}".format(i, m))
return X,y
X_train, y_train = prep_data(train_images)
np.save('./data/DogsCats/X.npy', X_train)
np.save('./data/DogsCats/Y.npy', y_train)
DogsCatsX = np.load('./data/DogsCats/X.npy')
DogsCatsY = np.load('./data/DogsCats/Y.npy')
K = DogsCatsX.shape[0]
img_size = DogsCatsX.shape[1]
DogsCatsX_flatten = DogsCatsX.reshape(K,img_size*img_size)
print('Size of Input Data Matrix:', DogsCatsX.shape)
print('Size of Flattened Input Data Matrix:', DogsCatsX_flatten.shape)
print('Size of label Data Matrix:', DogsCatsY.shape)
selected = [260, 16000]
plt.subplot(1, 2, 1), plt.imshow(DogsCatsX[selected[0]].reshape(img_size, img_size)), plt.axis('off')
plt.subplot(1, 2, 2), plt.imshow(DogsCatsX[selected[1]].reshape(img_size, img_size)), plt.axis('off')
plt.show()
print('Labels corresponding to figures:', DogsCatsY[selected,])
Explanation: <font color='olive'>Dogs vs Cats data set</font>
Dataset is taken from <a href="https://www.kaggle.com/c/dogs-vs-cats"> Kaggle</a>
25000 pictures of dogs and cats
Binary problem
Input data are images, we create also a flattened version
Original images are RGB, and arbitrary size
Preprocessed images are $64 \times 64$ and gray scale
End of explanation
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
def get_dataset(dataset_name, forze_binary=False):
    """Loads the selected dataset, among two options: DogsCats or digits.
    If dataset_name == 'digits', you can take a dataset with two classes only,
    using forze_binary == True.
    """
if dataset_name == 'DogsCats':
X = DogsCatsX_flatten
y = DogsCatsY
elif dataset_name == 'digits':
if forze_binary:
#Zero and Ones are one hot encoded in columns 1 and 4
X0 = digitsX_flatten[np.argmax(digitsY, axis=1)==1,]
X1 = digitsX_flatten[np.argmax(digitsY, axis=1)==4,]
X = np.vstack((X0, X1))
y = np.zeros(X.shape[0])
y[X0.shape[0]:] = 1
else:
X = digitsX_flatten
y = digitsY
else:
print("-- ERROR: Unknown dataset")
return
# Joint normalization of all data. For images [-.5, .5] scaling is frequent
min_max_scaler = MinMaxScaler(feature_range=(-.5, .5))
X = min_max_scaler.fit_transform(X)
# Generate train and validation data, shuffle
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42, shuffle=True)
return X_train, X_val, y_train, y_val
# Define some useful functions
def logistic(t):
    """Computes the logistic function."""
return 1.0 / (1 + np.exp(-t))
def forward(w,b,x):
    """Computes the network output."""
# return logistic(x.dot(w) + b)
return logistic(x @ w + b)
def backward(y, y_hat, x):
    """Computes the gradient of the loss function for a batch of samples x with
    outputs y_hat, given labels y.
    """
# w_grad = x.T.dot((1-y)*y_hat - y*(1-y_hat))/len(y)
# b_grad = np.sum((1-y)*y_hat - y*(1-y_hat))/len(y)
w_grad = x.T @ (y_hat - y) / len(y)
b_grad = np.mean(y_hat - y)
return w_grad, b_grad
def accuracy(y, y_hat):
return np.mean(y == (y_hat >= 0.5))
def loss(y, y_hat):
return - (y @ np.log(y_hat) + (1 - y) @ np.log(1 - y_hat)) / len(y)
X_train, X_val, y_train, y_val = get_dataset('digits', forze_binary=True)
#Neural Network Training
epochs = 50
rho = .05 # Use this setting for Sign Digits Dataset
#Parameter initialization
w = .1 * np.random.randn(X_train.shape[1])
b = .1 * np.random.randn(1)
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in np.arange(epochs):
y_hat_train = forward(w, b, X_train)
y_hat_val = forward(w, b, X_val)
w_grad, b_grad = backward(y_train, y_hat_train, X_train)
w = w - rho * w_grad
b = b - rho * b_grad
loss_train[epoch] = loss(y_train, y_hat_train)
loss_val[epoch] = loss(y_val, y_hat_val)
acc_train[epoch] = accuracy(y_train, y_hat_train)
acc_val[epoch] = accuracy(y_val, y_hat_val)
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='teal'> 2.2. Logistic Regression as a Simple Neural Network </font>
We can consider logistic regression as an extremely simple (1 layer) neural network
<center><img src="figures/LR_network.png" width="600"/></center>
In this context, $\text{NLL}({\bf w})$ is normally referred to as cross-entropy loss
We need to find parameters $\bf w$ and $b$ to minimize the loss $\rightarrow$ GD / SGD
Gradient computation can be simplified using the <font color='navy'>chain rule</font>
<br>
\begin{align}
\frac{\partial \text{NLL}}{\partial {\bf w}} & = \frac{\partial \text{NLL}}{\partial {\hat y}} \cdot \frac{\partial \hat y}{\partial o} \cdot \frac{\partial o}{\partial {\bf w}} \\
& = \sum_{k=0}^{K-1} \left[\frac{1 - y_k}{1 - \hat y_k} - \frac{y_k}{\hat y_k}\right]\hat y_k (1-\hat y_k) {\bf x}_k \\
& = \sum_{k=0}^{K-1} \left[\hat y_k - y_k\right] {\bf x}_k \\
\frac{\partial \text{NLL}}{\partial b} & = \sum_{k=0}^{K-1} \left[\hat y_k - y_k \right]
\end{align}
Gradient Descent Optimization
<br>
$${\bf w}_{n+1} = {\bf w}_n + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k){\bf x}_k$$
$$b_{n+1} = b_n + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k)$$
End of explanation
dataset = 'digits'
X_train, X_val, y_train, y_val = get_dataset('digits')
# Define some useful functions
def softmax(t):
    """Compute softmax values for each set of scores in t."""
e_t = np.exp(t)
return e_t / e_t.sum(axis=1, keepdims=True)
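# Added remark: for large logits a numerically safer variant subtracts the row-wise
# maximum before exponentiating, e.g. e_t = np.exp(t - t.max(axis=1, keepdims=True)),
# which leaves the softmax output unchanged.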
def forward(w, b, x):
# Calcula la salida de la red
return softmax(x @ w.T + b.T)
def backward(y, y_hat, x):
#Calcula los gradientes
W_grad = (y_hat - y).T @ x / len(y)
b_grad = (y_hat - y).T.mean(axis=1, keepdims=True)
return W_grad, b_grad
def accuracy(y, y_hat):
return np.mean(np.argmax(y, axis=1) == np.argmax(y_hat, axis=1))
def loss(y, y_hat):
return - np.sum(y * np.log(y_hat)) / len(y)
# Neural Network Training
epochs = 300
rho = .1
#Parameter initialization
W = .1 * np.random.randn(y_train.shape[1], X_train.shape[1])
b = .1 * np.random.randn(y_train.shape[1], 1)
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in np.arange(epochs):
print(f"Epoch {epoch} out of {epochs} \r", end="")
y_hat_train = forward(W, b, X_train)
y_hat_val = forward(W, b, X_val)
W_grad, b_grad = backward(y_train, y_hat_train, X_train)
W = W - rho * W_grad
b = b - rho * b_grad
loss_train[epoch] = loss(y_train, y_hat_train)
loss_val[epoch] = loss(y_val, y_hat_val)
acc_train[epoch] = accuracy(y_train, y_hat_train)
acc_val[epoch] = accuracy(y_val, y_hat_val)
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='olive'>Exercise</font>
Study the behavior of the algorithm changing the number of epochs and the learning rate
Repeat the analysis for the other dataset, trying to obtain as large an accuracy value as possible
What do you believe are the reasons for the very different performance for both datasets?
Linear logistic regression allowed us to review a few concepts that are key for Neural Networks:
Network topology (In this case, a linear network with one layer)
Activation functions
Parametric approach ($\bf w$/$b$)
Parameter initialization
Obtaining the network prediction using forward computation
Loss function
Parameter gradient calculus using backward computation
Optimization method for parameters update (here, GD)
<font color='teal'> 2.3. (Multiclass) SoftMax Regression </font>
One hot encoding output, e.g., $[0, 1, 0, 0]$, $[0, 0, 0, 1]$
Used to encode categorical variables without predefined order
Similar to logistic regression, network tries to predict class probability
$$\hat y_{k,j} = \hat P(y_k=j|{\bf x}_k)$$
Network output should satisfy "probability constraints"
$$\hat y_{k,j} \in [0,1]\qquad \text{and} \qquad \sum_j \hat y_{k,j} = 1$$
Softmax regression network topology:
<img src="figures/SR_network.png" width="600"/>
<font color='olive'>Notation</font>
In this section, it is important to pay attention to subindexes:
|Notation/ Variable Name|Definition|
|-----------------------|---------------------------------|
|$y_k \in [0,\dots,M-1]$|The label of pattern $k$|
|${\bf y}_k$|One hot encoding of the label of pattern $k$|
|$y_{k,m}$|$m$-th component of vector ${\bf y}_k$|
|$y_{m}$|$m$-th component of generic vector ${\bf y}$ (i.e., for an undefined pattern)|
|$\hat {\bf y}_k$|Network output for pattern $k$|
|$\hat y_{k,m}$|$m$-th network output for pattern $k$|
|$\hat y_{m}$|$m$-th network output for an undefined pattern|
|$k$|Index used for pattern enumeration|
|$m$|Index used for network output enumeration|
|$j$|Secondary index for selected network output|
<font color='olive'>The softmax function</font>
It is to multiclass problems what the logistic function is to binary classification
Invented in 1959 by the social scientist R. Duncan Luce
Transforms a set of $M$ real numbers to satisfy "probability" constraints
<br>
$${\bf \hat y} = \text{softmax}({\bf o}) \qquad \text{where} \qquad \hat y_j = \frac{\exp(o_j)}{\sum_m \exp(o_m)} $$
Continuous and <font color="navy">differentiable</font> function
<br>
$$\frac{\partial \hat y_j}{\partial o_j} = \hat y_j (1 - \hat y_j) \qquad \text{and} \qquad \frac{\partial \hat y_j}{\partial o_m} = - \hat y_j \hat y_m$$
The classifier is still linear, since
<br>
$$\arg\max \hat {\bf y} = \arg\max {\bf o} = \arg\max \left({\bf W} {\bf x} + {\bf b}\right)$$
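One practical remark (an addition to the original text): computing the softmax exactly as written can overflow, because $\exp(o_j)$ grows very quickly. Since the softmax is unchanged when the same constant is subtracted from every $o_m$, a numerically safer implementation subtracts the row-wise maximum first. A minimal sketch:

```python
import numpy as np

def softmax_stable(o):
    # softmax(o) is invariant to subtracting a constant from each row of o;
    # subtracting the row-wise maximum prevents overflow in np.exp
    o = o - o.max(axis=1, keepdims=True)
    e_o = np.exp(o)
    return e_o / e_o.sum(axis=1, keepdims=True)
```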
<font color='olive'>Cross-entropy loss for multiclass problems</font>
Similarly to logistic regression, minimization of the log-likelihood can be stated to obtain ${\bf W}$ and ${\bf b}$
<br>
$$\text{Binary}: \text{NLL}({\bf w}, b) = - \sum_{k=0}^{K-1} \log \hat P(y_k|{\bf x}_k)$$
$$\text{Multiclass}: \text{NLL}({\bf W}, {\bf b}) = - \sum_{k=0}^{K-1} \log \hat P(y_k|{\bf x}_k)$$
Using one hot encoding for the label vector of each sample, e.g., $y_k = 2 \rightarrow {\bf y}_k = [0, 0, 1, 0]$
$$\text{NLL}({\bf W}, {\bf b}) = - \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} y_{k,m} \log \hat P(m|{\bf x}_k) = - \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} y_{k,m} \log \hat y_{k,m} = \sum_{k=0}^{K-1} l({\bf y}_k, \hat {\bf y}_k)$$
Note that for each pattern, only one element in the inner sum is non-zero (the one whose index $m$ corresponds to the true class of the pattern)
In the context of Neural Networks, this cost is referred to as the cross-entropy loss
<br>
$$l({\bf y}, \hat {\bf y}) = - \sum_{m=0}^{M-1} y_{m} \log \hat y_{m}$$
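As a quick worked example (illustrative numbers, not from the original text): for ${\bf y} = [0, 0, 1, 0]$ and $\hat {\bf y} = [0.1, 0.2, 0.6, 0.1]$ only the true-class term survives, so $l({\bf y}, \hat {\bf y}) = -\log 0.6 \approx 0.51$; the loss tends to $0$ as the predicted probability of the true class tends to $1$.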
<font color='olive'>Network optimization</font>
Gradient Descent Optimization
<br>
$${\bf W}_{n+1} = {\bf W}_n - \rho_n \sum_{k=0}^{K-1} \frac{\partial l({\bf y}_k,{\hat {\bf y}_k})}{\partial {\bf W}}$$
$${\bf b}_{n+1} = {\bf b}_n - \rho_n \sum_{k=0}^{K-1} \frac{\partial l({\bf y}_k,{\hat {\bf y}_k})}{\partial {\bf b}}$$
We compute derivatives using the chain rule twice
<br>
\begin{align}
\frac{\partial l({\bf y}, \hat{\bf y})}{\partial {\bf W}}
&= \frac{\partial l({\bf y}, \hat{\bf y})}{\partial {\bf o}}
\cdot \frac{\partial {\bf o}}{\partial {\bf W}} \\
&= \sum_{i=0}^{M-1}
\frac{\partial l({\bf y}, \hat{\bf y})}{\partial o_i}
\cdot \frac{\partial o_i}{\partial {\bf W}} \\
&= \frac{\partial l({\bf y}, \hat{\bf y})}{\partial {\bf o}}
\cdot {\bf x}^\intercal \\
&= \frac{\partial \hat{\bf y}}{\partial {\bf o}}
\cdot \frac{\partial l({\bf y}, \hat{\bf y})}{\partial \hat{\bf y}}
\cdot {\bf x}^\intercal \\
& = \left[\begin{array}{cccc}
\hat y_1 (1 - \hat y_1) & -\hat y_1 \hat y_2 & \dots & -\hat y_1 \hat y_{M-1} \\
-\hat y_2 \hat y_1 & \hat y_2 (1 - \hat y_2) & \dots & -\hat y_2 \hat y_{M-1} \\
\vdots & \vdots & \ddots & \vdots \\
- \hat y_{M-1} \hat y_1 & -\hat y_{M-1} \hat y_2 & \dots & \hat y_{M-1} (1-\hat y_{M-1})
\end{array}\right]
\left[\begin{array}{c} -y_1/\hat y_1 \\ -y_2/\hat y_2 \\ \vdots \\ - y_{M-1}/\hat y_{M-1} \end{array}\right]
{\bf x}^\top \\
& = (\hat {\bf y} - {\bf y}){\bf x}^\top \\
\frac{\partial l({\bf y},{\hat {\bf y}})}{\partial {\bf b}}
& = \hat {\bf y} - {\bf y}
\end{align}
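As a sanity check on the result $\partial l/\partial {\bf W} = (\hat {\bf y} - {\bf y}){\bf x}^\top$, the analytical gradient can be compared against finite differences (a small illustrative sketch; it assumes the forward, backward and loss helpers defined in the code cell above are in scope):

```python
import numpy as np

def numeric_W_grad(W, b, x, y, eps=1e-6):
    # central finite differences of the averaged cross-entropy loss w.r.t. each weight
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            G[i, j] = (loss(y, forward(Wp, b, x)) - loss(y, forward(Wm, b, x))) / (2 * eps)
    return G

# W_grad, _ = backward(y_train, forward(W, b, X_train), X_train)
# np.allclose(W_grad, numeric_W_grad(W, b, X_train, y_train), atol=1e-4)  # expected: True
```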
End of explanation
# Define some useful functions
def logistic(t):
return 1.0 / (1 + np.exp(-t))
def forward(W1, b1, w2, b2, x):
#Compute the network output: tanh hidden layer followed by a logistic output unit
h = np.tanh(x.dot(W1.T) + b1)
y_hat = logistic(h.dot(w2) + b2)
#Provide also hidden units value for backward gradient step
return h, y_hat
def backward(y, y_hat, h, x, w2):
#Compute the gradients (the derivative of the tanh hidden units is 1 - h**2)
w2_grad = h.T.dot(y_hat - y) / len(y)
b2_grad = np.sum(y_hat - y) / len(y)
W1_grad = ((w2[np.newaxis,] * (1 - h**2) * (y_hat - y)[:,np.newaxis]).T.dot(x)) / len(y)
b1_grad = ((w2[np.newaxis,] * (1 - h**2) * (y_hat - y)[:,np.newaxis]).sum(axis=0)) / len(y)
return w2_grad, b2_grad, W1_grad, b1_grad
def accuracy(y, y_hat):
return np.mean(y == (y_hat >= 0.5))
def loss(y, y_hat):
return - np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)) / len(y)
def evaluate_model(
X_train, X_val, y_train, y_val, n_h=5, epochs=1000, rho=.005):
W1 = .01 * np.random.randn(n_h, X_train.shape[1])
b1 = .01 * np.random.randn(n_h)
w2 = .01 * np.random.randn(n_h)
b2 = .01 * np.random.randn(1)
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in np.arange(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
h, y_hat_train = forward(W1, b1, w2, b2, X_train)
dum, y_hat_val = forward(W1, b1, w2, b2, X_val)
w2_grad, b2_grad, W1_grad, b1_grad = backward(y_train, y_hat_train, h, X_train, w2)
W1 = W1 - rho/10 * W1_grad
b1 = b1 - rho/10 * b1_grad
w2 = w2 - rho * w2_grad
b2 = b2 - rho * b2_grad
loss_train[epoch] = loss(y_train, y_hat_train)
loss_val[epoch] = loss(y_val, y_hat_val)
acc_train[epoch] = accuracy(y_train, y_hat_train)
acc_val[epoch] = accuracy(y_val, y_hat_val)
return loss_train, loss_val, acc_train, acc_val
Explanation: <font color='olive'>Exercise</font>
Study the behavior of the algorithm changing the number of iterations and the learning rate
Obtain the confusion matrix, and study which classes are more difficult to classify
Think about the differences between using this 10-class network, vs training 10 binary classifiers, one for each class
As in linear logistic regression note that we covered the following aspects of neural network design, implementation, and training:
Network topology (In this case, a linear network with one layer and $M$ outputs)
Activation functions (softmax activation)
Parameter initialization ($\bf W$/$b$)
Obtaining the network prediction using forward computation
Loss function
Parameter gradient calculus using backward computation
Optimization method for parameters update (here, GD)
<font color='teal'> 2.3. Multi Layer Networks (Deep Networks) </font>
Previous networks are constrained in the sense that they can only implement linear classifiers. In this section we analyze how we can extend them to implement non-linear classification:
* Fixed non-linear transformations of inputs: ${\bf z} = {\bf{f}}({\bf x})$
Parametrize the transformation using additional non-linear layers
<center><img src="figures/LR_MLPnetwork.png" width="600"/></center>
When counting layers, we normally ignore the input layer, since there is no computation involved
Intermediate layers are normally referred to as "hidden" layers
Non-linear activations result in an overall non-linear classifier
We can still use Gradient Descent Optimization as long as the network loss derivatives with respect to all parameters exist and are continuous
This is already deep learning. We can have two layers or more, each with different numbers of neurons. But as long as derivatives with respect to parameters can be calculated, the network can be optimized
Finding an appropriate number of layers for a particular problem, as well as the number of neurons per layer, requires exploration
The more data we have for training the network, the more parameters we can afford, making feasible the use of more complex topologies
<font color='olive'>Example: 2-layer network for binary classification</font>
Network topology
Hidden layer with $n_h$ neurons
Hyperbolic tangent activation function for the hidden layer
$${\bf h} = \text{tanh}({\bf o}^{(1)})= \text{tanh}\left({\bf W}^{(1)} {\bf x} + {\bf b}^{(1)}\right)$$
Output layer is linear with logistic activation (as in logistic regression)
$$\hat y = \text{logistic}(o) = \text{logistic}\left({{\bf w}^{(2)}}^\top {\bf h} + b^{(2)}\right)$$
Cross-entropy loss
$$l(y,\hat y) = -\left[ y \log(\hat y) + (1 - y ) \log(1 - \hat y) \right], \qquad \text{with } y\in [0,1]$$
Update of output layer weights as in logistic regression (use ${\bf h}$ instead of ${\bf x}$)
$${\bf w}_{n+1}^{(2)} = {\bf w}_n^{(2)} + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k){\bf h}_k$$
$$b_{n+1}^{(2)} = b_n^{(2)} + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k)$$
<center><img src="figures/forward_graph.png" width="500"/></center>
For updating the input layer parameters we need to use the chain rule (we ignore dimensions and rearrange at the end)
\begin{align}
\frac{\partial l(y, \hat y)}{\partial {\bf W}^{(1)}}
& = \frac{\partial l(y, \hat y)}{\partial o}
\cdot \frac{\partial o}{\partial {\bf h}}
\cdot \frac{\partial {\bf h}}{\partial {\bf o}^{(1)}}
\cdot \frac{\partial {\bf o}^{(1)}}{\partial {\bf W}^{(1)}} \\
& = (\hat y - y) [{\bf w}^{(2)} \odot ({\bf 1}-{\bf h}^2)] {\bf x}^{\top}
\end{align}
(note that $\dfrac{\partial {\bf o}^{(1)}}{\partial {\bf W}^{(1)}}$ is actually a three dimensional matrix (i.e. a tensor). To apply the chain rule properly, the multiplications in the above equation must represent the adequate tensor products)
\begin{align}
\frac{\partial l(y, \hat y)}{\partial {\bf b}^{(1)}}
& = \frac{\partial l(y, \hat y)}{\partial o}
\cdot \frac{\partial o}{\partial {\bf h}}
\cdot \frac{\partial {\bf h}}{\partial {\bf o}^{(1)}}
\cdot \frac{\partial {\bf o}^{(1)}}{\partial {\bf b}^{(1)}} \\
& = (\hat y - y) [{\bf w}^{(2)} \odot ({\bf 1}-{\bf h}^2)]
\end{align}
where $\odot$ denotes component-wise multiplication and ${\bf h}^2$ is computed component-wise (recall that the derivative of $\tanh(o)$ is $1-\tanh^2(o)$)
GD update rules become
$${\bf W}_{n+1}^{(1)} = {\bf W}_n^{(1)} + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k)[{\bf w}^{(2)} \odot ({\bf 1}-{\bf h}_k^2)] {\bf x}_k^{\top}$$
$${\bf b}_{n+1}^{(1)} = {\bf b}_n^{(1)} + \rho_n \sum_{k=0}^{K-1} (y_k - \hat y_k)[{\bf w}^{(2)} \odot ({\bf 1}-{\bf h}_k^2)]$$
<center><img src="figures/forward_graph.png" width="500"/></center>
The process can be implemented as long as the derivatives of the network overall loss with respect to parameters can be computed
Forward computation graphs represent how the network output can be computed
We can then reverse the graph to compute derivatives with respect to parameters
Deep Learning libraries implement automatic gradient computation
We just define network topology
Computation of gradients is carried out automatically
End of explanation
dataset = 'DogsCats'
X_train, X_val, y_train, y_val = get_dataset(dataset)
loss_train, loss_val, acc_train, acc_val = evaluate_model(
X_train, X_val, y_train, y_val, n_h=5, epochs=1000, rho=0.05)
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='olive'>Results in Dogs vs Cats dataset ($epochs = 1000$ and $\rho = 0.05$)</font>
End of explanation
dataset = 'digits'
X_train, X_val, y_train, y_val = get_dataset(dataset, forze_binary=True)
loss_train, loss_val, acc_train, acc_val = evaluate_model(
X_train, X_val, y_train, y_val, n_h=5, epochs=10000, rho=0.001)
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='olive'>Results in Binary Sign Digits Dataset ($epochs = 10000$ and $\rho = 0.001$)</font>
End of explanation
x_array = np.linspace(-6,6,100)
y_array = np.clip(x_array, 0, a_max=None)
plt.plot(x_array, y_array)
plt.title('ReLU activation function')
plt.show()
Explanation: <font color='olive'>Exercises</font>
Train the network using other settings for:
The number of iterations
The learning step
The number of neurons in the hidden layer
You may find divergence issues for some settings
Related to the use of the hyperbolic tangent function in the hidden layer (numerical issues)
This is also why learning step was selected smaller for the hidden layer
Optimized libraries rely on certain modifications to obtain more robust implementations
Try to solve both problems using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html">scikit-learn implementation</a>
You can also explore other activation functions
You can also explore other solvers to speed up convergence
You can also adjust the size of minibatches
Take a look at the early_stopping parameter
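A possible starting point for this exercise with scikit-learn (the hyperparameter values below are illustrative guesses, not tuned, and the snippet assumes X_train, X_val, y_train, y_val hold one of the binary problems used above with 1-D label vectors):

```python
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(5,), activation='tanh', solver='adam',
                    batch_size=64, learning_rate_init=1e-3, max_iter=1000,
                    early_stopping=True, random_state=0)
clf.fit(X_train, y_train.ravel())
print('Validation accuracy:', clf.score(X_val, y_val.ravel()))
```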
<font color='teal'> 2.4. Multi Layer Networks for Regression </font>
Deep Learning networks can be used to solve regression problems with the following common adjustments
Linear activation for the output unit
Square loss:
$$l(y, \hat y) = (y - \hat y)^2, \qquad \text{where} \qquad y, \hat y \in \Re$$
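The changes with respect to the binary classification example above are small. A minimal sketch (illustrative, not from the original notebook):

```python
import numpy as np

def forward_reg(W1, b1, w2, b2, x):
    h = np.tanh(x.dot(W1.T) + b1)      # hidden layer unchanged
    y_hat = h.dot(w2) + b2             # linear output unit (no logistic)
    return h, y_hat

def square_loss(y, y_hat):
    # its gradient w.r.t. y_hat is 2 * (y_hat - y), which seeds the same backward pass
    return np.mean((y - y_hat) ** 2)
```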
<font color='teal'> 2.5. Activation Functions</font>
You can refer to the <a href="http://d2l.ai/chapter_multilayer-perceptrons/mlp.html#activation-functions">Dive into Deep Learning book</a> for a more detailed discussion on common activation functions for the hidden units.
We extract some information about the very important ReLU function
The most popular choice, due to both simplicity of implementation and its good performance on a variety of predictive tasks, is the rectified linear unit (ReLU). ReLU provides a very simple nonlinear transformation. Given an element $x$, the function is defined as the maximum of that element and 0.
When the input is negative, the derivative of the ReLU function is 0, and when the input is positive, the derivative of the ReLU function is 1. When the input is exactly 0 the function is not differentiable; by convention, we take the derivative to be 0 at that point.
The reason for using ReLU is that its derivatives are particularly well behaved: either they vanish or they just let the argument through. This makes optimization better behaved and it mitigated the well-documented problem of vanishing gradients that plagued previous versions of neural networks.
End of explanation
import torch
x = torch.rand((100,200))
digitsX_flatten_tensor = torch.from_numpy(digitsX_flatten)
print(x.type())
print(digitsX_flatten_tensor.size())
Explanation: <font color='teal'> 3. Implementing Deep Networks with PyTorch </font>
Pytorch is a Python library that provides different levels of abstraction for implementing deep neural networks
The main features of PyTorch are:
Definition of numpy-like n-dimensional tensors. They can be stored in / moved to GPU for parallel execution of operations
Automatic calculation of gradients, making backward gradient calculation transparent to the user
Definition of common loss functions, NN layers of different types, optimization methods, data loaders, etc, simplifying NN implementation and training
Provides different levels of abstraction, thus a good balance between flexibility and simplicity
This notebook provides just a basic review of the main concepts necessary to train NNs with PyTorch taking materials from:
<a href="https://pytorch.org/tutorials/beginner/pytorch_with_examples.html">Learning PyTorch with Examples</a>, by Justin Johnson
<a href="https://pytorch.org/tutorials/beginner/nn_tutorial.html">What is torch.nn really?</a>, by Jeremy Howard
<a href="https://www.kaggle.com/kanncaa1/pytorch-tutorial-for-deep-learning-lovers">Pytorch Tutorial for Deep Learning Lovers</a>, by Kaggle user kanncaa1
<font color='teal'> 3.1. Installation and PyTorch introduction</font>
PyTorch can be installed with or without GPU support
If you have an Anaconda installation, you can install from the command line, using the <a href="https://pytorch.org/">instructions of the project website</a>
PyTorch is also preinstalled in Google Colab with free GPU access
Follow RunTime -> Change runtime type, and select GPU for HW acceleration
Please, refer to Pytorch getting started tutorial for a quick introduction regarding tensor definition, GPU vs CPU storage of tensors, operations, and bridge to Numpy
<font color='teal'> 3.2. Torch tensors (very) general overview</font>
We can create tensors with different construction methods provided by the library, either to create new tensors from scratch or from a Numpy array
End of explanation
print('Size of tensor x:', x.size())
print('Transpose of the tensor has size', x.t().size()) #Transpose and compute size
print('Extracting upper left matrix of size 3 x 3:', x[:3,:3])
print(x.mm(x.t()).size()) #mm for matrix multiplications
xpx = x.add(x)
xpx2 = torch.add(x,x)
print((xpx!=xpx2).sum()) #Since all are equal, count of different terms is zero
Explanation: Tensors can be converted back to numpy arrays
Note that in this case, a tensor and its corresponding numpy array will share memory
Operations and slicing use a syntax similar to numpy
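A small sketch of the NumPy bridge mentioned above (illustrative; applies to CPU tensors):

```python
a = torch.ones(3)
b = a.numpy()      # b is a NumPy view that shares memory with a
a.add_(1)          # an in-place modification of the tensor...
print(b)           # ...is also visible in the NumPy array: [2. 2. 2.]
```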
End of explanation
if torch.cuda.is_available():
device = torch.device('cuda')
x = x.to(device)
y = x.add(x)
y = y.to('cpu')
else:
print('No GPU card is available')
Explanation: Adding underscore performs operations "in place", e.g., x.add_(y)
If a GPU is available, tensors can be moved to and from the GPU device
Operations on tensors stored in a GPU will be carried out using GPU resources and will typically be highly parallelized
End of explanation
x.requires_grad = True
y = (3 * torch.log(x)).sum()
y.backward()
print(x.grad[:2,:2])
print(3/x[:2,:2])
x.requires_grad = False
x.grad.zero_()
print('Automatic gradient calculation is deactivated, and gradients set to zero')
Explanation: <font color='teal'> 3.3. Automatic gradient calculation </font>
PyTorch tensors have a property requires_grad. When true, PyTorch automatic gradient calculation will be activated for that variable
In order to compute these derivatives numerically, PyTorch keeps track of all operations carried out on these variables, organizing them in a forward computation graph.
When executing the backward() method, derivatives will be calculated
However, this should only be activated when necessary, to save computation
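For example, gradient tracking can be suspended locally with torch.no_grad() (this is how the training loops below evaluate validation data), or a tensor can be detached from the graph (a short sketch):

```python
with torch.no_grad():        # operations in this block are not recorded in the graph
    x_eval = x * 2
x_detached = x.detach()      # same data, but excluded from gradient computation
```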
End of explanation
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
dataset = 'digits'
#Joint normalization of all data. For images [-.5, .5] scaling is frequent
min_max_scaler = MinMaxScaler(feature_range=(-.5, .5))
X = min_max_scaler.fit_transform(digitsX_flatten)
#Generate train and validation data, shuffle
X_train, X_val, y_train, y_val = train_test_split(X, digitsY, test_size=0.2, random_state=42, shuffle=True)
#Convert to Torch tensors
X_train_torch = torch.from_numpy(X_train)
X_val_torch = torch.from_numpy(X_val)
y_train_torch = torch.from_numpy(y_train)
y_val_torch = torch.from_numpy(y_val)
# Define some useful functions
def softmax(t):
"""Compute softmax values for each set of scores in t"""
return t.exp() / t.exp().sum(-1).unsqueeze(-1)
def model(w,b,x):
#Compute the network output
return softmax(x.mm(w) + b)
def accuracy(y, y_hat):
return (y.argmax(axis=-1) == y_hat.argmax(axis=-1)).float().mean()
def nll(y, y_hat):
return -(y * y_hat.log()).mean()
Explanation: <font color='olive'>Exercise</font>
Initialize a tensor x with the upper right $5 \times 10$ submatrix of flattened digits
Compute output vector y applying a function of your choice to x
Compute scalar value z as the sum of all elements in y squared
Check that x.grad calculation is correct using the backward method
Try to run your cell multiple times to see if the calculation is still correct. If not, implement the necessary modifications so that you can run the cell multiple times, but the gradient does not change from run to run
Note: The backward method can only be run on scalar variables
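As a hint (one possible pattern, not the only solution): gradients accumulate across successive backward() calls, so a cell that is meant to be re-run should reset them first, e.g.:

```python
if x.grad is not None:
    x.grad.zero_()        # clear gradients accumulated by previous runs of the cell
z = (y ** 2).sum()        # z must be a scalar to call backward() on it
z.backward()
```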
<font color='teal'> 3.4. Feed Forward Network using PyTorch </font>
In this section we will change our code for a neural network to use tensors instead of numpy arrays. We will work with the sign digits datasets.
We will introduce all concepts using a single layer perceptron (softmax regression), and then implement networks with additional hidden layers
<font color='olive'> 3.4.1. Using Automatic differentiation </font>
We start by loading the data, and converting to tensors.
As a first step, we refactor our code to use tensor operations
We do not need to pay too much attention to particular details regarding tensor operations, since these will not be necessary when moving to higher PyTorch abstraction levels
We do not need to implement gradient calculation. PyTorch will take care of that
End of explanation
# Parameter initialization
W = .1 * torch.randn(X_train_torch.size()[1], y_train_torch.size()[1])
W.requires_grad_()
b = torch.zeros(y_train_torch.size()[1], requires_grad=True)
epochs = 500
rho = .5
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
# Network training
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
#Compute network output and cross-entropy loss
pred = model(W,b,X_train_torch)
loss = nll(y_train_torch, pred)
#Compute gradients
loss.backward()
#Deactivate gradient automatic updates
with torch.no_grad():
#Computing network performance after iteration
loss_train[epoch] = loss.item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = model(W, b, X_val_torch)
loss_val[epoch] = nll(y_val_torch, pred_val).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
#Weight update
W -= rho * W.grad
b -= rho * b.grad
#Reset gradients
W.grad.zero_()
b.grad.zero_()
Explanation: Syntax is a bit different because the input variables are tensors, not arrays
This time we did not need to implement the backward function
End of explanation
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: It is important to deactivate gradient updates after the network has been evaluated on training data, and gradients of the loss function have been computed
End of explanation
from torch import nn
class my_multiclass_net(nn.Module):
def __init__(self, nin, nout):
"""This method initializes the network parameters.
Parameters nin and nout stand for the number of input parameters (features in X)
and output parameters (number of classes)"""
super().__init__()
self.W = nn.Parameter(.1 * torch.randn(nin, nout))
self.b = nn.Parameter(torch.zeros(nout))
def forward(self, x):
return softmax(x.mm(self.W) + self.b)
def softmax(t):
"""Compute softmax values for each set of scores in t"""
return t.exp() / t.exp().sum(-1).unsqueeze(-1)
my_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])
epochs = 500
rho = .5
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
#Compute network output and cross-entropy loss
pred = my_net(X_train_torch)
loss = nll(y_train_torch, pred)
#Compute gradients
loss.backward()
#Deactivate gradient automatic updates
with torch.no_grad():
#Computing network performance after iteration
loss_train[epoch] = loss.item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = nll(y_val_torch, pred_val).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
#Weight update
for p in my_net.parameters():
p -= p.grad * rho
#Reset gradients
my_net.zero_grad()
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='olive'> 3.4.2. Using torch nn module </font>
PyTorch nn module provides many attributes and methods that make the implementation and training of Neural Networks simpler
nn.Module and nn.Parameter make it possible to implement a more concise training loop
nn.Module is a PyTorch class that will be used to encapsulate and design a specific neural network, thus, it is central to the implementation of deep neural nets using PyTorch
nn.Parameter allows the definition of trainable network parameters. In this way, we will simplify the implementation of the training loop.
All parameters defined with nn.Parameter will have requires_grad = True
End of explanation
from torch import nn
class my_multiclass_net(nn.Module):
def __init__(self, nin, nout):
"""Note that now, we do not even need to initialize network parameters ourselves"""
super().__init__()
self.lin = nn.Linear(nin, nout)
def forward(self, x):
return self.lin(x)
loss_func = nn.CrossEntropyLoss()
my_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])
epochs = 500
rho = .1
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
#Compute network output and cross-entropy loss
pred = my_net(X_train_torch)
loss = loss_func(pred, y_train_torch.argmax(axis=-1))
#Compute gradients
loss.backward()
#Deactivate gradient automatic updates
with torch.no_grad():
#Computing network performance after iteration
loss_train[epoch] = loss.item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
#Weight update
for p in my_net.parameters():
p -= p.grad * rho
#Reset gradients
my_net.zero_grad()
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: nn.Module comes with several kinds of pre-defined layers, thus making it even simpler to implement neural networks
We can also import the Cross Entropy Loss from nn.Module. When doing so:
We do not have to compute the softmax, since the nn.CrossEntropyLoss already does so
nn.CrossEntropyLoss receives two input arguments, the first is the output of the network, and the second is the true label as a 1-D tensor (i.e., an array of integers, one-hot encoding should not be used)
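In other words, applying nn.CrossEntropyLoss to the raw scores behaves like a LogSoftmax followed by an NLLLoss. A small illustrative check (tensor values are arbitrary):

```python
logits = torch.randn(4, 10)              # raw network outputs (no softmax applied)
labels = torch.tensor([0, 3, 9, 1])      # integer class labels, not one-hot
a = nn.CrossEntropyLoss()(logits, labels)
b = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), labels)
print(torch.isclose(a, b))               # expected: tensor(True)
```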
End of explanation
from torch import optim
my_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])
opt = optim.SGD(my_net.parameters(), lr=0.1)
epochs = 500
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
#Compute network output and cross-entropy loss
pred = my_net(X_train_torch)
loss = loss_func(pred, y_train_torch.argmax(axis=-1))
#Compute gradients
loss.backward()
#Deactivate gradient automatic updates
with torch.no_grad():
#Computing network performance after iteration
loss_train[epoch] = loss.item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
opt.step()
opt.zero_grad()
Explanation: Note faster convergence is observed in this case. It is actually due to the more convenient default initialization of the nn.Linear layer
<font color='olive'> 3.4.3. Network Optimization </font>
We cover in this subsection two different aspects about network training using PyTorch:
Using torch.optim allows an easier and more interpretable encoding of neural network training, and opens the door to more sophisticated training algorithms
Using minibatches can speed up network convergence
torch.optim provides two convenient methods for neural network training:
opt.step() updates all network parameters using current gradients
opt.zero_grad() resets the gradients of all network parameters (sets them to zero)
End of explanation
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: Note network optimization is carried out outside torch.no_grad(), but network evaluation (other than the forward output calculation for the training patterns) still needs to deactivate gradient updates
End of explanation
from torch.utils.data import TensorDataset, DataLoader
train_ds = TensorDataset(X_train_torch, y_train_torch)
train_dl = DataLoader(train_ds, batch_size=64)
from torch import optim
my_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])
opt = optim.SGD(my_net.parameters(), lr=0.1)
epochs = 200
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
for xb, yb in train_dl:
#Compute network output and cross-entropy loss for current minibatch
pred = my_net(xb)
loss = loss_func(pred, yb.argmax(axis=-1))
#Compute gradients and optimize parameters
loss.backward()
opt.step()
opt.zero_grad()
#At the end of each epoch, evaluate overall network performance
with torch.no_grad():
#Computing network performance after iteration
pred = my_net(X_train_torch)
loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='olive'> Exercise </font>
Implement network training with other optimization methods. You can refer to the <a href="https://pytorch.org/docs/stable/optim.html">official documentation</a> and select a couple of methods. You can also try to implement adaptive learning rates using torch.optim.lr_scheduler
Each epoch of the previous implementation of network training was actually implementing Gradient Descent
In SGD only a minibatch of training patterns is used at every iteration
In each epoch we iterate over all training patterns sequentially selecting non-overlapping minibatches
Overall, convergence is usually faster than when using Gradient Descent
Torch provides methods that simplify the implementation of this strategy
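As a complement (the settings below are illustrative, not tuned): the same loop structure also works with other optimizers and with a learning-rate schedule, and DataLoader can reshuffle the minibatches at every epoch:

```python
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)         # reshuffle every epoch
opt = optim.Adam(my_net.parameters(), lr=1e-3)
scheduler = optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.5)  # decay lr every 50 epochs
for epoch in range(epochs):
    for xb, yb in train_dl:
        loss = loss_func(my_net(xb), yb.argmax(axis=-1))
        loss.backward()
        opt.step()
        opt.zero_grad()
    scheduler.step()
```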
End of explanation
my_net = nn.Sequential(
nn.Linear(X_train_torch.size()[1], 200),
nn.ReLU(),
nn.Linear(200,50),
nn.ReLU(),
nn.Linear(50,20),
nn.ReLU(),
nn.Linear(20,y_train_torch.size()[1])
)
opt = optim.SGD(my_net.parameters(), lr=0.1)
epochs = 200
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
for xb, yb in train_dl:
#Compute network output and cross-entropy loss for current minibatch
pred = my_net(xb)
loss = loss_func(pred, yb.argmax(axis=-1))
#Compute gradients and optimize parameters
loss.backward()
opt.step()
opt.zero_grad()
#At the end of each epoch, evaluate overall network performance
with torch.no_grad():
#Computing network performance after iteration
pred = my_net(X_train_torch)
loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
print('Validation accuracy with this net:', acc_val[-1])
Explanation: <font color='olive'> 3.4.4. Multi Layer networks using nn.Sequential </font>
PyTorch simplifies considerably the implementation of neural network training, since we do not need to implement derivatives ourselves
We can also make a simpler implementation of multilayer networks using nn.Sequential function
It returns directly a network with the requested topology, including parameters and forward evaluation method
End of explanation
dataset = 'digits'
#Generate train and validation data, shuffle
X_train, X_val, y_train, y_val = train_test_split(digitsX[:,np.newaxis,:,:], digitsY, test_size=0.2, random_state=42, shuffle=True)
#Convert to Torch tensors
X_train_torch = torch.from_numpy(X_train)
X_val_torch = torch.from_numpy(X_val)
y_train_torch = torch.from_numpy(y_train)
y_val_torch = torch.from_numpy(y_val)
train_ds = TensorDataset(X_train_torch, y_train_torch)
train_dl = DataLoader(train_ds, batch_size=64)
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
my_net = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(my_net.parameters(), lr=0.1)
epochs = 2500
loss_train = np.zeros(epochs)
loss_val = np.zeros(epochs)
acc_train = np.zeros(epochs)
acc_val = np.zeros(epochs)
for epoch in range(epochs):
print(f'Current epoch: {epoch + 1} \r', end="")
for xb, yb in train_dl:
#Compute network output and cross-entropy loss for current minibatch
pred = my_net(xb)
loss = loss_func(pred, yb.argmax(axis=-1))
#Compute gradients and optimize parameters
loss.backward()
opt.step()
opt.zero_grad()
#At the end of each epoch, evaluate overall network performance
with torch.no_grad():
# Computing network performance after iteration
pred = my_net(X_train_torch)
loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()
acc_train[epoch] = accuracy(y_train_torch, pred).item()
pred_val = my_net(X_val_torch)
loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()
acc_val[epoch] = accuracy(y_val_torch, pred_val).item()
plt.figure(figsize=(14,5))
plt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')
plt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')
plt.show()
Explanation: <font color='teal'> 3.5. Generalization</font>
For complex network topologies (i.e., many parameters), network training can run into over-fitting issues
Some common strategies to avoid this are:
Early stopping
Dropout regularization
<center><a href="https://medium.com/analytics-vidhya/a-simple-introduction-to-dropout-regularization-with-code-5279489dda1e"><img src="figures/dropout.png" width="450"/>Image Source</a></center>
Data augmentation can also be used to avoid overfitting, as well as to achieve improved accuracy by providing the network some a priori expert knowledge
E.g., if image rotations and scalings do not affect the correct class, we could enlarge the dataset by creating artificial images with these transformations
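A sketch with torchvision transforms (the specific transforms and magnitudes are illustrative; they would be applied when building the image dataset):

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(10),                        # small random rotations
    transforms.RandomResizedCrop(64, scale=(0.9, 1.0)),   # mild random re-scaling / cropping
    transforms.ToTensor(),
])
```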
<font color='teal'> 3.6. Convolutional Networks for Image Processing </font>
PyTorch implements other layers that are better suited for different applications
In image processing, we normally resort to Convolutional Neural Networks, since they are able to capture the true spatial information of the image
<center><a href="https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html"><img src="figures/CNN.png" width="800"/>Image Source</a></center>
End of explanation |
10,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Backpropagation
Step1: Variables & Terminology
$W_{i}$ - weights of the $i$th layer
$B_{i}$ - biases of the $i$th layer
$L_{a}^{i}$ - activation (Inner product of weights and inputs of previous layer) of the $i$th layer.
$L_{o}^{i}$ - output of the $i$th layer. (This is $f(L_{a}^{i})$, where $f$ is the activation function)
MLP with one input, one hidden, one output layer
$X, y$ are the training samples
$\mathbf{W_{1}}$ and $\mathbf{W_{2}}$ are the weights for first (hidden) and the second (output) layer.
$\mathbf{B_{1}}$ and $\mathbf{B_{2}}$ are the biases for first (hidden) and the second (output) layer.
$L_{a}^{0} = L_{o}^{0}$, since the first (zeroth) layer is just the input.
Activations and outputs
$L_{a}^{1} = X\mathbf{W_{1}} + \mathbf{B_{1}}$
$L_{o}^{1} = \frac{1}{1 + e^{-L_{a}^{1}}}$
$L_{a}^{2} = L_{o}^{1}\mathbf{W_{2}} + \mathbf{B_{2}}$
$L_{o}^{2} = \frac{1}{1 + e^{-L_{a}^{2}}}$
Loss $E = \frac{1}{2} \sum_{S}(y - L_{o}^{2})^{2}$
Derivation of backpropagation learning rule
Step2: Exercise
Step3: Hints | Python Code:
from IPython.display import Image
Image("mlp.png", height=200, width=600)
Explanation: Backpropagation
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("LOc_y67AzCA")
import numpy as np
from utils import backprop_decision_boundary, backprop_make_classification, backprop_make_moons
from sklearn.metrics import accuracy_score
from theano import tensor as T
from theano import function, shared
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rc('figure', figsize=(8, 6))
%matplotlib inline
x, y = T.dmatrices('xy')
# weights and biases
w1 = shared(np.random.rand(2, 3), name="w1")
b1 = shared(np.random.rand(1, 3), name="b1")
w2 = shared(np.random.rand(3, 2), name="w2")
b2 = shared(np.random.rand(1, 2), name="b2")
# layer activations
l1_activation = T.dot(x, w1) + b1.repeat(x.shape[0], axis=0)
l1_output = 1.0 / (1 + T.exp(-l1_activation))
l2_activation = T.dot(l1_output, w2) + b2.repeat(l1_output.shape[0], axis=0)
l2_output = 1.0 / (1 + T.exp(-l2_activation))
# losses and gradients
loss = 0.5 * T.sum((y - l2_output) ** 2)
gw1, gb1, gw2, gb2 = T.grad(loss, [w1, b1, w2, b2])
# functions
alpha = 0.2
predict = function([x], l2_output)
train = function([x, y], loss, updates=[(w1, w1 - alpha * gw1), (b1, b1 - alpha * gb1),
(w2, w2 - alpha * gw2), (b2, b2 - alpha * gb2)])
# make dummy data
X, Y = backprop_make_classification()
backprop_decision_boundary(predict, X, Y)
y_hat = predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
for i in range(500):
l = train(X, Y)
if i % 100 == 0:
print(l)
backprop_decision_boundary(predict, X, Y)
y_hat = predict(X)
print("Accuracy: ", accuracy_score(np.argmax(Y, axis=1), np.argmax(y_hat, axis=1)))
Explanation: Variables & Terminology
$W_{i}$ - weights of the $i$th layer
$B_{i}$ - biases of the $i$th layer
$L_{a}^{i}$ - activation (Inner product of weights and inputs of previous layer) of the $i$th layer.
$L_{o}^{i}$ - output of the $i$th layer. (This is $f(L_{a}^{i})$, where $f$ is the activation function)
MLP with one input, one hidden, one output layer
$X, y$ are the training samples
$\mathbf{W_{1}}$ and $\mathbf{W_{2}}$ are the weights for first (hidden) and the second (output) layer.
$\mathbf{B_{1}}$ and $\mathbf{B_{2}}$ are the biases for first (hidden) and the second (output) layer.
$L_{a}^{0} = L_{o}^{0}$, since the first (zeroth) layer is just the input.
Activations and outputs
$L_{a}^{1} = X\mathbf{W_{1}} + \mathbf{B_{1}}$
$L_{o}^{1} = \frac{1}{1 + e^{-L_{a}^{1}}}$
$L_{a}^{2} = L_{o}^{1}\mathbf{W_{2}} + \mathbf{B_{2}}$
$L_{o}^{2} = \frac{1}{1 + e^{-L_{a}^{2}}}$
Loss $E = \frac{1}{2} \sum_{S}(y - L_{o}^{2})^{2}$
Derivation of backpropagation learning rule:
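A compact sketch of the gradients (an added illustration, using $\sigma'(t)=\sigma(t)(1-\sigma(t))$ for the logistic activations and $\odot$ for element-wise multiplication):
\begin{align}
\delta_2 &= \frac{\partial E}{\partial L_{a}^{2}} = (L_{o}^{2} - y) \odot L_{o}^{2} \odot (1 - L_{o}^{2}) \\
\frac{\partial E}{\partial \mathbf{W_{2}}} &= (L_{o}^{1})^{\top} \delta_2, \qquad \frac{\partial E}{\partial \mathbf{B_{2}}} = \sum_{S} \delta_2 \\
\delta_1 &= \left(\delta_2 \mathbf{W_{2}}^{\top}\right) \odot L_{o}^{1} \odot (1 - L_{o}^{1}) \\
\frac{\partial E}{\partial \mathbf{W_{1}}} &= X^{\top} \delta_1, \qquad \frac{\partial E}{\partial \mathbf{B_{1}}} = \sum_{S} \delta_1
\end{align}
In the code above these derivatives are not written by hand: Theano's T.grad derives them automatically from the computation graph, and the updates passed to the train function implement gradient descent with step $\alpha$.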
End of explanation
X, Y = backprop_make_moons()
plt.scatter(X[:, 0], X[:, 1], c=np.argmax(Y, axis=1))
Explanation: Exercise: Implement an MLP with two hidden layers, for the following dataset
End of explanation
# enter code here
Explanation: Hints:
Use two hidden layers, one containing 3 and the other containing 4 neurons
Use learning rate $\alpha$ = 0.2
Try to make the network converge in 1000 iterations
End of explanation |
10,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Target Function
Lets create a target 1-D function with multiple local maxima to test and visualize how the BayesianOptimization package works. The target function we will try to maximize is the following
Step1: Create a BayesianOptimization Object
Enter the target function to be maximized, its variable(s) and their corresponding ranges (see this example for a multi-variable case). A minimum number of 2 initial guesses is necessary to kick start the algorithms, these can either be random or user defined.
Step2: In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has the free parameter
$\kappa$ which control the balance between exploration and exploitation; we will set $\kappa=5$ which, in this case, makes the algorithm quite bold. Additionally we will use the cubic correlation in our Gaussian Process.
Step3: Plotting and visualizing the algorithm at each step
Lets first define a couple functions to make plotting easier
Step4: Two random points
After we probe two points at random, we can fit a Gaussian Process and start the bayesian optimization procedure. Two points should give us a uneventful posterior with the uncertainty growing as we go further from the observations.
Step5: After one step of GP (and two random points)
Step6: After two steps of GP (and two random points)
Step7: After three steps of GP (and two random points)
Step8: After four steps of GP (and two random points)
Step9: After five steps of GP (and two random points)
Step10: After six steps of GP (and two random points)
Step11: After seven steps of GP (and two random points) | Python Code:
def target(x):
return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1)
x = np.linspace(-2, 10, 1000)
y = target(x)
plt.plot(x, y)
Explanation: Target Function
Let's create a target 1-D function with multiple local maxima to test and visualize how the BayesianOptimization package works. The target function we will try to maximize is the following:
$$f(x) = e^{-(x - 2)^2} + e^{-\frac{(x - 6)^2}{10}} + \frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \in (-2, 10)$.
End of explanation
bo = BayesianOptimization(target, {'x': (-2, 10)})
Explanation: Create a BayesianOptimization Object
Enter the target function to be maximized, its variable(s) and their corresponding ranges (see this example for a multi-variable case). A minimum number of 2 initial guesses is necessary to kick start the algorithms, these can either be random or user defined.
End of explanation
gp_params = {'corr': 'cubic'}
bo.maximize(init_points=2, n_iter=0, acq='ucb', kappa=5, **gp_params)
Explanation: In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has the free parameter
$\kappa$ which controls the balance between exploration and exploitation; we will set $\kappa=5$ which, in this case, makes the algorithm quite bold. Additionally we will use the cubic correlation in our Gaussian Process.
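For reference (the standard form of this acquisition function): $\mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x)$, where $\mu(x)$ and $\sigma(x)$ are the posterior mean and standard deviation of the Gaussian Process at $x$; a larger $\kappa$ weights the uncertainty term more heavily and therefore favours exploration over exploitation.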
End of explanation
def posterior(bo, xmin=-2, xmax=10):
xmin, xmax = -2, 10
bo.gp.fit(bo.X, bo.Y)
mu, sigma2 = bo.gp.predict(np.linspace(xmin, xmax, 1000).reshape(-1, 1), eval_MSE=True)
return mu, np.sqrt(sigma2)
def plot_gp(bo, x, y):
fig = plt.figure(figsize=(16, 10))
fig.suptitle('Gaussian Process and Utility Function After {} Steps'.format(len(bo.X)), fontdict={'size':30})
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
axis = plt.subplot(gs[0])
acq = plt.subplot(gs[1])
mu, sigma = posterior(bo)
axis.plot(x, y, linewidth=3, label='Target')
axis.plot(bo.X.flatten(), bo.Y, 'D', markersize=8, label=u'Observations', color='r')
axis.plot(x, mu, '--', color='k', label='Prediction')
axis.fill(np.concatenate([x, x[::-1]]),
np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
alpha=.6, fc='c', ec='None', label='95% confidence interval')
axis.set_xlim((-2, 10))
axis.set_ylim((None, None))
axis.set_ylabel('f(x)', fontdict={'size':20})
axis.set_xlabel('x', fontdict={'size':20})
utility = bo.util.utility(x.reshape((-1, 1)), bo.gp, 0)
acq.plot(x, utility, label='Utility Function', color='purple')
acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
acq.set_xlim((-2, 10))
acq.set_ylim((0, np.max(utility) + 0.5))
acq.set_ylabel('Utility', fontdict={'size':20})
acq.set_xlabel('x', fontdict={'size':20})
axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
Explanation: Plotting and visualizing the algorithm at each step
Lets first define a couple functions to make plotting easier
End of explanation
plot_gp(bo, x, y)
Explanation: Two random points
After we probe two points at random, we can fit a Gaussian Process and start the Bayesian optimization procedure. Two points should give us an uneventful posterior with the uncertainty growing as we go further from the observations.
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After one step of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After two steps of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After three steps of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After four steps of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After five steps of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After six steps of GP (and two random points)
End of explanation
bo.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(bo, x, y)
Explanation: After seven steps of GP (and two random points)
End of explanation |
10,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
verify pyEMU Influence class
Step1: instaniate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian
Step2: pick just a few parameters since infstat is super picky about this
Step3: run pest one time since it is super picky about the format of the .res file
Step4: Write the .jco since pest just stomped on it | Python Code:
%matplotlib inline
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
Explanation: verify pyEMU Influence class
End of explanation
pst = pyemu.Pst("freyberg.pst")
pst.pestpp_options = {}
inf = pyemu.Influence(jco="freyberg.jcb",pst=pst,verbose=False)
inf.drop_prior_information()
Explanation: instaniate pyemu object and drop prior info. Then reorder the jacobian and save as binary. This is needed because the pest utilities require strict order between the control file and jacobian
End of explanation
par = inf.pst.parameter_data
adj_pars = par.loc[par.pargp=="hk","parnme"][:10].values.tolist()
fixed_pars = [pname for pname in inf.pst.adj_par_names if pname not in adj_pars]
jco_ord = inf.jco.get(inf.pst.obs_names,adj_pars)
ord_base = "freyberg_ord_infstat"
inf.pst.parameter_data.loc[fixed_pars,"partrans"] = "fixed"
#jco_ord.to_binary(ord_base + ".jco")
#inf.pst.control_data.parsaverun = ' '
inf.pst.control_data.noptmax = 0
inf.pst.write(ord_base+".pst")
Explanation: pick just a few parameters since infstat is super picky about this
End of explanation
os.system("pest.exe "+ord_base+'.pst')
Explanation: run pest one time since it is super picky about the format of the .res file
End of explanation
jco_ord.to_binary(ord_base + ".jco")
os.system("infstat freyberg_ord")
Explanation: Write the .jco since pest just stomped on it
End of explanation |
10,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel Approximations for Large-Scale Non-Linear Learning
Predictions in a kernel-SVM are made using the formular
$$
\hat{y} = \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel
Step1: Linear SVM
Step2: Kernel SVM
Step3: Kernel Approximation + Linear SVM
Step4: Out of core Kernel approximation | Python Code:
from helpers import Timer
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X, y = digits.data / 16., digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Explanation: Kernel Approximations for Large-Scale Non-Linear Learning
Predictions in a kernel-SVM are made using the formular
$$
\hat{y} = \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel:
$$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$
Kernel approximation $\phi$:
$$\phi(\mathbf{x})\phi(\mathbf{x'}) \approx k(\mathbf{x}, \mathbf{x'})$$
$$\hat{y} \approx w^T \phi(\mathbf{x})> 0$$
End of explanation
from sklearn.svm import LinearSVC
from sklearn.grid_search import GridSearchCV
grid = GridSearchCV(LinearSVC(random_state=0),
param_grid={'C': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Linear SVM
End of explanation
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
grid = GridSearchCV(SVC(), param_grid={'C': np.logspace(-3, 2, 6),
'gamma': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Kernel SVM
End of explanation
from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(RBFSampler(random_state=0),
LinearSVC(dual=False, random_state=0))
grid = GridSearchCV(pipe, param_grid={'linearsvc__C': np.logspace(-3, 2, 6),
'rbfsampler__gamma': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Kernel Approximation + Linear SVM
End of explanation
import cPickle
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(random_state=0)
for iteration in range(30):
for i in range(9):
X_batch, y_batch = cPickle.load(open("data/batch_%02d.pickle" % i))
sgd.partial_fit(X_batch, y_batch, classes=range(10))
X_test, y_test = cPickle.load(open("data/batch_09.pickle"))
sgd.score(X_test, y_test)
sgd = SGDClassifier(random_state=0)
rbf_sampler = RBFSampler(gamma=.2, random_state=0).fit(np.ones((1, 64)))
for iteration in range(30):
for i in range(9):
X_batch, y_batch = cPickle.load(open("data/batch_%02d.pickle" % i))
X_kernel = rbf_sampler.transform(X_batch)
sgd.partial_fit(X_kernel, y_batch, classes=range(10))
sgd.score(rbf_sampler.transform(X_test), y_test)
Explanation: Out of core Kernel approximation
End of explanation |
10,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting workbook
Plotting workbook to accompany the paper Carbon Capture and Storage Energy Systems vs. Dispatchable Renewables for Climate Mitigation
Step1: Eq. 6
Define EROI-CCS function (Eq. 6 in the paper)
Step2: Eq. 2
Define dispatchable EROI for renewables with storage (Eq. 2 in the paper)
Step3: Figure 3 & Table 3
Now we generate the contour lines for Figure 3, using the examples from Table 3.
We have two tyes of renewable energy plants, PV and Wind, each with two cases of EROI, taken from the lower and upper boundaries encountered in studies.
We have assumed a storage technology mixes taken from Table 1 of Barnhart et al 2013. Here we have the ESOI values reported for 7 energy storage storage technologies, 5 types of electrochemical storage (Batteries), Compressed air (CAES) and pumped hydro (PHS).
We have considered diffrent energy storage technology mixes for our PV and Wind examples. For PV, we have taken an equal mix of all 5 electrochemical storage technologies, without CAES or PHS (resulting in the pv share vector of [0.2,0.2,0.2,0.2,0.2,0,0]). For Wind, we have considered 50% battery storage (as an equal 10% from each technology), and 25-25% of CAES and PHS, resulting in the wind share vector of [0.1,0.1,0.1,0.1,0.1,0.25,0.25]).
Step4: Construct test cases
Step5: Construct test cases
Step6: Figure 3
Produce contour lines for multiple $f_{op}$ and $\phi$ levels and place the example plants on the contours. Calculate the $EROI_{CCS}$ and $EROI_{disp}$, respectively, for multiple base $EROI$ values. Likewise we obtain a plot for comparison between dispatchable (i.e. with storage) EROI for renewables and CCS EROI for fossil fuel based power plants.
Step7: Figure 4
For this figure we are considering various levels of powerplant efficiency, and its effect on the EROI , under various capture ratio assumptions. This is described in Eq. 1 and Eq. 4 in the paper.
Define parameters
Electricity penalty (b) curve from Sanpasternich (2009) Figure 8 [kWh/kg CO2]
Step8: Operational cost of coal fuel cycle over plant lifetime output [kWh]
Step9: Carbon intensity of electricity [kg CO2/kWh]
Step10: Energy penalty of CCS [dmnl]
Step11: Emissions
Step12: Energy cost of constructing CCS plant
Step13: EROIs
Energy cost of constructing CCS plant
Step14: Energy cost of operating CCS plant
Step15: Translation to quantities as per defined in the paper
Eq. 5
Step16: Eq. 4
Step17: Eq. 3
Step18: Eq. 1
Step19: Verification
Bactkrack to Eq. 3 from Eq. 5
Step20: Get back Eq. 4 from Eq. 5
Step21: Get back Eq. 3 from Eq. 4
Step22: Translate
Translate example notation into paper equation notations
Step23: ... ?
Construct from bottom up | Python Code:
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.font_manager as font_manager
plt.style.use('seaborn-darkgrid')
%matplotlib inline
prop = font_manager.FontProperties('Segoe UI')
from sympy import *
init_printing()
Explanation: Plotting workbook
Plotting workbook to accompany the paper Carbon Capture and Storage Energy Systems vs. Dispatchable Renewables for Climate Mitigation: A Bio-physical Comparison by Sgouridis, Dale, Csala, Chiesa & Bardi
End of explanation
var('EROI_CCS EROI f_op');
eq6=Eq(EROI_CCS,EROI/(1+f_op*EROI))
eq6
Explanation: Eq. 6
Define EROI-CCS function (Eq. 6 in the paper)
End of explanation
phi = Symbol('phi')
eta = Symbol('eta')
var('EROI_disp EROI ESOI phi eta');
eq2=Eq(EROI_disp,((1-phi)+(eta*phi))/((1/EROI)+(eta*phi/ESOI)))
eq2
Explanation: Eq. 2
Define dispatchable EROI for renewables with storage (Eq. 2 in the paper)
End of explanation
etas=[0.90,0.75,0.90,0.75,0.60,0.70,0.85] #from (Barnhart et al 2013)
S=[32,20,5,10,9,797,704] #from (Barnhart et al 2013)
shr0=[0.2,0.2,0.2,0.2,0.2,0,0] #assumed shares for the PV examples considered in this paper
shr1=[0.1,0.1,0.1,0.1,0.1,0.25,0.25] #assumed shares for the Wind examples considered in this paper
eta_pv=sum([etas[i]*shr0[i] for i in range(len(etas))]) #resulting composite eta for pv
eta_wind=sum([etas[i]*shr1[i] for i in range(len(etas))]) #resulting composite eta for wind
S_pv=sum([S[i]*shr0[i] for i in range(len(etas))]) #resulting composite S for pv
S_wind=sum([S[i]*shr1[i] for i in range(len(etas))]) #resulting composite S for wind
print 'Composite eta for pv',eta_pv
print 'Composite eta for wind',eta_wind
print 'Composite ESOI for pv',S_pv
print 'Composite ESOI for wind',S_wind
Explanation: Figure 3 & Table 3
Now we generate the contour lines for Figure 3, using the examples from Table 3.
We have two types of renewable energy plants, PV and Wind, each with two cases of EROI, taken from the lower and upper boundaries encountered in studies.
We have assumed storage technology mixes taken from Table 1 of Barnhart et al 2013. Here we have the ESOI values reported for 7 energy storage technologies: 5 types of electrochemical storage (batteries), compressed air (CAES) and pumped hydro (PHS).
We have considered different energy storage technology mixes for our PV and Wind examples. For PV, we have taken an equal mix of all 5 electrochemical storage technologies, without CAES or PHS (resulting in the PV share vector of [0.2,0.2,0.2,0.2,0.2,0,0]). For Wind, we have considered 50% battery storage (as an equal 10% from each technology) and 25-25% of CAES and PHS (resulting in the wind share vector of [0.1,0.1,0.1,0.1,0.1,0.25,0.25]).
End of explanation
#define pretty output formatter for sympy equation
def format_eq(eq):
return [solve(eq[i])[0] for i in range(len(eq))]
#return [float((repr(eq[i])[2:]).split(',')[1].strip()[:-1]) for i in range(len(eq))]
eroi_pv=[10,40] #EROEIe for case [F, G] in Table 3 of the paper, PV plant
eroi_disp_pv=format_eq([eq2.subs([[EROI,i],[phi,0.4],[eta,eta_pv],[ESOI,S_pv]]) for i in eroi_pv])
eroi_wind=[10,33] #EROEIe for case [H, I] in Table 3 of the paper, Wind plant
eroi_disp_wind=format_eq([eq2.subs([[EROI,i],[phi,0.4],[eta,eta_wind],[ESOI,S_wind]]) for i in eroi_wind])
print 'PV [F,G]'
eroi_disp_pv
print 'Wind [H,I]'
eroi_disp_wind
Explanation: Construct test cases: Renewables
End of explanation
eroi_gas=[9.6, 38.5] #EROEIe for case [A, B] in Table 3 of the paper, gas plant
eroi_ccs_gas=format_eq([eq6.subs([[EROI,i],[f_op,0.147]]) for i in eroi_gas])
eroi_coal=[8.9, 22.6] #EROEIe for case [D, E] in Table 3 of the paper, coal plant
eroi_ccs_coal=format_eq([eq6.subs([[EROI,i],[f_op,0.219]]) for i in eroi_coal])
print 'gas [A,B]'
eroi_ccs_gas
print 'coal [D,E]'
eroi_ccs_coal
Explanation: Construct test cases: Fossils
End of explanation
plt.figure(figsize=(5,4))
n=51
erois=range(1,n)
### Renewables contours
colors=['#addd8e',
'#78c679',
'#31a354',
'#006837']
plt.plot(0,0,lw=0,label='${\phi}$ (%)')
#plot EROI_disp contour lines for various EROI values and phi levels
#for eta and S assume a fully battery-based storage (worst case), i.e. the eta_pv and S_pv
pv=1.0 #(pv share in hypothetical renewable technologies)
phis=[0.1,0.2,0.3,0.4] #assumed levels for phi
for r in range(len(phis)):
phir=phis[r]
eroi_disps=format_eq([eq2.subs([[EROI,i],[phi,phir],[eta,pv*eta_pv+(1-pv)*eta_wind],[ESOI,pv*S_pv+(1-pv)*S_wind]]) for i in erois])
plt.plot(erois,eroi_disps,c=colors[r],label=int(phir*100))
plt.xlabel('EROI')
a=plt.gca()
plt.text(-0.07, 0.65, 'EROI$_{disp}$',
horizontalalignment='right',
verticalalignment='bottom',rotation=90,#fontproperties=prop,
size=11,color='#006837',transform=plt.gca().transAxes)
plt.text(-0.07, 0.27, 'EROI$_{CCS}$',
horizontalalignment='right',
verticalalignment='bottom',rotation=90,#fontproperties=prop,
size=11,color='#bd0036',transform=plt.gca().transAxes)
### Fossils contours
colors=['#fed976',
'#feb24c',
'#f03b20',
'#bd0036']
plt.plot(0,0,lw=0,label='$f_{op}$ (%)')
fops=[0.05,0.1,0.15,0.2] #assumed levels for fop
for r in range(len(fops)):
fopr=fops[r]
eroi_ccss=format_eq([eq6.subs([[EROI,i],[f_op,fopr]]) for i in erois])
plt.plot(erois,eroi_ccss,c=colors[r],label=int(fopr*100))
### Table 3 cases
cf="#f03b20"
cr="#31a354"
cg="#aaaaaa"
s=30 #marker size
plt.scatter(eroi_gas,eroi_ccs_gas,s,color=cf,marker='o')
plt.scatter(eroi_coal,eroi_ccs_coal,s,color=cf,marker='^')
plt.scatter(eroi_pv,eroi_disp_pv,s,color=cr,marker='d')
plt.scatter(eroi_wind,eroi_disp_wind,s,color=cr,marker='s')
plt.xlim(0,50)
plt.ylim(0,40)
#[F, G]
plt.text(eroi_pv[1], eroi_disp_pv[1]-1.5, 'PV +\nBattery',
horizontalalignment='center',
verticalalignment='top',
size=9,color=cr)
#[H, I]
plt.text(eroi_wind[1], eroi_disp_wind[1]+1.5, 'Wind + Battery +\n PHS + CAES',
horizontalalignment='center',
verticalalignment='bottom',
size=9,color=cr)
#[A, B]
plt.text(eroi_gas[1], eroi_ccs_gas[1]-2.2, 'Gas + CCS',
horizontalalignment='center',
verticalalignment='top',
size=9,color=cf)
#[D, E]
plt.text(eroi_coal[1], eroi_ccs_coal[1]-1, 'Coal + CCS',
horizontalalignment='center',
verticalalignment='top',
size=9,color=cf)
#Annotate cases
ap=0.7
plt.text(eroi_coal[0]+2, eroi_ccs_coal[0]-0.5, 'D',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cf,
bbox=dict(facecolor='white', edgecolor=cf, alpha=ap, boxstyle='round'))
plt.text(eroi_gas[0]+2, eroi_ccs_gas[0]+2, 'B',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cf,
bbox=dict(facecolor='white', edgecolor=cf, alpha=ap, boxstyle='round'))
plt.text(eroi_coal[1], eroi_ccs_coal[1]+3.3, 'E',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cf,
bbox=dict(facecolor='white', edgecolor=cf, alpha=ap, boxstyle='round'))
plt.text(eroi_gas[1], eroi_ccs_gas[1]+3.3, 'A',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cf,
bbox=dict(facecolor='white', edgecolor=cf, alpha=ap, boxstyle='round'))
plt.text(eroi_pv[1], eroi_disp_pv[1]+3.3, 'G',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cr,
bbox=dict(facecolor='white', edgecolor=cr, alpha=ap, boxstyle='round'))
plt.text(eroi_wind[0]+2.6, eroi_disp_wind[0], 'F',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cr,
bbox=dict(facecolor='white', edgecolor=cr, alpha=ap, boxstyle='round'))
plt.text(eroi_pv[0], eroi_disp_pv[0]+5.3, 'H',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cr,
bbox=dict(facecolor='white', edgecolor=cr, alpha=ap, boxstyle='round'))
plt.text(eroi_wind[1], eroi_disp_wind[1]-2, 'I',
horizontalalignment='center',
verticalalignment='top',
size=10,color=cr,
bbox=dict(facecolor='white', edgecolor=cr, alpha=ap, boxstyle='round'))
plt.legend(framealpha=0,loc=2,fontsize=9)
plt.savefig('fig3.png',bbox_inches = 'tight', facecolor='w', pad_inches = 0.1, dpi=200)
plt.show()
Explanation: Figure 3
Produce contour lines for multiple $f_{op}$ and $\phi$ levels and place the example plants on the contours. Calculate the $EROI_{CCS}$ and $EROI_{disp}$, respectively, for multiple base $EROI$ values. This way we obtain a plot for comparison between the dispatchable (i.e. with storage) EROI for renewables and the CCS EROI for fossil fuel based power plants.
End of explanation
#CCS_op #operating CCS penalty
#CR #capture ratio
var('CCS_op CR');
CCS_op_eq=Eq(CCS_op,25.957*CR**6 - 85.031*CR**5 + \
114.5*CR**4 - 80.385*CR**3 + \
31.47*CR**2 - 6.7725*CR + 1.1137)
CCS_op_eq
Explanation: Figure 4
For this figure we consider various levels of powerplant efficiency and their effect on the EROI, under various capture ratio assumptions. This is described in Eq. 1 and Eq. 4 in the paper.
Define parameters
Electricity penalty (b) curve from Sanpasternich (2009) Figure 8 [kWh/kg CO2]
End of explanation
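As a side illustration (not part of the original figure code), the fitted penalty curve above can be evaluated numerically by lambdifying its right-hand side:
b_of_CR = lambdify(CR, CCS_op_eq.rhs, 'numpy')    # turn the sympy fit into a numeric function of the capture ratio
for cr in [0.5, 0.7, 0.9]:
    print('CR = %.1f -> CCS operating penalty = %.4f' % (cr, b_of_CR(cr)))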
#PP_op_L #operational cost of coal fuel cycle over plant lifetime
#PP_op #operational cost of coal fuel cycle
#PP_CF #powerplant capacity factor
#PP_L #powerplant lifetime
#eta #powerplant efficiency
var('PP_op_L PP_op PP_CF eta PP_L');
PP_op_L_eq=Eq(PP_op_L,8760*PP_L*PP_CF*PP_op/eta)
PP_op_L_eq
Explanation: Operational cost of coal fuel cycle over plant lifetime output [kWh]
End of explanation
#C_CO2 #carbon dioxide content of coal [kg/MJ]
#Elec_CO2 #carbon intensity of electricity [kg CO2/kWh]
var('Elec_CO2 C_CO2');
Elec_CO2_eq=Eq(Elec_CO2,C_CO2*3.6/eta)
Elec_CO2_eq
Explanation: Carbon intensity of electricity [kg CO2/kWh]
End of explanation
#b #energy penalty of CCS
var('b');
b_eq=Eq(b,CCS_op*Elec_CO2*CR)
b_eq
Explanation: Energy penalty of CCS [dmnl]
End of explanation
#E #emissions
var('E');
E_eq=Eq(E,CCS_op*(1-CR))
E_eq
Explanation: Emissions
End of explanation
#CCS_cons_energy #energy cost of constructing CCS plant
#f_cap #energy cost share of constructing CCS plant
#PP_cons_energy #energy cost of power plant construction [MJ/MW], does not include energy embodied in materials
var('CCS_cons_energy PP_cons_energy f_cap');
CCS_cons_energy_eq=Eq(CCS_cons_energy,f_cap*PP_cons_energy)
CCS_cons_energy_eq
Explanation: Energy cost of constructing CCS plant
End of explanation
#EROI_1 #EROI where CCS construction energy is a ratio of power plant construction energy [MJ_e/MW/(MJ_p/MW) = dmnl]
#CCS_L #CCS plant lifetime
var('EROI_1 CCS_L');
EROI_1_eq=Eq(EROI_1,8760*PP_L*PP_CF*(1-b)*3.6/\
(PP_cons_energy + PP_op_L + CCS_cons_energy*PP_L/CCS_L))
EROI_1_eq
#EROI_1_adj #makes adjustment for electrical output [MJ_p/MJ_p = dmnl]
var('EROI_1_adj');
EROI_1_adj_eq=Eq(EROI_1_adj,EROI_1/0.3)
EROI_1_adj_eq
Explanation: EROIs
Energy cost of constructing CCS plant
End of explanation
#EROI_2 #where CCS construction energy is a ratio of CCS operation energy [MJ_e/MJ_p = dmnl]
var('EROI_2');
EROI_2_eq=Eq(EROI_2,8760*PP_L*PP_CF*(1-b)/\
(PP_cons_energy + PP_op_L + f_op*CCS_op + CCS_cons_energy*PP_L/CCS_L))
EROI_2_eq
#EROI_2_adj #makes adjustment for electrical output [MJ_p/MJ_p = dmnl]
var('EROI_2_adj');
EROI_2_adj_eq=Eq(EROI_2_adj,EROI_2/0.3)
EROI_2_adj_eq
Explanation: Energy cost of operating CCS plant
End of explanation
#R #E_op/E_cap
var('R');
eq5=Eq(EROI_CCS,(R+1)*EROI/(R+1+f_cap+f_op*(R+1)*EROI))
eq5
var('E_op E_cap');
R_eq=Eq(R,E_op/E_cap)
R_eq
Explanation: Translation to the quantities as defined in the paper
Eq. 5
End of explanation
var('E_CCS E_out E_red');
eq4=Eq(EROI_CCS,E_out/((1+f_cap)*E_cap+E_op+f_op*E_out))
eq4
f_op_eq=Eq(f_op,E_red/E_out)
f_op_eq
f_cap_eq=Eq(f_cap,E_CCS/E_cap)
f_cap_eq
Explanation: Eq. 4
End of explanation
eq3=Eq(EROI_CCS,E_out/(E_cap+E_op+E_red+E_CCS))
eq3
Explanation: Eq. 3
End of explanation
var('E_in');
eq1=Eq(EROI,E_out/E_in)
eq1
E_in_eq=Eq(E_in,E_cap+E_op)
E_in_eq
solve(E_in_eq,E_in)[0]
eq1.subs([[E_in,solve(E_in_eq,E_in)[0]]])
Explanation: Eq. 1
End of explanation
eq5
Explanation: Verification
Backtrack to Eq. 3 from Eq. 5
End of explanation
eq5.subs([[R,solve(R_eq,R)[0]],\
[EROI,solve(eq1.subs([[E_in,solve(E_in_eq,E_in)[0]]]),EROI)[0]]]).simplify()
eq4
Explanation: Get back Eq. 4 from Eq. 5
End of explanation
eq4.subs([[f_op,solve(f_op_eq,f_op)[0]],\
[f_cap,solve(f_cap_eq,f_cap)[0]]]).simplify()
eq3
Explanation: Get back Eq. 3 from Eq. 4
End of explanation
eq3
E_out_eq=Eq(E_out,8760*PP_L*PP_CF*(1-b))
E_out_eq
E_out_eq.subs([[b,solve(b_eq,b)[0]]])\
.subs([[CCS_op,solve(CCS_op_eq,CCS_op)[0]]])\
.subs([[Elec_CO2,solve(Elec_CO2_eq,Elec_CO2)[0]]])
Explanation: Translate
Translate example notation into paper equation notations
End of explanation
eq3
Explanation: ... ?
Construct from bottom up
End of explanation |
10,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decorators
Decorators can be thought of as functions which modify the functionality of another function. They help to make your code shorter and more "Pythonic".
To properly explain decorators we will slowly build up from functions. Make sure to restart Python and the Notebook for this lecture to look the same on your own computer. So let's break down the steps
Step1: Scope Review
Remember from the nested statements lecture that Python uses Scope to know what a label is referring to. For example
Step2: Remember that Python functions create a new scope, meaning the function has its own namespace to find variable names when they are mentioned within the function. We can check for local variables and global variables with the locals() and globals() functions. For example
Step3: Here we get back a dictionary of all the global variables, many of them are predefined in Python. So let's go ahead and look at the keys
Step4: Note how s is there, the Global Variable we defined as a string
Step5: Now let's run our function to check for any local variables in the func() (there shouldn't be any)
Step6: Great! Now let's continue with building out the logic of what a decorator is. Remember that in Python everything is an object. That means functions are objects which can be assigned labels and passed into other functions. Let's start with some simple examples
Step7: Assign a label to the function. Note that we are not using parentheses here because we are not calling the function hello; instead we are just putting it into the greet variable.
Step8: This assignment is not attached to the original function
Step9: Functions within functions
Great! So we've seen how we can treat functions as objects, now let's see how we can define functions inside of other functions
Step10: Note how due to scope, the welcome() function is not defined outside of the hello() function. Now let's learn about returning functions from within functions
Step11: Now let's see what function is returned if we set x = hello(); note how the closed parentheses mean that name has been defined as Jose.
Step12: Great! Now we can see how x is pointing to the greet function inside of the hello function.
Step13: Let's take a quick look at the code again.
In the if/else clause we are returning greet and welcome, not greet() and welcome().
This is because when you put a pair of parentheses after it, the function gets executed; whereas if you don't put parentheses after it, then it can be passed around and can be assigned to other variables without executing it.
When we write x = hello(), hello() gets executed and because the name is Jose by default, the function greet is returned. If we change the statement to x = hello(name = "Sam") then the welcome function will be returned. We can also do print hello()(), which outputs This is inside the greet() function.
Functions as Arguments
Now let's see how we can pass functions as arguments into other functions
Step14: Great! Note how we can pass the functions as objects and then use them within other functions. Now we can get started with writing our first decorator
Step15: So what just happened here? A decorator simply wrapped the function and modified its behavior. Now let's understand how we can rewrite this code using the @ symbol, which is what Python uses for Decorators
def func():
return 1
func()
Explanation: Decorators
Decorators can be thought of as functions which modify the functionality of another function. They help to make your code shorter and more "Pythonic".
To properly explain decorators we will slowly build up from functions. Make sure to restart Python and the Notebook for this lecture to look the same on your own computer. So let's break down the steps:
Functions Review
End of explanation
s = 'Global Variable'
def func():
print locals()
Explanation: Scope Review
Remember from the nested statements lecture that Python uses Scope to know what a label is referring to. For example:
End of explanation
print globals()
Explanation: Remember that Python functions create a new scope, meaning the function has its own namespace to find variable names when they are mentioned within the function. We can check for local variables and global variables with the locals() and globals() functions. For example:
End of explanation
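For instance (a small illustrative example, not part of the original lecture code), a function that defines its own variable will show it in locals() but not in globals():
def scope_demo():
    local_var = 'I am local'
    print(locals())                    # {'local_var': 'I am local'}

scope_demo()
print('local_var' in globals())        # False: the name never leaves the function's namespace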
print globals().keys()
Explanation: Here we get back a dictionary of all the global variables, many of them are predefined in Python. So let's go ahead and look at the keys:
End of explanation
globals()['s']
Explanation: Note how s is there, the Global Variable we defined as a string:
End of explanation
func()
Explanation: Now let's run our function to check for any local variables in the func() (there shouldn't be any)
End of explanation
def hello(name='Jose'):
return 'Hello '+name
hello()
Explanation: Great! Now let's continue with building out the logic of what a decorator is. Remember that in Python everything is an object. That means functions are objects which can be assigned labels and passed into other functions. Let's start with some simple examples:
End of explanation
greet = hello
greet
greet()
Explanation: Assign a label to the function. Note that we are not using parentheses here because we are not calling the function hello; instead we are just putting it into the greet variable.
End of explanation
del hello
hello()
greet()
Explanation: This assignment is not attached to the original function:
End of explanation
def hello(name='Jose'):
print 'The hello() function has been executed'
def greet():
return '\t This is inside the greet() function'
def welcome():
return "\t This is inside the welcome() function"
print greet()
print welcome()
print "Now we are back inside the hello() function"
hello()
welcome()
Explanation: Functions within functions
Great! So we've seen how we can treat functions as objects, now let's see how we can define functions inside of other functions:
End of explanation
def hello(name='Jose'):
def greet():
return '\t This is inside the greet() function'
def welcome():
return "\t This is inside the welcome() function"
if name == 'Jose':
return greet
else:
return welcome
x = hello()
Explanation: Note how due to scope, the welcome() function is not defined outside of the hello() function. Now let's learn about returning functions from within functions:
Returning Functions
End of explanation
x
Explanation: Now let's see what function is returned if we set x = hello(); note how the closed parentheses mean that name has been defined as Jose.
End of explanation
print x()
Explanation: Great! Now we can see how x is pointing to the greet function inside of the hello function.
End of explanation
def hello():
return 'Hi Jose!'
def other(func):
print 'Other code would go here'
print func()
other(hello)
Explanation: Let's take a quick look at the code again.
In the if/else clause we are returning greet and welcome, not greet() and welcome().
This is because when you put a pair of parentheses after it, the function gets executed; whereas if you don't put parentheses after it, then it can be passed around and can be assigned to other variables without executing it.
When we write x = hello(), hello() gets executed and because the name is Jose by default, the function greet is returned. If we change the statement to x = hello(name = "Sam") then the welcome function will be returned. We can also do print hello()(), which outputs This is inside the greet() function.
Functions as Arguments
Now let's see how we can pass functions as arguments into other functions:
End of explanation
def new_decorator(func):
def wrap_func():
print "Code would be here, before executing the func"
func()
print "Code here will execute after the func()"
return wrap_func
def func_needs_decorator():
print "This function is in need of a Decorator"
func_needs_decorator()
# Reassign func_needs_decorator
func_needs_decorator = new_decorator(func_needs_decorator)
func_needs_decorator()
Explanation: Great! Note how we can pass the functions as objects and then use them within other functions. Now we can get started with writing our first decorator:
Creating a Decorator
In the previous example we actually manually created a Decorator. Here we will modify it to make its use case clear:
End of explanation
@new_decorator
def func_needs_decorator():
print "This function is in need of a Decorator"
func_needs_decorator()
Explanation: So what just happened here? A decorator simply wrapped the function and modified its behavior. Now let's understand how we can rewrite this code using the @ symbol, which is what Python uses for Decorators:
End of explanation |
10,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Dataset API
Learning Objectives
1. Learn how use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes its data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
Step1: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section
Step2: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches of the dataset (batch_size)
and returns a tf.data.Dataset
Step3: Let's test our function by iterating twice over our dataset in batches of 3 datapoints
Step4: Loss function and gradients
The loss function and the function that computes the gradients are the same as before
Step5: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
Lab Task #2
Step6: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare dataset files have been saved into ../data.
Check that it is the case in the cell below, and, if not, regenerate the taxifare
dataset by running the previous lab notebook
Step7: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function
tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use
tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
Step8: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located
Step9: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Let's iterate over the first two elements of this dataset using dataset.take(2) and convert them to ordinary Python dictionaries with numpy arrays as values for more readability
Step10: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary
Step11: Let's iterate over 2 examples from our tempds dataset and apply our features_and_labels
function to each of the examples to make sure it's working
Step12: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented in order for our dataset to produce tuples of features and labels.
Lab Task #4b
Step13: Let's test that our batches are of the right size
Step14: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == tf.estimator.ModeKeys.TRAIN) from when it shouldn't (mode == tf.estimator.ModeKeys.EVAL).
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speedup training
Step15: Let's check that our function works well in both modes
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
import json
import math
import os
from pprint import pprint
import numpy as np
import tensorflow as tf
print(tf.version.VERSION)
Explanation: TensorFlow Dataset API
Learning Objectives
1. Learn how use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes its data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
End of explanation
N_POINTS = 10
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
Explanation: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section:
End of explanation
# TODO 1
def create_dataset(X, Y, epochs, batch_size):
dataset = # TODO: Your code goes here.
dataset = # TODO: Your code goes here.
return dataset
Explanation: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches of the dataset (batch_size)
and returns a tf.data.Dataset:
Remark: Note that the last batch may not contain the exact number of elements you specified because the dataset was exhausted.
If you want batches with the exact same number of elements per batch, we will have to discard the last batch by
setting:
python
dataset = dataset.batch(batch_size, drop_remainder=True)
We will do that here.
Lab Task #1: Complete the code below to
1. instantiate a tf.data dataset using tf.data.Dataset.from_tensor_slices.
2. Set up the dataset to
* repeat epochs times,
* create a batch of size batch_size, ignoring extra elements when the batch does not divide the number of input elements evenly.
End of explanation
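For intuition, here is a toy illustration (separate from the lab solution): repeating and batching a small range dataset with drop_remainder=True silently drops the final short batch.
toy = tf.data.Dataset.range(10).repeat(2).batch(3, drop_remainder=True)
for batch in toy:
    print(batch.numpy())   # 6 full batches of 3; the leftover 2 elements are discarded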
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
assert EPOCH
Explanation: Let's test our function by iterating twice over our dataset in batches of 3 datapoints:
End of explanation
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
Explanation: Loss function and gradients
The loss function and the function that computes the gradients are the same as before:
End of explanation
# TODO 2
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = # TODO: Your code goes here.
for step, (X_batch, Y_batch) in # TODO: Your code goes here.
dw0, dw1 = #TODO: Your code goes here.
#TODO: Your code goes here.
#TODO: Your code goes here.
if step % 100 == 0:
loss = #TODO: Your code goes here.
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
Explanation: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
Lab Task #2: Complete the code in the cell below to call your dataset above when training the model. Note that the step, (X_batch, Y_batch) iterates over the dataset. The inside of the for loop should be exactly as in the previous lab.
End of explanation
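The iteration pattern itself is plain Python; a tf.data.Dataset can be enumerated directly, as in this toy sketch (independent of the lab solution):
toy = tf.data.Dataset.from_tensor_slices((tf.range(6), tf.range(6) * 2)).batch(2)
for step, (x_batch, y_batch) in enumerate(toy):
    print(step, x_batch.numpy(), y_batch.numpy())   # step counter plus one (features, labels) batch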
!ls -l ../data/taxi*.csv
Explanation: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare datast files been saved into ../data.
Check that it is the case in the cell below, and, if not, regenerate the taxifare
dataset by running the provious lab notebook:
End of explanation
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
Explanation: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function
tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use
tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
End of explanation
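For reference, a minimal call to the helper looks like the sketch below (the keyword names follow the TF 2.0 signature, the column lists are the ones defined above, and the file pattern is only an example):
example_ds = tf.data.experimental.make_csv_dataset(
    file_pattern='../data/taxi-train*',   # glob matching the CSV shards
    batch_size=1,
    column_names=CSV_COLUMNS,
    column_defaults=DEFAULTS)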
# TODO 3
def create_dataset(pattern):
# TODO: Your code goes here.
return dataset
tempds = create_dataset('../data/taxi-train*')
print(tempds)
Explanation: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located:
Lab Task #3: Complete the code in the create_dataset(...) function below to return a tf.data dataset made from the make_csv_dataset. Have a look at the documentation here. The pattern will be given as an argument of the function but you should set the batch_size, column_names and column_defaults.
End of explanation
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
Explanation: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Let's iterate over the first two elements of this dataset using dataset.take(2) and convert them to ordinary Python dictionaries with numpy arrays as values for more readability:
End of explanation
UNWANTED_COLS = ['pickup_datetime', 'key']
# TODO 4a
def features_and_labels(row_data):
label = # TODO: Your code goes here.
features = # TODO: Your code goes here.
# TODO: Your code goes here.
return features, label
Explanation: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary:
Remove the unwanted column "key"
Keep the label separate from the features
Let's first implement a function that takes as input a row (represented as an OrderedDict in our tf.data.Dataset as above) and then returns a tuple with two elements:
The first element being the same OrderedDict with the label dropped
The second element being the label itself (fare_amount)
Note that we will need to also remove the key and pickup_datetime column, which we won't use.
Lab Task #4a: Complete the code in the features_and_labels(...) function below. Your function should return a dictionary of features and a label. Keep in mind row_data is already a dictionary and you will need to remove the pickup_datetime and key from row_data as well.
End of explanation
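Conceptually the transformation is just dictionary surgery; here is a generic sketch on a plain Python dict (the row values are made up, and this is not the lab's tensor-based solution):
example_row = {'fare_amount': 12.5, 'pickup_longitude': -73.99,
               'pickup_datetime': 'na', 'key': 'abc'}
label = example_row.pop('fare_amount')    # separate the label...
for col in ['pickup_datetime', 'key']:    # ...and drop the unwanted columns
    example_row.pop(col)
print(example_row, label)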
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
Explanation: Let's iterate over 2 examples from our tempds dataset and apply our features_and_labels
function to each of the examples to make sure it's working:
End of explanation
# TODO 4b
def create_dataset(pattern, batch_size):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO: Your code goes here.
return dataset
Explanation: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented in order for our dataset to produce tuples of features and labels.
Lab Task #4b: Complete the code in the create_dataset(...) function below to return a tf.data dataset made from the make_csv_dataset. Now, the pattern and batch_size will be given as an arguments of the function but you should set the column_names and column_defaults as before. You will also apply a .map(...) method to create features and labels from each example.
End of explanation
BATCH_SIZE = 2
tempds = create_dataset('../data/taxi-train*', batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
Explanation: Let's test that our batches are of the right size:
End of explanation
# TODO 4c
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = # TODO: Your code goes here.
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = # TODO: Your code goes here.
# take advantage of multi-threading; 1=AUTOTUNE
dataset = # TODO: Your code goes here.
return dataset
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == tf.estimator.ModeKeys.TRAIN) from when it shouldn't (mode == tf.estimator.ModeKeys.EVAL).
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speedup training:
Lab Task #4c: The last step of our tf.data dataset will specify shuffling and repeating of our dataset pipeline. Complete the code below to add these three steps to the Dataset pipeline
1. follow the .map(...) operation which extracts features and labels with a call to .cache() the result.
2. during training, use .shuffle(...) and .repeat() to shuffle batches and repeat the dataset
3. use .prefetch(...) to take advantage of multi-threading and speedup training.
End of explanation
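The same train/eval branching can be seen on a toy dataset (illustrative only; the buffer sizes here are arbitrary):
toy = tf.data.Dataset.range(8).cache()
toy_train = toy.shuffle(buffer_size=8).repeat().prefetch(1)   # training: shuffled, repeated indefinitely, prefetched
toy_eval = toy.prefetch(1)                                    # evaluation: finite and in order
print(next(iter(toy_eval)).numpy())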
tempds = create_dataset('../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)
print(list(tempds.take(1)))
tempds = create_dataset('../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)
print(list(tempds.take(1)))
Explanation: Let's check that our function works well in both modes:
End of explanation |
10,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='inputs_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
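The leaky ReLU used above is just an elementwise maximum; written as a standalone helper it would look like this (an equivalent alternative to the inline tf.maximum call, not something the exercise requires):
def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for positive inputs, slope alpha for negative ones
    return tf.maximum(alpha * x, x)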
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
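For intuition, the quantity that tf.nn.sigmoid_cross_entropy_with_logits computes is the familiar cross-entropy formula written out below in NumPy (a sketch only; TensorFlow uses a numerically more stable form internally):
def sigmoid_xent(logits, labels):
    probs = 1 / (1 + np.exp(-logits))
    return -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

print(sigmoid_xent(np.array([2.0, -1.0]), np.array([0.9, 0.0])))   # a smoothed real label (0.9) and a fake label (0.0)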
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
10,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear models for regression
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Lasso (L1 penalty)
Step4: Linear models for classification
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_ > 0
The influence of C in LinearSVC
Step5: Multi-Class linear classification
Step6: Exercises
Use GridSearchCV to tune the parameter C of LinearSVC on the digits dataset.
Compare l1 penalty and l2 penalty by plotting the coefficients as above for the digits dataset. Classify odd vs even digits to make it a binary task. | Python Code:
from sklearn.datasets import make_regression
from sklearn.cross_validation import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
true_coefficient
X.shape, y.shape
plt.plot(X[:,1], y, 'bo', markersize=4);
Explanation: Linear models for regression
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
End of explanation
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
Explanation: Linear Regression
End of explanation
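# Editor's illustration (not in the original notebook): the fitted model's predictions are
# exactly the formula given above, i.e. a dot product with coef_ plus intercept_.
manual_pred = np.dot(X_test[0], linear_regression.coef_) + linear_regression.intercept_
print(manual_pred, linear_regression.predict(X_test[:1])[0])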
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
Explanation: Ridge Regression (L2 penalty)
End of explanation
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
Explanation: Lasso (L1 penalty)
End of explanation
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
Explanation: Linear models for classification
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_ > 0
The influence of C in LinearSVC
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
Explanation: Multi-Class linear classification
End of explanation
# %load solutions/linear_models.py
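# Editor's sketch of one possible solution (the course's solutions/linear_models.py is not
# shown here): tune C for LinearSVC on the digits dataset, classifying odd vs. even digits.
from sklearn.datasets import load_digits
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection.GridSearchCV in newer versions
digits = load_digits()
X_digits, y_digits = digits.data, digits.target % 2  # odd (1) vs. even (0) digits
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(LinearSVC(), param_grid, cv=5)
grid.fit(X_digits, y_digits)
print(grid.best_params_, grid.best_score_)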
Explanation: Exercises
Use GridSearchCV to tune the parameter C of LinearSVC on the digits dataset.
Compare l1 penalty and l2 penalty by plotting the coefficients as above for the digits dataset. Classify odd vs even digits to make it a binary task.
End of explanation |
10,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Radio Frequency Interference mitigation using deep convolutional neural networks
This example demonstrates how tf_unet is trained on the 'Bleien Galactic Survey data'.
To create the training data the SEEK package (https
Step1: preparing training data
only one day...
Step2: setting up the unet
Step3: training the network
only one epoch. For good results many more are necessary
Step4: running the prediction on the trained unet | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import glob
plt.rcParams['image.cmap'] = 'gist_earth'
Explanation: Radio Frequency Interference mitigation using deep convolutional neural networks
This example demonstrates how tf_unet is trained on the 'Bleien Galactic Survey data'.
To create the training data the SEEK package (https://github.com/cosmo-ethz/seek) has to be installed
End of explanation
!wget -q -r -nH -np --cut-dirs=2 https://people.phys.ethz.ch/~ipa/cosmo/bgs_example_data/
!mkdir -p bgs_example_data/seek_cache
!seek --file-prefix='./bgs_example_data' --post-processing-prefix='bgs_example_data/seek_cache' --chi-1=20 --overwrite=True seek.config.process_survey_fft
Explanation: preparing training data
only one day...
End of explanation
from scripts.rfi_launcher import DataProvider
from tf_unet import unet
files = glob.glob('bgs_example_data/seek_cache/*')
data_provider = DataProvider(600, files)
net = unet.Unet(channels=data_provider.channels,
n_class=data_provider.n_class,
layers=3,
features_root=64,
cost_kwargs=dict(regularizer=0.001),
)
Explanation: setting up the unet
End of explanation
trainer = unet.Trainer(net, optimizer="momentum", opt_kwargs=dict(momentum=0.2))
path = trainer.train(data_provider, "./unet_trained_bgs_example_data",
training_iters=32,
epochs=1,
dropout=0.5,
display_step=2)
Explanation: training the network
only one epoch. For good results many more are necessary
End of explanation
data_provider = DataProvider(10000, files)
x_test, y_test = data_provider(1)
prediction = net.predict(path, x_test)
fig, ax = plt.subplots(1,3, figsize=(12,4))
ax[0].imshow(x_test[0,...,0], aspect="auto")
ax[1].imshow(y_test[0,...,1], aspect="auto")
ax[2].imshow(prediction[0,...,1], aspect="auto")
Explanation: running the prediction on the trained unet
End of explanation |
10,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Le but de cet exemple est de calculer, pour chaque décile de revenu, la part de leur consommation que les ménages accordent à chaque catégorie de bien. Les catégories suivent le niveau le plus agrégé de la nomenclature COICOP.
Import de modules généraux
Step1: Import of Openfisca-specific modules
Step2: Import of a new colour palette
Step3: Building the dataframe and producing the graphs | Python Code:
from __future__ import division
import pandas
import seaborn
Explanation: The aim of this example is to compute, for each income decile, the share of their consumption that households devote to each category of goods. The categories follow the most aggregated level of the COICOP nomenclature.
Import of general modules
End of explanation
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar
from openfisca_france_indirect_taxation.surveys import SurveyScenario
Explanation: Import of Openfisca-specific modules
End of explanation
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
Explanation: Import of a new colour palette
End of explanation
simulated_variables = ['coicop12_{}'.format(coicop12_index) for coicop12_index in range(1, 13)]
for year in [2000, 2005, 2011]:
survey_scenario = SurveyScenario.create(year = year)
pivot_table = pandas.DataFrame()
for values in simulated_variables:
pivot_table = pandas.concat([
pivot_table,
survey_scenario.compute_pivot_table(values = [values], columns = ['niveau_vie_decile'])
])
df = pivot_table.T
df['depenses_tot'] = df[['coicop12_{}'.format(i) for i in range(1, 13)]].sum(axis = 1)
for i in range(1, 13):
df['part_coicop12_{}'.format(i)] = \
df['coicop12_{}'.format(i)] / df['depenses_tot']
print 'Profil de la consommation des ménages en {}'.format(year)
graph_builder_bar(df[['part_coicop12_{}'.format(i) for i in range(1, 13)]])
Explanation: Building the dataframe and producing the graphs
End of explanation |
10,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Update TOC trends analysis
Tore has previously written code to calculate Mann-Kendall (M-K) trend statistics and Sen's slope estimates for data series in RESA2. According to my notes from a meeting with Tore on 13/05/2016, the workflow goes something like this
Step3: 2. Statistical functions
Looking at the output in the ICPW_STATISTICS3 table of RESA2, we need to calculate the following statistcs (only some of which are output by the Excel macro)
Step4: 3. Perform comparison
Step5: And below is the output from the Excel macro for comparison.
Step6: My code gives near-identical results to those from the Excel macro, although there are a few edge cases that might be worth investigating further. For example, if there are fewer than 10 non-null values, my code currently prints a warning. I'm not sure exactly what the Excel macro does yet, but in general it seems that for fewer than 10 values it's necessary to use a lookup table (see e.g. the Instructions sheet of the file here).
4. Get data from RESA2
The next step is to read the correct data directly from RESA2 and summarise it to look like raw_df, above. Start off by connecting to the database.
Step7: Looking at the ICPW_STATISTICS table in RESA2, it seems as though trends have been assessed for 14 parameters and several different time periods for each site of interest. The length and number of time periods vary from site to site, so I'll need to check with Heleen regarding how these varaibles should be chosen. The 14 parameters are as follows
Step8: 4.1.2. Sea-salt corrected values
The Xs are sea-salt corrected values (also sometimes denoted with an asterisk e.g. Ca*). They are calculated by comparison to chloride concentrations, which are generall assumed to be conservative. The usual equation is
Step9: 4.3. Extract time series
The next step is to get time series for the desired parameters for each of these stations.
Step10: 4.4. Aggregate to annual
Step11: 4.4. Convert units and apply sea-salt correction
I haven't calculated all 14 parameters here, as I'm not sure exactly what they all are. The ones I'm reasonably certain of are included below.
Step13: 4.5. Calculate trends
Step14: 5. Compare to previous trends analysis
This seems to be working OK so far, but I need to do some more testing to see that my results more-or-less agree with those calculated previously by Tore. As a start, let's compare the results above with those in the ICPW_STATISTICS3 table of RESA2, which is where (I think) Tore has saved his previous output.
Step15: For e.g. site 23499, I can now re-run my code for the period from 1990 to 2004 and compare my results to those above. | Python Code:
# Read data and results from the Excel macro
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Data\mk_sen_test_data.xlsx')
raw_df = pd.read_excel(in_xlsx, sheetname='input')
res_df = pd.read_excel(in_xlsx, sheetname='results')
raw_df
res_df
Explanation: Update TOC trends analysis
Tore has previously written code to calculate Mann-Kendall (M-K) trend statistics and Sen's slope estimates for data series in RESA2. According to my notes from a meeting with Tore on 13/05/2016, the workflow goes something like this:
Run code to extract and summarise time series from RESA2, insert this data into Mann-Kendall_Sen.xls, then read the results back into a new table in RESA2 called e.g. ICPW_STATISTICS. <br><br>
Run the ICPStat query in Icp-waters2001_2000.accdb to summarise the data in ICPW_STATISTICS. This creates a new table currently called aaa, but Tore says he'll rename it to something more descriptive before he leaves. <br><br>
Run the export() subroutine in the Export module of Icp-waters2001_2000.accdb to reformat the aaa table and write the results to an Excel file.
Mann-Kendall_Sen.xls is an early version of the popular Excel macro MULTIMK/CONDMK, which Tore has modified slightly for use in this analysis. (A more recent version of the same file is available here). This Excel macro permits some quite sophisticated multivariate and conditional analyses, but as far as I can tell the TOC trends code is only making use of the most basic functionality - performing repeated independent trend tests on annually summarised time series.
Unfortunately, although the workflow above makes sense, I've so far failed to find and run Tore's code for step 1 (I can find everything for steps 2 and 3, but not the code for interacting with the Excel workbook). It also seems a bit messy to be switching back and forth between RESA2, Excel and Access in this way, so the code here is a first step towards refactoring the whole analysis into Python.
1. Test data
The Mann-Kendall_Sen.xls file on the network already had some example ICPW data in it, which I can use to test my code. The raw input data and the results obtained from the Excel macro are saved as mk_sen_test_data.xlsx.
End of explanation
def mk_test(x, stn_id, par, alpha=0.05):
Adapted from http://pydoc.net/Python/ambhas/0.4.0/ambhas.stats/
by Sat Kumar Tomer.
Perform the MK test for monotonic trends. Uses the "normal
approximation" to determine significance and therefore should
only be used if the number of values is >= 10.
Args:
x: 1D array of data
name: Name for data series (string)
alpha: Significance level
Returns:
var_s: Variance of test statistic
s: M-K test statistic
z: Normalised test statistic
p: p-value of the significance test
trend: Whether to reject the null hypothesis (no trend) at
the specified significance level. One of:
'increasing', 'decreasing' or 'no trend'
import numpy as np
from scipy.stats import norm
n = len(x)
if n < 10:
print (' Data series for %s at site %s has fewer than 10 non-null values. '
'Significance estimates may be unreliable.' % (par, int(stn_id)))
# calculate S
s = 0
for k in xrange(n-1):
for j in xrange(k+1,n):
s += np.sign(x[j] - x[k])
# calculate the unique data
unique_x = np.unique(x)
g = len(unique_x)
# calculate the var(s)
if n == g: # there is no tie
var_s = (n*(n-1)*(2*n+5))/18.
else: # there are some ties in data
tp = np.zeros(unique_x.shape)
for i in xrange(len(unique_x)):
tp[i] = sum(unique_x[i] == x)
# Sat Kumar's code has "+ np.sum", which is incorrect
var_s = (n*(n-1)*(2*n+5) - np.sum(tp*(tp-1)*(2*tp+5)))/18.
if s>0:
z = (s - 1)/np.sqrt(var_s)
elif s == 0:
z = 0
elif s<0:
z = (s + 1)/np.sqrt(var_s)
else:
z = np.nan
# calculate the p_value
p = 2*(1-norm.cdf(abs(z))) # two tail test
h = abs(z) > norm.ppf(1-alpha/2.)
if (z<0) and h:
trend = 'decreasing'
elif (z>0) and h:
trend = 'increasing'
elif np.isnan(z):
trend = np.nan
else:
trend = 'no trend'
return var_s, s, z, p, trend
def wc_stats(raw_df, st_yr=None, end_yr=None):
Calculate key statistics for the TOC trends analysis:
'station_id'
'par_id'
'non_missing'
'median'
'mean'
'std_dev'
'period'
'mk_std_dev'
'mk_stat'
'norm_mk_stat'
'mk_p_val'
'trend'
'sen_slp'
Args:
raw_df: Dataframe with annual data for a single station. Columns must
be: [station_id, year, par1, par2, ... parn]
st_yr: First year to include in analysis. Pass None to start
at the beginning of the series
end_year: Last year to include in analysis. Pass None to start
at the beginning of the series
Returns:
df of key statistics.
import numpy as np, pandas as pd
from scipy.stats import theilslopes
# Checking
df = raw_df.copy()
assert list(df.columns[:2]) == ['STATION_ID', 'YEAR'], 'Columns must be: [STATION_ID, YEAR, par1, par2, ... parn]'
assert len(df['STATION_ID'].unique()) == 1, 'You can only process data for one site at a time'
# Get just the period of interest
if st_yr:
df = df.query('YEAR >= @st_yr')
if end_yr:
df = df.query('YEAR <= @end_yr')
# Get stn_id
stn_id = df['STATION_ID'].iloc[0]
# Tidy up df
df.index = df['YEAR']
df.sort_index(inplace=True)
del df['STATION_ID'], df['YEAR']
# Container for results
data_dict = {'station_id':[],
'par_id':[],
'non_missing':[],
'median':[],
'mean':[],
'std_dev':[],
'period':[],
'mk_std_dev':[],
'mk_stat':[],
'norm_mk_stat':[],
'mk_p_val':[],
'trend':[],
'sen_slp':[]}
# Loop over pars
for col in df.columns:
# 1. Station ID
data_dict['station_id'].append(stn_id)
# 2. Par ID
data_dict['par_id'].append(col)
# 3. Non-missing
data_dict['non_missing'].append(pd.notnull(df[col]).sum())
# 4. Median
data_dict['median'].append(df[col].median())
# 5. Mean
data_dict['mean'].append(df[col].mean())
# 6. Std dev
data_dict['std_dev'].append(df[col].std())
# 7. Period
st_yr = df.index.min()
end_yr = df.index.max()
per = '%s-%s' % (st_yr, end_yr)
data_dict['period'].append(per)
# 8. M-K test
# Drop missing values
mk_df = df[[col]].dropna(how='any')
# Only run stats if more than 1 valid value
if len(mk_df) > 1:
var_s, s, z, p, trend = mk_test(mk_df[col].values, stn_id, col)
data_dict['mk_std_dev'].append(np.sqrt(var_s))
data_dict['mk_stat'].append(s)
data_dict['norm_mk_stat'].append(z)
data_dict['mk_p_val'].append(p)
data_dict['trend'].append(trend)
# 8. Sen's slope
# First element of output gives median slope. Other results could
# also be useful - see docs
sslp = theilslopes(mk_df[col].values, mk_df.index, 0.95)[0]
data_dict['sen_slp'].append(sslp)
# Otherwise all NaN
else:
for par in ['mk_std_dev', 'mk_stat', 'norm_mk_stat',
'mk_p_val', 'trend', 'sen_slp']:
data_dict[par].append(np.nan)
# Build to df
res_df = pd.DataFrame(data_dict)
res_df = res_df[['station_id', 'par_id', 'period', 'non_missing',
'mean', 'median', 'std_dev', 'mk_stat', 'norm_mk_stat',
'mk_p_val', 'mk_std_dev', 'trend', 'sen_slp']]
return res_df
Explanation: 2. Statistical functions
Looking at the output in the ICPW_STATISTICS3 table of RESA2, we need to calculate the following statistics (only some of which are output by the Excel macro):
Number of non-missing values
Median
Mean
Period over which data are available (start and end years)
Standard deviation (of the data)
Standard deviation (expected under the null hypothesis of the M-K test)
M-K statistic
Normalised M-K statistic $\left(= \frac{M-K \; statistic}{Standard \; deviation} \right)$
M-K p-value
Sen's slope (a.k.a. the Theil-Sen slope)
Most of these should be quite straightforward. We'll start off by defining a function to calculate the M-K statistic (note that Scipy already has a function for the Theil-Sen slope). We'll also define another function to bundle everything together and return a dataframe of the results.
End of explanation
# Run analysis on test data and print results
out_df = wc_stats(raw_df)
del out_df['station_id']
out_df
Explanation: 3. Perform comparison
End of explanation
res_df
Explanation: And below is the output from the Excel macro for comparison.
End of explanation
# Use custom RESA2 function to connect to db
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()
Explanation: My code gives near-identical results to those from the Excel macro, although there are a few edge cases that might be worth investigating further. For example, if there are fewer than 10 non-null values, my code currently prints a warning. I'm not sure exactly what the Excel macro does yet, but in general it seems that for fewer than 10 values it's necessary to use a lookup table (see e.g. the Instructions sheet of the file here).
4. Get data from RESA2
The next step is to read the correct data directly from RESA2 and summarise it to look like raw_df, above. Start off by connecting to the database.
End of explanation
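# Editor's sketch (not part of the original workflow): for the n < 10 caveat above, where the
# normal approximation is unreliable, an exact two-sided p-value for the M-K S statistic can be
# obtained by enumerating its null distribution over all orderings of the data (feasible, if
# slow, for n <= 9). This is the brute-force equivalent of the Excel macro's lookup table.
from itertools import permutations
def mk_exact_p(x):
    x = np.asarray(x)
    n = len(x)
    def s_stat(v):
        # Same S definition as in mk_test() above
        return sum(np.sign(v[j] - v[k]) for k in range(n - 1) for j in range(k + 1, n))
    s_obs = s_stat(x)
    null = [s_stat(x[list(p)]) for p in permutations(range(n))]
    return np.mean(np.abs(null) >= abs(s_obs))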
# Tabulate chemical properties
chem_dict = {'molar_mass':[96, 35, 40, 24, 14],
'valency':[2, 1, 2, 2, 1],
'resa2_ref_ratio':[0.103, 1., 0.037, 0.196, 'N/A']}
chem_df = pd.DataFrame(chem_dict, index=['SO4', 'Cl', 'Ca', 'Mg', 'NO3-N'])
chem_df = chem_df[['molar_mass', 'valency', 'resa2_ref_ratio']]
chem_df
Explanation: Looking at the ICPW_STATISTICS table in RESA2, it seems as though trends have been assessed for 14 parameters and several different time periods for each site of interest. The length and number of time periods vary from site to site, so I'll need to check with Heleen regarding how these variables should be chosen. The 14 parameters are as follows:
ESO4
ESO4X
ECl
ESO4Cl
TOC_DOC
ECaEMg
ECaXEMgX
ENO3
Al
ANC
ALK
HPLUS
ESO4EClENO3
ENO3DIVENO3ESO4X
Many of these quantities are unfamiliar to me, but presumably the equations for calculating them can be found in Tore's code (which I can't find at present). Check with Heleen whether all of these are still required and find equations as necessary.
The other issue is how to aggregate the values in the database from their original temporal resolution to annual summaries. I assume the median annual value is probably appropriate in most cases, but it would be good to know what Tore did previously.
For now, I'll focus on:
Extracting the data from the database for a specified time period, <br><br>
Calculating the required water chemistry parameters, <br><br>
Taking annual medians and <br><br>
Estimating the trend statistics.
It should then be fairly easy to modify this code later as necessary.
4.1. Equations
Some of the quantities listed above are straightforward to calculate.
4.1.1. Micro-equivalents per litre
The Es in the parameter names are just unit conversions to micro-equivalents per litre:
$$EPAR \; (\mu eq/l) = \frac{1 \times 10^6 \times valency}{molar \; mass \; (g/mol)} \times PAR \; (g/l)$$
Molar masses and valencies for the key species listed above are given in the table below.
End of explanation
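# Editor's worked example (illustrative, not from the original notebook): using the molar
# masses and valencies tabulated above, a sulphate concentration of 1 mg/l corresponds to
#   1 * 2 * 1000. / 96  =~ 20.8 ueq/l
# which is the same conversion applied in section 4.4 below (the factor of 1000 appears
# because the measured values are in mg/l rather than g/l).
example_eso4 = 1.0 * 2 * 1000. / 96
print('1 mg/l SO4 = %.1f ueq/l' % example_eso4)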
# Get stations for a specified list of projects
proj_list = ['ICPW_TOCTRENDS_2015_CZ', 'ICPW_TOCTRENDS_2015_IT']
sql = ('SELECT station_id, station_code '
'FROM resa2.stations '
'WHERE station_id IN (SELECT UNIQUE(station_id) '
'FROM resa2.projects_stations '
'WHERE project_id IN (SELECT project_id '
'FROM resa2.projects '
'WHERE project_name IN %s))'
% str(tuple(proj_list)))
stn_df = pd.read_sql(sql, engine)
stn_df
Explanation: 4.1.2. Sea-salt corrected values
The Xs are sea-salt corrected values (also sometimes denoted with an asterisk e.g. Ca*). They are calculated by comparison to chloride concentrations, which are generally assumed to be conservative. The usual equation is:
$$PARX = PAR_{sample} - \left[ \left( \frac{PAR}{Cl} \right)_{ref} * Cl_{sample} \right]$$
where $PAR_{sample}$ and $Cl_{sample}$ are the concentrations measured in the lake or river and $\left( \frac{PAR}{Cl} \right)_{ref}$ is (ideally) the long-term average concentration in incoming rainwater. In some cases the reference values are simply taken from sea water concentrations (ignoring effects such as evaporative fractionation etc.).
I'm not sure what values to assume, but by rearranging the above equations and applying it to data extarcted from RESA2 I can back-calculate the reference values. For example, brief testing using data from Italy, Switzerland and the Czech Republic implies that RESA2 uses a standard reference value for sulphate of 0.103.
The reference ratios inferred from RESA2 for the key species listed are given in the table above.
NB: In doing this I've identified some additional erros in the database, where this correction has not beeen performed correctly. For some reason, ESO4X values have been set to zero, despite valid ESO4 and ECl measurements being available. The problem only affects a handful od sample, but could be enough to generate false trends. Return to this later?
NB2: Leah's experiences with the RECOVER project suggest that assuming a single reference concentration for all countires in the world is a bad idea. For example, I believe in e.g. the Czech Republic and Italy it is usual not to calculate sea-salt corrected concentrations at all, because most of the chloride input comes from industry rather than marine sources. Rainwater concentrations are also likely to vary dramatically from place to place, especially given the range of geographic and climatic conditions covered by this project. Check with Heleen.
4.1.3. ANC
Need to calculate this ANC, ALK, HPLUS and ENO3DIVENO3ESO4X.
4.2. Choose projects
The first step is to specify a list of RESA2 projects and get the stations associated with them.
End of explanation
# Specify parameters of interest
par_list = ['SO4', 'Cl', 'Ca', 'Mg', 'NO3-N', 'TOC', 'Al']
if 'DOC' in par_list:
print ('The database treats DOC and TOC similarly.\n'
'You should probably enter "TOC" instead')
# Check pars are valid
if len(par_list)==1:
sql = ("SELECT * FROM resa2.parameter_definitions "
"WHERE name = '%s'" % par_list[0])
else:
sql = ('SELECT * FROM resa2.parameter_definitions '
'WHERE name in %s' % str(tuple(par_list)))
par_df = pd.read_sql_query(sql, engine)
assert len(par_df) == len(par_list), 'One or more parameters not valid.'
# Get results for ALL pars for sites and period of interest
if len(stn_df)==1:
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
"WHERE sample_id IN (SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id = %s)"
% stn_df['station_id'].iloc[0])
else:
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
"WHERE sample_id IN (SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id IN %s)"
% str(tuple(stn_df['station_id'].values)))
wc_df = pd.read_sql_query(sql, engine)
# Get all sample dates for sites and period of interest
if len(stn_df)==1:
sql = ("SELECT water_sample_id, station_id, sample_date "
"FROM resa2.water_samples "
"WHERE station_id = %s " % stn_df['station_id'].iloc[0])
else:
sql = ("SELECT water_sample_id, station_id, sample_date "
"FROM resa2.water_samples "
"WHERE station_id IN %s " % str(tuple(stn_df['station_id'].values)))
samp_df = pd.read_sql_query(sql, engine)
# Join in par IDs based on method IDs
sql = ('SELECT * FROM resa2.wc_parameters_methods')
meth_par_df = pd.read_sql_query(sql, engine)
wc_df = pd.merge(wc_df, meth_par_df, how='left',
left_on='method_id', right_on='wc_method_id')
# Get just the parameters of interest
wc_df = wc_df.query('wc_parameter_id in %s' % str(tuple(par_df['parameter_id'].values)))
# Join in sample dates
wc_df = pd.merge(wc_df, samp_df, how='left',
left_on='sample_id', right_on='water_sample_id')
# Join in parameter units
sql = ('SELECT * FROM resa2.parameter_definitions')
all_par_df = pd.read_sql_query(sql, engine)
wc_df = pd.merge(wc_df, all_par_df, how='left',
left_on='wc_parameter_id', right_on='parameter_id')
# Join in station codes
wc_df = pd.merge(wc_df, stn_df, how='left',
left_on='station_id', right_on='station_id')
# Convert units
wc_df['value'] = wc_df['value'] * wc_df['conversion_factor']
# Extract columns of interest
wc_df = wc_df[['station_id', 'sample_date', 'name', 'value']]
# Unstack
wc_df.set_index(['station_id', 'sample_date', 'name'], inplace=True)
wc_df = wc_df.unstack(level='name')
wc_df.columns = wc_df.columns.droplevel()
wc_df.reset_index(inplace=True)
wc_df.columns.name = None
wc_df.head()
Explanation: 4.3. Extract time series
The next step is to get time series for the desired parameters for each of these stations.
End of explanation
# Extract year from date column
wc_df['year'] = wc_df['sample_date'].map(lambda x: x.year)
del wc_df['sample_date']
# Groupby station_id and year
grpd = wc_df.groupby(['station_id', 'year'])
# Calculate median
wc_df = grpd.agg('median')
wc_df.head()
Explanation: 4.4. Aggregate to annual
End of explanation
# 1. Convert to ueq/l
for par in ['SO4', 'Cl', 'Mg', 'Ca', 'NO3-N']:
val = chem_df.ix[par, 'valency']
mm = chem_df.ix[par, 'molar_mass']
if par == 'NO3-N':
wc_df['ENO3'] = wc_df[par] * val / mm
else:
wc_df['E%s' % par] = wc_df[par] * val * 1000. / mm
# 2. Apply sea-salt correction
for par in ['ESO4', 'EMg', 'ECa']:
ref = chem_df.ix[par[1:], 'resa2_ref_ratio']
wc_df['%sX' % par] = wc_df[par] - (ref*wc_df['ECl'])
# 3. Calculate combinations
# 3.1. ESO4 + ECl
wc_df['ESO4_ECl'] = wc_df['ESO4'] + wc_df['ECl']
# 3.2. ECa + EMg
wc_df['ECa_EMg'] = wc_df['ECa'] + wc_df['EMg']
# 3.3. ECaX + EMgX
wc_df['ECaX_EMgX'] = wc_df['ECaX'] + wc_df['EMgX']
# 3.4. ESO4 + ECl + ENO3
wc_df['ESO4_ECl_ENO3'] = wc_df['ESO4'] + wc_df['ECl'] + wc_df['ENO3']
# 4. Delete unnecessary columns and tidy
for col in ['SO4', 'Cl', 'Mg', 'Ca', 'NO3-N']:
del wc_df[col]
wc_df.reset_index(inplace=True)
wc_df.head()
Explanation: 4.4. Convert units and apply sea-salt correction
I haven't calculated all 14 parameters here, as I'm not sure exactly what they all are. The ones I'm reasonably certain of are included below.
End of explanation
def process_water_chem_df(stn_df, wc_df, st_yr=None, end_yr=None):
Calculate statistics for the stations, parameters and time
periods specified.
Args:
stn_df: Dataframe of station_ids
wc_df: Dataframe of water chemistry time series for stations
and parameters of interest
st_yr: First year to include in analysis. Pass None to start
at the beginning of the series
end_year: Last year to include in analysis. Pass None to start
at the beginning of the series
Returns:
Dataframe of statistics
# Container for output
df_list = []
# Loop over sites
for stn_id in stn_df['station_id']:
# Extract data for this site
df = wc_df.query('station_id == @stn_id')
# Modify col names
names = list(df.columns)
names[:2] = ['STATION_ID', 'YEAR']
df.columns = names
# Run analysis
df_list.append(wc_stats(df, st_yr=st_yr, end_yr=end_yr))
res_df = pd.concat(df_list, axis=0)
return res_df
res_df = process_water_chem_df(stn_df, wc_df)
res_df.head()
Explanation: 4.5. Calculate trends
End of explanation
# Get results for test sites from RESA2
sql = ('SELECT * FROM resa2.icpw_statistics3 '
'WHERE station_id IN %s'
% str(tuple(stn_df['station_id'].values)))
stat_df = pd.read_sql(sql, engine)
# Get just the cols to compare to my output
stat_df = stat_df[['station_id', 'parameter', 'period', 'nonmiss',
'average', 'median', 'stdev', 'test_stat',
'mk_stat', 'mkp', 'senslope']]
stat_df.head(14).sort_values(by='parameter')
Explanation: 5. Compare to previous trends analysis
This seems to be working OK so far, but I need to do some more testing to see that my results more-or-less agree with those calculated previously by Tore. As a start, let's compare the results above with those in the ICPW_STATISTICS3 table of RESA2, which is where (I think) Tore has saved his previous output.
End of explanation
# Re-run python analysis for the period 1990 - 2004
res_df = process_water_chem_df(stn_df, wc_df, st_yr=1990, end_yr=2004)
# Delete mk_std_dev as not relevant here
del res_df['mk_std_dev']
res_df.head(14).sort_values(by='par_id')
Explanation: For e.g. site 23499, I can now re-run my code for the period from 1990 to 2004 and compare my results to those above.
End of explanation |
10,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gather summary statistics from network outputs
This example script displays the use of emu to
estimate normal distribution parameters from the output of each convolutional layer in a given pretrained model.
1. Setup
Setup environment
Step1: Define backend (here are implemented
Step2: Load a caffe model
Step3: Load a torch model
Step4: Load available layers and their types
Step5: Select convolutional layers
Step6: 2. Forward images through network
Define path to a directory containing images and run them through the network
Step7: 3. Calculate summary statistics
Estimate mean and standard deviation per layer | Python Code:
import sys
import os
import numpy as np
from collections import OrderedDict
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Gather summary statistics from network outputs
This example script displays the use of emu to
estimate normal distribution parameters from the output of each convolutional layer in a given pretrained model.
1. Setup
Setup environment
End of explanation
backend = 'caffe'
Explanation: Define backend (here are implemented: caffe and torch)
End of explanation
if backend == 'caffe':
# make sure pycaffe is in your system path
caffe_root = os.getenv("HOME") + '/caffe/'
sys.path.insert(0, caffe_root + 'python')
# Load CaffeAdapter class
from emu.caffe import CaffeAdapter
# Define the path to .caffemodel, deploy.prototxt and mean.npy
# Here we use the pretrained CaffeNet from the Caffe model zoo
model_fp = caffe_root + 'models/bvlc_reference_caffenet/'
weights_fp = model_fp + 'bvlc_reference_caffenet.caffemodel'
prototxt_fp = model_fp + 'deploy.prototxt'
mean_fp = caffe_root + 'data/ilsvrc12/ilsvrc_2012_mean.npy'
# Alternatively, we could also define the mean as a numpy array:
# mean = np.array([104.00698793, 116.66876762, 122.67891434])
adapter = CaffeAdapter(prototxt_fp, weights_fp, mean_fp)
Explanation: Load a caffe model
End of explanation
if backend == 'torch':
# Load TorchAdapter class
from emu.torch import TorchAdapter
# Define the path to the model file where the file can be a torch7 or pytorch model.
# Torch7 models are supported but not well tested.
model_fp = 'models/resnet-18.t7'
# Alternatively, we can use pretrained torchvision models (see README).
# model_fp = 'resnet18'
# Define mean and std
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
# Alternatively, we could also pass a .t7 file path to the constructor
# Define the image input size to the model with order:
# Channels x Height x Width
input_size = (3, 224, 224)
adapter = TorchAdapter(model_fp, mean, std, input_size)
Explanation: Load a torch model
End of explanation
layer_types = adapter.get_layers()
for lname, ltype in layer_types.items():
print('%s:\t%s' % (lname, ltype))
Explanation: Load available layers and their types
End of explanation
conv_layers = [lname for lname, ltype in layer_types.items() if 'conv' in ltype.lower()]
Explanation: Select convolutional layers
End of explanation
images_dp = 'images/'
files = os.listdir(images_dp)
# Filter for jpeg extension
image_files = [os.path.join(images_dp, f) for f in files if f.endswith('.jpg')]
# Run in batched fashion
batch_size = 32
# As we run in batch mode, we have to store the intermediate layer outputs
layer_outputs = OrderedDict()
for layer in conv_layers:
layer_outputs[layer] = []
for i in range(0, len(image_files), batch_size):
image_list = image_files[i:(i+batch_size)]
# Forward batch through network
# The adapter takes care of loading images and transforming them to the right format.
# Alternatively, we could load and transform the images manually and pass a list of numpy arrays.
batch = adapter.preprocess(image_list)
adapter.forward(batch)
# Save a copy of the outputs of the convolutional layers.
for layer in conv_layers:
output = adapter.get_layeroutput(layer).copy()
layer_outputs[layer].append(output)
# Concatenate batch arrays to single outputs
for name, layer_output in layer_outputs.items():
layer_outputs[name] = np.concatenate(layer_output, axis=0)
Explanation: 2. Forward images through network
Define path to a directory containing images and run them through the network
End of explanation
means = [output.mean() for output in layer_outputs.values()]
stds = [output.std() for output in layer_outputs.values()]
plt.plot(means)
plt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)
plt.title('Convolution output mean over network depth');
plt.xlabel('Layer');
plt.plot(stds)
plt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)
plt.title('Convolution output std over network depth');
plt.xlabel('Layer');
Explanation: 3. Calculate summary statistics
Estimate mean and standard deviation per layer
End of explanation |
10,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Download the sample training data.
In this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes
Step4: The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \t character as its delimiter instead of a comma , in the CSV format.
Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.
| sentence | label | | | |
|-------------------------------------------------------------------------------------------|-------|---|---|---|
| hide new secretions from the parental units | 0 | | | |
| contains no wit , only labored gags | 0 | | | |
| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |
| remains utterly satisfied to remain the same throughout | 0 | | | |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |
Next, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to a more human-readable ones (negative and positive) and use them for model training.
Step5: Quickstart
There are five steps to train a text classification model
Step6: Model Maker also supports other model architectures such as BERT. If you are interested to learn about other architecture, see the Choose a model architecture for Text Classifier section below.
Step 2. Load the training and test data, then preprocess them according to a specific model_spec.
Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.
Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.
Step7: Step 3. Train the TensorFlow model with the training data.
The average word embedding model use batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
Step8: Step 4. Evaluate the model with the test data.
After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
Step9: Step 5. Export as a TensorFlow Lite model.
Let's export the text classification that we have trained in the TensorFlow Lite format. We will specify which folder to export the model.
By default, the float TFLite model is exported for the average word embedding model architecture.
Step10: You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.
See the TFLite Text Classification sample app for more details on how the model is used in a working app.
Note 1
Step11: Load training data
You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https
Step12: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.
Train a TensorFlow Model
Train a text classification model using the training data.
Note
Step13: Examine the detailed model structure.
Step14: Evaluate the model
Evaluate the model that we have just trained using the test data and measure the loss and accuracy value.
Step15: Export as a TensorFlow Lite model
Convert the trained model to TensorFlow Lite model format with metadata so that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.
In many on-device ML application, the model size is an important factor. Therefore, it is recommended that you apply quantize the model to make it smaller and potentially run faster.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
Step16: The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate the text classification trained with the average word vector model architecture.
The export formats can be one or a list of the following
Step17: You can evaluate the TFLite model with evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to TFLite format and apply quantization can affect its accuracy so it is recommended to evaluate the TFLite model accuracy before deployment.
Step18: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function comprises of the following steps
Step19: Customize the average word embedding model hyperparameters
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
Step20: Get the preprocessed data.
Step21: Train the new model.
Step22: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs
Step23: Evaluate the newly retrained model with 20 training epochs.
Step24: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install -q tflite-model-maker
Explanation: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews.
Prerequisites
Install the required packages
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
Explanation: Import the required packages.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Download the sample training data.
In this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.
End of explanation
import pandas as pd
def replace_label(original_file, new_file):
# Load the original file to pandas. We need to specify the separator as
# '\t' as the training data is stored in TSV format
df = pd.read_csv(original_file, sep='\t')
# Define how we want to change the label name
label_map = {0: 'negative', 1: 'positive'}
# Excute the label change
df.replace({'label': label_map}, inplace=True)
# Write the updated dataset to a new file
df.to_csv(new_file)
# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(os.path.join(data_dir, 'train.tsv')), 'train.csv')
replace_label(os.path.join(os.path.join(data_dir, 'dev.tsv')), 'dev.csv')
Explanation: The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \t character as its delimiter instead of a comma , in the CSV format.
Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.
| sentence | label | | | |
|-------------------------------------------------------------------------------------------|-------|---|---|---|
| hide new secretions from the parental units | 0 | | | |
| contains no wit , only labored gags | 0 | | | |
| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |
| remains utterly satisfied to remain the same throughout | 0 | | | |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |
Next, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to a more human-readable ones (negative and positive) and use them for model training.
End of explanation
spec = model_spec.get('average_word_vec')
Explanation: Quickstart
There are five steps to train a text classification model:
Step 1. Choose a text classification model architecture.
Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.
End of explanation
train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=True)
test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=False)
Explanation: Model Maker also supports other model architectures such as BERT. If you are interested to learn about other architecture, see the Choose a model architecture for Text Classifier section below.
Step 2. Load the training and test data, then preprocess them according to a specific model_spec.
Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.
Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.
End of explanation
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Explanation: Step 3. Train the TensorFlow model with the training data.
The average word embedding model use batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Step 4. Evaluate the model with the test data.
After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
End of explanation
model.export(export_dir='average_word_vec')
Explanation: Step 5. Export as a TensorFlow Lite model.
Let's export the text classification that we have trained in the TensorFlow Lite format. We will specify which folder to export the model.
By default, the float TFLite model is exported for the average word embedding model architecture.
End of explanation
mb_spec = model_spec.get('mobilebert_classifier')
Explanation: You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.
See the TFLite Text Classification sample app for more details on how the model is used in a working app.
Note 1: Android Studio Model Binding does not support text classification yet so please use the TensorFlow Lite Task Library.
Note 2: There is a model.json file in the same folder with the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the model.json file as it is only for informational purpose and its content is already inside the TFLite file.
Note 3: If you train a text classification model using MobileBERT or BERT-Base architecture, you will need to use BertNLClassifier API instead to integrate the trained model into a mobile app.
The following sections walk through the example step by step to show more details.
Choose a model architecture for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.
| Supported Model | Name of model_spec | Model Description | Model size |
|--------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |
| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization <br/> 100MB w/o quantization |
| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |
In the quick start, we have used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.
End of explanation
train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
is_training=True)
test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
is_training=False)
Explanation: Load training data
You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.
To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.
Please note that as we have changed the model architecture, we will need to reload the training and test dataset to apply the new preprocessing logic.
End of explanation
model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to include.
Train a TensorFlow Model
Train a text classification model using the training data.
Note: As MobileBERT is a complex model, each training epoch takes about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.
End of explanation
model.summary()
Explanation: Examine the detailed model structure.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Evaluate the model
Evaluate the model that we have just trained using the test data and measure the loss and accuracy value.
End of explanation
model.export(export_dir='mobilebert/')
Explanation: Export as a TensorFlow Lite model
Convert the trained model to the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially faster to run.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
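If a different scheme is needed, a hedged sketch (assuming the QuantizationConfig helper shipped with tflite_model_maker; the filename is illustrative) would be:
from tflite_model_maker.config import QuantizationConfig

# Sketch: export with float16 quantization instead of the default dynamic range quantization.
config = QuantizationConfig.for_float16()
model.export(export_dir='mobilebert/', tflite_filename='model_fp16.tflite', quantization_config=config)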
End of explanation
model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])
Explanation: The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in the TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate the text classification model trained with the average word vector model architecture.
The export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.LABEL
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, exporting only the label file and vocab file as follows:
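Similarly, a hedged sketch of exporting the SavedModel together with the label file (format names taken from the list above):
# Sketch: export the TensorFlow SavedModel and the label file for closer inspection.
model.export(export_dir='mobilebert/', export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])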
End of explanation
accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
Explanation: You can evaluate the TFLite model with the evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to the TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the TFLite model's accuracy before deployment.
End of explanation
new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
Explanation: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function performs the following steps:
Creates the model for the text classifier according to model_spec.
Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
This section covers advanced usage topics like adjusting the model and the training hyperparameters.
Customize the MobileBERT model hyperparameters
The model parameters you can adjust are:
seq_len: Length of the sequence to feed into the model.
initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean that specifies whether the pre-trained layer is trainable.
The training pipeline parameters you can adjust are:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The dropout rate.
learning_rate: The initial learning rate for the Adam optimizer.
tpu: TPU address to connect to.
For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.
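A hedged sketch of how the longer-sequence spec would then be used (the data must be reloaded so the new preprocessing applies; long_train_data is an illustrative name):
# Sketch: reload the training data with the seq_len=256 spec and retrain.
long_train_data = DataLoader.from_csv(
    filename='train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=new_model_spec,
    is_training=True)
model = text_classifier.create(long_train_data, model_spec=new_model_spec, epochs=3)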
End of explanation
new_model_spec = AverageWordVecSpec(wordvec_dim=32)
Explanation: Customize the average word embedding model hyperparameters
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
End of explanation
new_train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=True)
Explanation: Get the preprocessed data.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec)
Explanation: Train the new model.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
Explanation: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs: more epochs could achieve better accuracy, but may lead to overfitting.
batch_size: the number of samples to use in one training step.
For example, you can train with more epochs.
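A hedged sketch that also sets the batch size (batch_size is a parameter of text_classifier.create; the value 32 is just an illustration):
# Sketch: more epochs with an explicit batch size.
model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20, batch_size=32)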
End of explanation
new_test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=False)
loss, accuracy = model.evaluate(new_test_data)
Explanation: Evaluate the newly retrained model with 20 training epochs.
End of explanation
spec = model_spec.get('bert_classifier')
Explanation: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier.
End of explanation |
10,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. We will
Step1: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint
Step9: Quiz Question. How many reviews contain the word perfect?
Step10: Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
Step11: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step12: Let us convert the data into NumPy arrays.
Step13: Now, let us see what the sentiment column looks like
Step14: Estimating conditional probability with link function
Recall from lecture that the link function is given by
Step15: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$
Step16: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture
Step17: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation)
Step18: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
Step19: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent
Step20: Now, let us run the logistic regression solver.
Step21: Predicting sentiments
Recall that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula
Step22: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step23: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows
Step24: Which words contribute most to positive & negative sentiments?
Recall that we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following
Step25: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
Step26: Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment. | Python Code:
# Run some setup code for this notebook.
import sys
import os
sys.path.append('..')
import graphlab
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. We will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
End of explanation
products = graphlab.SFrame('datasets/')
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
products['sentiment']
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
End of explanation
products['perfect']
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
products['contains_perfect'] = products['perfect'].apply(lambda i: 1 if i>=1 else 0)
print(products['contains_perfect'])
print(products['perfect'])
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
print(products['contains_perfect'].sum())
Explanation: Quiz Question. How many reviews contain the word perfect?
End of explanation
import numpy as np
from algorithms.sframe_get_numpy_data import get_numpy_data
Explanation: Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
End of explanation
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
Explanation: Let us convert the data into NumPy arrays.
End of explanation
sentiment
Explanation: Now, let us see what the sentiment column looks like:
End of explanation
'''
Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
The estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
score = np.dot(feature_matrix, coefficients)
predictions = 1.0 / (1 + np.exp(-score))
return predictions
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
def feature_derivative(errors, feature):
derivative = np.dot(errors, feature)
return derivative
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
End of explanation
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j].
derivative = feature_derivative(errors, feature_matrix[:,j])
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
Explanation: Now, let us run the logistic regression solver.
End of explanation
scores = np.dot(feature_matrix, coefficients)
Explanation: Predicting sentiments
Recall that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
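A minimal NumPy sketch of that Step 2 thresholding, using the +1/-1 labels defined by the formula above:
# Sketch: apply the decision rule w^T h(x) > 0 -> +1, otherwise -1.
class_predictions = np.where(scores > 0, +1, -1)
print('Number of positive predictions: {}'.format((class_predictions == +1).sum()))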
End of explanation
class_probabilities = 1.0 / (1 + np.exp(-scores) )
print(class_probabilities)
print( 'Number of positive sentiment predicted is {}'.format( (class_probabilities >= 0.5).sum()) )
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
End of explanation
class_predictions = np.where(class_probabilities >= 0.5, +1, -1)  # threshold at 0.5 and map to the +1/-1 labels used by sentiment
print(class_predictions)
print(len(sentiment))
print(len(class_predictions))
num_mistakes = (sentiment != class_predictions).sum()  # number of misclassified reviews
accuracy = 1.0 - (1.0 * num_mistakes / len(sentiment))
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
Explanation: Which words contribute most to positive & negative sentiments?
Recall that we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
word_coefficient_tuples[:10]
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
word_coefficient_tuples[-10:]
Explanation: Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation |
10,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1]_ and
Step1: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
Step2: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of
Step3: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
Step4: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition
Step5: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
Step6: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
Step7: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
Step8: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
Step9: We also lowpass filter the data at 100 Hz to remove the hf components.
Step10: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
Step11: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
Step12: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
Step13: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
Step14: We only use first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer same as in the original
epochs collection. Investigation of the event timings reveals that first
epoch from the second run corresponds to index 182.
Step15: The averages for each conditions are computed.
Step16: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. Normally this would be done to
raw data (with
Step17: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
Step18: Show activations as topography figures.
Step19: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
Step20: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
Step21: The transformation is read from a file. More information about coregistering
the data, see ch_interactive_analysis or
Step22: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information
Step23: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
Step24: Deviant condition.
Step25: Difference. | Python Code:
# Authors: Mainak Jas <[email protected]>
# Eric Larson <[email protected]>
# Jaakko Leppakangas <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.filter import notch_filter, low_pass_filter
print(__doc__)
Explanation: Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>_.
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
use_precomputed = True
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
End of explanation
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
Explanation: In the memory-saving mode we use preload=False and rely on the memory-efficient IO, which loads the data on demand. However, filtering and some other functions require the data to be preloaded into memory.
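For example, a minimal sketch of forcing the data into memory when a later step needs it:
# Load the raw data into memory explicitly (only needed because preload=False was used above).
raw.load_data()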
End of explanation
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
ecg=True)
Explanation: The data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = map(str, annotations_df['label'].values)
annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions
Explanation: For noise reduction, a set of bad segments has been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
raw.plot(block=True)
Explanation: Visually inspect the effects of the projections. Click on the 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
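A hedged sketch of overlaying detected EOG events on the browser (using MNE's find_eog_events helper):
# Sketch: detect EOG events and show them as event lines in the raw data browser.
eog_events = mne.preprocessing.find_eog_events(raw)
raw.plot(events=eog_events)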
End of explanation
if not use_precomputed:
meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
raw.plot_psd(picks=meg_picks)
notches = np.arange(60, 181, 60)
raw.notch_filter(notches)
raw.plot_psd(picks=meg_picks)
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
End of explanation
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double')
Explanation: We also low-pass filter the data at 100 Hz to remove the high-frequency components.
End of explanation
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
Explanation: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
End of explanation
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
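A hedged sketch of that variant (same arguments as above, with annotation-based rejection disabled; epochs_all is an illustrative name):
# Sketch: build epochs without dropping those that overlap annotated bad segments.
epochs_all = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
                        baseline=(None, 0), reject=reject, preload=False,
                        proj=True, reject_by_annotation=False)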
End of explanation
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the epochs are no longer the same as in the original epochs collection. Investigation of the event timings reveals that the first epoch from the second run corresponds to index 182.
End of explanation
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
Explanation: The averages for each condition are computed.
End of explanation
if use_precomputed:
sfreq = evoked_std.info['sfreq']
nchan = evoked_std.info['nchan']
notches = [60, 120, 180]
for ch_idx in range(nchan):
evoked_std.data[ch_idx] = notch_filter(evoked_std.data[ch_idx], sfreq,
notches, verbose='ERROR')
evoked_dev.data[ch_idx] = notch_filter(evoked_dev.data[ch_idx], sfreq,
notches, verbose='ERROR')
evoked_std.data[ch_idx] = low_pass_filter(evoked_std.data[ch_idx],
sfreq, 100, verbose='ERROR')
evoked_dev.data[ch_idx] = low_pass_filter(evoked_dev.data[ch_idx],
sfreq, 100, verbose='ERROR')
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. Normally this would be done to
raw data (with :func:mne.io.Raw.filter), but to reduce memory consumption
of this tutorial, we do it at evoked stage.
End of explanation
evoked_std.plot(window_title='Standard', gfp=True)
evoked_dev.plot(window_title='Deviant', gfp=True)
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard')
evoked_dev.plot_topomap(times=times, title='Deviant')
Explanation: Show activations as topography figures.
End of explanation
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True)
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
Explanation: The transformation is read from a file. For more information about coregistering the data, see ch_interactive_analysis or :func:mne.gui.coregistration.
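If the transformation had to be created from scratch, a hedged sketch of launching the coregistration GUI would be:
# Sketch: open the interactive coregistration GUI to produce a -trans.fif file.
mne.gui.coregistration(subject=subject, subjects_dir=subjects_dir)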
End of explanation
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
create_bem_model, :func:mne.bem.make_watershed_bem.
End of explanation
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
Explanation: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
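For instance, a minimal sketch of the interactive variant (applied to the source estimate computed in the accompanying cell):
# Sketch: open the source estimate plot with the interactive time viewer enabled.
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
                          surface='inflated', hemi='lh',
                          initial_time=0.1, time_unit='s', time_viewer=True)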
Standard condition.
End of explanation
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
Explanation: Deviant condition.
End of explanation
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
Explanation: Difference.
End of explanation |
10,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation of a Noddy history and analysis of its voxel topology
Example of how the module can be used to run Noddy simulations and analyse the output.
Step1: Compute the model
The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module
Step2: For convenience, the model computations are wrapped into a Python function in pynoddy
Step3: Note | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
# Basic settings
import sys, os
import subprocess
# Now import pynoddy
import pynoddy
%matplotlib inline
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
Explanation: Simulation of a Noddy history and analysis of its voxel topology
Example of how the module can be used to run Noddy simulations and analyse the output.
End of explanation
# Change to sandbox directory to store results
os.chdir(os.path.join(repo_path, 'sandbox'))
# Path to exmaple directory in this repository
example_directory = os.path.join(repo_path,'examples')
# Compute noddy model for history file
history_file = 'strike_slip.his'
history = os.path.join(example_directory, history_file)
nfiles = 1
files = '_'+str(nfiles).zfill(4)
print "files", files
root_name = 'noddy_out'
output_name = root_name + files
print root_name
print output_name
# call Noddy
# NOTE: Make sure that the noddy executable is accessible in the system!!
sys
print subprocess.Popen(['noddy.exe', history, output_name, 'TOPOLOGY'],
shell=False, stderr=subprocess.PIPE,
stdout=subprocess.PIPE).stdout.read()
#
sys
print subprocess.Popen(['topology.exe', root_name, files],
shell=False, stderr=subprocess.PIPE,
stdout=subprocess.PIPE).stdout.read()
Explanation: Compute the model
The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module:
End of explanation
pynoddy.compute_model(history, output_name)
pynoddy.compute_topology(root_name, files)
Explanation: For convenience, the model computations are wrapped into a Python function in pynoddy:
End of explanation
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
import numpy as np
N1 = pynoddy.NoddyOutput(output_name)
AM= pynoddy.NoddyTopology(output_name)
am_name=root_name +'_uam.bin'
print am_name
print AM.maxlitho
image = np.empty((int(AM.maxlitho),int(AM.maxlitho)), np.uint8)
image.data[:] = open(am_name).read()
cmap=plt.get_cmap('Paired')
cmap.set_under('white') # Color for values less than vmin
plt.imshow(image, interpolation="nearest", vmin=1, cmap=cmap)
plt.show()
Explanation: Note: To date, the Noddy call from Python simply invokes Noddy through the subprocess module. In a future implementation, this call could be substituted with a full wrapper for the C functions written in Python. Therefore, using the member function compute_model is not only easier, but also the more "future-proof" way to compute the Noddy model.
Loading Topology output files
Here we load the binary adjacency matrix for one topology calculation and display it as an image
End of explanation |
10,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Scaling up ML using Cloud ML Engine </h1>
In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud MLE. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates how to package up a TensorFlow model to run it within Cloud ML.
Later in the course, we will look at ways to make a more effective machine learning model.
<h2> Environment variables for project and bucket </h2>
Note that
Step1: Allow the Cloud ML Engine service account to read/write to the bucket containing training data.
Step2: <h2> Packaging up the code </h2>
Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>).
Step3: <h2> Find absolute paths to your data </h2>
Note the absolute paths below. /content is mapped in Datalab to where the home icon takes you
Step4: <h2> Running the Python module from the command-line </h2>
Step5: <h2> Running locally using gcloud </h2>
Step6: When I ran it (due to random seeds, your results will be different), the average_loss (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13.
If the above step (to stop TensorBoard) appears stalled, just move on to the next step. You don't need to wait for it to return.
Step7: <h2> Submit training job using gcloud </h2>
First copy the training data to the cloud. Then, launch a training job.
After you submit the job, go to the cloud console (http
Step8: Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
<h2> Deploy model </h2>
Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying model will take up to <b>5 minutes</b>.
Step9: <h2> Prediction </h2>
Step10: <h2> Train on larger dataset </h2>
I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.
Go to http | Python Code:
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.4' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: <h1> Scaling up ML using Cloud ML Engine </h1>
In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud MLE. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates how to package up a TensorFlow model to run it within Cloud ML.
Later in the course, we will look at ways to make a more effective machine learning model.
<h2> Environment variables for project and bucket </h2>
Note that:
<ol>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print(response['serviceAccount'])")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET
Explanation: Allow the Cloud ML Engine service account to read/write to the bucket containing training data.
End of explanation
!find taxifare
!cat taxifare/trainer/model.py
Explanation: <h2> Packaging up the code </h2>
Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>).
End of explanation
%%bash
echo $PWD
rm -rf $PWD/taxi_trained
cp $PWD/../tensorflow/taxi-train.csv .
cp $PWD/../tensorflow/taxi-valid.csv .
head -1 $PWD/taxi-train.csv
head -1 $PWD/taxi-valid.csv
Explanation: <h2> Find absolute paths to your data </h2>
Note the absolute paths below. /content is mapped in Datalab to where the home icon takes you
End of explanation
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths="${PWD}/taxi-train*" \
--eval_data_paths=${PWD}/taxi-valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=1000 --job-dir=./tmp
%%bash
ls $PWD/taxi_trained/export/exporter/
%%writefile ./test.json
{"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
## local predict doesn't work with Python 3 yet
#%bash
#model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
#gcloud ai-platform local predict \
# --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
# --json-instances=./test.json
Explanation: <h2> Running the Python module from the command-line </h2>
End of explanation
%%bash
rm -rf taxifare.tar.gz taxi_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
-- \
--train_data_paths=${PWD}/taxi-train.csv \
--eval_data_paths=${PWD}/taxi-valid.csv \
--train_steps=1000 \
--output_dir=${PWD}/taxi_trained
Explanation: <h2> Running locally using gcloud </h2>
End of explanation
!ls $PWD/taxi_trained
Explanation: When I ran it (due to random seeds, your results will be different), the average_loss (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13.
If the above step (to stop TensorBoard) appears stalled, just move on to the next step. You don't need to wait for it to return.
End of explanation
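As a quick sanity check of the numbers quoted above: the RMSE is just the square root of the reported average_loss (MSE). Your values will differ because of random seeds.
import math
average_loss = 187.0  # example value from the run described above; yours will differ
print('RMSE is about {0:.1f}'.format(math.sqrt(average_loss)))  # roughly 13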
%%bash
echo $BUCKET
gsutil -m rm -rf gs://${BUCKET}/taxifare/smallinput/
gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/taxifare/smallinput/
%%bash
OUTDIR=gs://${BUCKET}/taxifare/smallinput/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/smallinput/taxi-valid*" \
--output_dir=$OUTDIR \
--train_steps=10000
Explanation: <h2> Submit training job using gcloud </h2>
First copy the training data to the cloud. Then, launch a training job.
After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress.
<b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job.
End of explanation
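If you would rather poll the job from the notebook than from the Cloud Console, a sketch along the following lines works; it assumes the gcloud CLI is authenticated, and wait_for_job is just an illustrative helper name.
import subprocess, time
def wait_for_job(jobname, poll_seconds=60):
    # poll the job state until it is no longer queued/preparing/running
    while True:
        state = subprocess.check_output(
            ['gcloud', 'ai-platform', 'jobs', 'describe', jobname,
             '--format', 'value(state)']).strip()
        print(state)
        if state not in (b'QUEUED', b'PREPARING', b'RUNNING'):
            return state
        time.sleep(poll_seconds)
# wait_for_job('lab3a_YYMMDD_HHMMSS')  # use the JOBNAME echoed by the cell above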
%%bash
gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter
%%bash
MODEL_NAME="taxifare"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
Explanation: Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
<h2> Deploy model </h2>
Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying the model will take up to <b>5 minutes</b>.
End of explanation
%%bash
gcloud ai-platform predict --model=taxifare --version=v1 --json-instances=./test.json
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
request_data = {'instances':
[
{
'pickuplon': -73.885262,
'pickuplat': 40.773008,
'dropofflon': -73.987232,
'dropofflat': 40.732403,
'passengers': 2,
}
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'taxifare', 'v1')
response = api.projects().predict(body=request_data, name=parent).execute()
print("response={0}".format(response))
Explanation: <h2> Prediction </h2>
End of explanation
%%bash
XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line.
OUTDIR=gs://${BUCKET}/taxifare/ch3/taxi_trained
JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)
CRS_BUCKET=cloud-training-demos # use the already exported data
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/train.csv" \
--eval_data_paths="gs://${CRS_BUCKET}/taxifare/ch3/valid.csv" \
--output_dir=$OUTDIR \
--train_steps=100000
Explanation: <h2> Train on larger dataset </h2>
I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.
Go to http://bigquery.cloud.google.com/ and type the query:
<pre>
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'nokeyindata' AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(HASH(pickup_datetime)) % 1000 == 1
</pre>
Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.):
<ol>
<li> Click on the "Save As Table" button and note down the name of the dataset and table.
<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name.
<li> Click on "Export Table"
<li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu)
<li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv)
<li> Download the two files, remove the header line and upload it back to GCS.
</ol>
<p/>
<p/>
<h2> Run Cloud training on 1-million row dataset </h2>
This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help.
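The query above keeps roughly 1 in 1000 trips by hashing the pickup timestamp and taking a modulus, which makes the train/valid split repeatable. A minimal Python sketch of the same idea (note that Python's hashlib is not the hash function BigQuery uses, so the selected rows would differ):
import hashlib
def in_split(pickup_datetime, modulus=1000, remainder=1):
    # hash a field that is not used as a feature so that the split is stable across runs
    h = int(hashlib.md5(pickup_datetime.encode('utf-8')).hexdigest(), 16)
    return h % modulus == remainder
print(in_split('2014-05-17 15:10:00'))  # True for roughly 0.1% of timestamps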
End of explanation |
10,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discrete probability distributions
Rigorous definitions of discrete probability laws and discrete random variables are provided in part 00. By reading this part or from your own education, you should know by now what the probability mass function and the cumulative distribution function are for a discrete probability law.
Bernoulli law
Uniform law
Let $N \in \mathbb{N}$ and let $\mathcal{A} = \{a_1, \dots, a_N\}$ be a set of atoms; the uniform discrete law is written as
Step1: The cumulative distribution function of a uniform discrete distribution is defined $\forall k \in [a,b]$ as | Python Code:
N = 6
xk = np.arange(1,N+1)
fig, ax = plt.subplots(1, 1)
ax.plot(xk, sps.randint.pmf(xk, xk[0], 1+xk[-1]), 'ro', ms=12, mec='r')
ax.vlines(xk, 0, sps.randint.pmf(xk, xk[0], 1+xk[-1]), colors='r', lw=4)
plt.show()
Explanation: Discrete probability distributions
Rigorous definitions of discrete probability laws and discrete random variables are provided in part 00. By reading this part or from your own education, you should know by now what the probability mass function and the cumulative distribution function are for a discrete probability law.
Bernoulli law
Uniform law
Let $N \in \mathbb{N}$ and let $\mathcal{A} = \{a_1, \dots, a_N\}$ be a set of atoms; the uniform discrete law is written as:
\begin{equation}
p(\{a_i\}) = \frac{1}{N}
\end{equation}
For example, with $N=6$, let's take the set of integers $\{1,2,3,4,5,6\}$:
End of explanation
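A quick numerical check (reusing N, xk, np and sps from above) that the probability mass function really is uniform with mass $1/N$ and sums to one:
pmf = sps.randint.pmf(xk, xk[0], 1+xk[-1])
print(pmf)                       # every atom has probability 1/N
print(np.allclose(pmf, 1.0/N))   # True
print(np.isclose(pmf.sum(), 1))  # the masses sum to one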
xk = np.arange(0,N+1)
fig, ax = plt.subplots(1, 1)
for i in np.arange(0,N+1):
y=sps.randint.cdf(i, xk[1], 1+xk[-1])
l = mlines.Line2D([i,i+1], [y,y], color='r')
ax.add_line(l)
ax.plot(xk[1:], sps.randint.cdf(xk[1:], xk[1], 1+xk[-1]), 'ro', ms=12, mec='r')
ax.plot(xk[:-1]+1, sps.randint.cdf(xk[:-1], xk[1], 1+xk[-1]), 'ro', ms=13, mec='r', mfc='r') # Ugly hack to get white circles with red edges
ax.plot(xk[:-1]+1, sps.randint.cdf(xk[:-1], xk[1], 1+xk[-1]), 'ro', ms=12, mec='r', mfc='w') #
ax.set()
plt.show()
Explanation: The cumulative distribution function of a uniform discrete distribution is defined $\forall k \in [a,b]$ as :
$$F(k;a,b) = \frac{\lfloor k \rfloor-a+1}{b-a+1}$$
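A minimal numerical check of this closed form against scipy (reusing N, np and sps from above, with $a=1$ and $b=N$):
ks = np.linspace(1, N, 50)
closed_form = (np.floor(ks) - 1 + 1) / (N - 1 + 1)
print(np.allclose(closed_form, sps.randint.cdf(ks, 1, N+1)))  # True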
In our example it looks like this:
End of explanation |
10,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
bqplot
This notebook is meant to guide you through the first stages of using the bqplot visualization library. bqplot is a Grammar of Graphics based interactive visualization library for the Jupyter notebook where every single component of a plot is an interactive iPython widget. What this means is that even after a plot is drawn, you can change almost any aspect of it. This makes the creation of advanced Graphical User Interfaces attainable through just a few simple lines of Python code.
Step1: Your First Plot
Let's start by creating a simple Line chart. bqplot has two different APIs, the first one is a matplotlib inspired simple API called pyplot. So let's import that.
Step2: Let's plot y_data against x_data, and then show the plot.
Step3: Use the buttons above to Pan (or Zoom), Reset or save the Figure.
Using bqplot's interactive elements
Now, let's try creating a new plot. First, we create a brand new Figure. The Figure is the final element of any plot that is eventually displayed. You can think of it as a Canvas on which we put all of our other plots.
Step4: Since both the x and the y attributes of a bqplot chart are interactive widgets, we can change them. So, let's
change the y attribute of the chart.
Step5: Re-run the above cell a few times, the same plot should update every time. But, that's not the only thing that can be changed once a plot has been rendered. Let's try changing some of the other attributes.
Step6: It's important to remember that an interactive widget means that the JavaScript and the Python communicate. So, the plot can be changed through a single line of python code, or a piece of python code can be triggered by a change in the plot. Let's go through a simple example. Say we have a function foo
Step7: We can call foo every time any attribute of our scatter is changed. Say, the y values
Step8: To allow the points in the Scatter to be moved interactively, we set the enable_move attribute to True
Step9: Go ahead, head over to the chart and move any point in some way. This move (which happens on the JavaScript side should trigger our Python function foo.
Understanding how bqplot uses the Grammar of Graphics paradigm
bqplot has two different APIs. One is the matplotlib inspired pyplot which we used above (you can think of it as similar to qplot in ggplot2). The other one, the verbose API, is meant to expose every element of a plot individually, so that their attributes can be controlled in an atomic way. In order to truly use bqplot to build complex and feature-rich GUIs, it pays to understand the underlying theory that is used to create a plot.
To understand this verbose API, it helps to revisit what exactly the components of a plot are. The first thing we need is a Scale.
A Scale is a mapping from (a function that converts) data coordinates to figure coordinates. What this means is that a Scale takes a set of values in any arbitrary unit (say number of people, or $, or litres) and converts it to pixels (or colors for a ColorScale).
Step10: Now, we need to create the actual Mark that will visually represent the data. Let's pick a Scatter chart to start.
Step11: Most of the time, the actual Figure coordinates don't really mean anything to us. So, what we need is the visual representation of our Scale, which is called an Axis.
Step12: And finally, we put it all together on a canvas, which is called a Figure.
Step13: The IPython display machinery displays the last returned value of a cell. If you wish to explicitly display a widget, you can call IPython.display.display.
Step14: Now, that the plot has been generated, we can control every single attribute of it. Let's say we wanted to color the chart based on some other data.
Step15: Now, we define a ColorScale to map the color_data to actual colors
Step16: The grammar of graphics framework allows us to overlay multiple visualizations on a single Figure by having the visualization share the Scales. So, for example, if we had a Bar chart that we would like to plot alongside the Scatter plot, we just pass it the same Scales.
Step17: Finally, we add the new Mark to the Figure to update the plot! | Python Code:
# Let's begin by importing some libraries we'll need
import numpy as np
# And creating some random data
size = 100
np.random.seed(0)
x_data = np.arange(size)
y_data = np.cumsum(np.random.randn(size) * 100.0)
Explanation: bqplot
This notebook is meant to guide you through the first stages of using the bqplot visualization library. bqplot is a Grammar of Graphics based interactive visualization library for the Jupyter notebook where every single component of a plot is an interactive iPython widget. What this means is that even after a plot is drawn, you can change almost any aspect of it. This makes the creation of advanced Graphical User Interfaces attainable through just a few simple lines of Python code.
End of explanation
from bqplot import pyplot as plt
Explanation: Your First Plot
Let's start by creating a simple Line chart. bqplot has two different APIs; the first one is a matplotlib-inspired simple API called pyplot. So let's import that.
End of explanation
plt.figure(title="My First Plot")
plt.plot(x_data, y_data)
plt.show()
Explanation: Let's plot y_data against x_data, and then show the plot.
End of explanation
# Creating a new Figure and setting its title
plt.figure(title="My Second Chart")
# Let's assign the scatter plot to a variable
scatter_plot = plt.scatter(x_data, y_data)
# Let's show the plot
plt.show()
Explanation: Use the buttons above to Pan (or Zoom), Reset or save the Figure.
Using bqplot's interactive elements
Now, let's try creating a new plot. First, we create a brand new Figure. The Figure is the final element of any plot that is eventually displayed. You can think of it as a Canvas on which we put all of our other plots.
End of explanation
scatter_plot.y = np.cumsum(np.random.randn(size) * 100.0)
Explanation: Since both the x and the y attributes of a bqplot chart are interactive widgets, we can change them. So, let's
change the y attribute of the chart.
End of explanation
# Say, the color
scatter_plot.colors = ["Red"]
# Or, the marker style
scatter_plot.marker = "diamond"
Explanation: Re-run the above cell a few times, the same plot should update every time. But, that's not the only thing that can be changed once a plot has been rendered. Let's try changing some of the other attributes.
End of explanation
def foo(change):
print(
"This is a trait change. Foo was called by the fact that we moved the Scatter"
)
print("In fact, the Scatter plot sent us all the new data: ")
print(
"To access the data, try modifying the function and printing the data variable"
)
Explanation: It's important to remember that an interactive widget means that the JavaScript and the Python communicate. So, the plot can be changed through a single line of python code, or a piece of python code can be triggered by a change in the plot. Let's go through a simple example. Say we have a function foo:
End of explanation
# First, we hook up our function `foo` to the y attribute (or Trait) of the scatter plot
scatter_plot.observe(foo, "y")
Explanation: We can call foo every time any attribute of our scatter is changed. Say, the y values:
End of explanation
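The callback receives a dict-like change object from traitlets (the library underlying these widgets), so a slightly more informative handler can report the old and new values. A sketch, with foo_verbose being a name made up here:
def foo_verbose(change):
    # 'name', 'old' and 'new' describe which trait changed and how
    print("Trait '{0}' changed".format(change["name"]))
    print("old value: {0}".format(change["old"]))
    print("new value: {0}".format(change["new"]))
# scatter_plot.observe(foo_verbose, "y")  # attach it exactly like foo above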
scatter_plot.enable_move = True
Explanation: To allow the points in the Scatter to be moved interactively, we set the enable_move attribute to True
End of explanation
# First, we import the scales
from bqplot import LinearScale
# Let's create a scale for the x attribute, and a scale for the y attribute
x_sc = LinearScale()
y_sc = LinearScale()
Explanation: Go ahead, head over to the chart and move any point in some way. This move (which happens on the JavaScript side) should trigger our Python function foo.
Understanding how bqplot uses the Grammar of Graphics paradigm
bqplot has two different APIs. One is the matplotlib inspired pyplot which we used above (you can think of it as similar to qplot in ggplot2). The other one, the verbose API, is meant to expose every element of a plot individually, so that their attributes can be controlled in an atomic way. In order to truly use bqplot to build complex and feature-rich GUIs, it pays to understand the underlying theory that is used to create a plot.
To understand this verbose API, it helps to revisit what exactly the components of a plot are. The first thing we need is a Scale.
A Scale is a mapping from (a function that converts) data coordinates to figure coordinates. What this means is that a Scale takes a set of values in any arbitrary unit (say number of people, or $, or litres) and converts it to pixels (or colors for a ColorScale).
End of explanation
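To make the mapping idea concrete: a Scale can also be given explicit bounds, so the data-to-figure mapping is fixed rather than inferred from the data. A small sketch (min and max are optional traits of LinearScale):
from bqplot import LinearScale
# fix the mapped data range explicitly instead of letting bqplot infer it from the data
x_sc_fixed = LinearScale(min=0.0, max=100.0)
y_sc_fixed = LinearScale(min=-300.0, max=300.0)
# these can be passed to any Mark exactly like x_sc and y_sc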
from bqplot import Scatter
scatter_chart = Scatter(x=x_data, y=y_data, scales={"x": x_sc, "y": y_sc})
Explanation: Now, we need to create the actual Mark that will visually represent the data. Let's pick a Scatter chart to start.
End of explanation
from bqplot import Axis
x_ax = Axis(label="X", scale=x_sc)
y_ax = Axis(label="Y", scale=y_sc, orientation="vertical")
Explanation: Most of the time, the actual Figure coordinates don't really mean anything to us. So, what we need is the visual representation of our Scale, which is called an Axis.
End of explanation
from bqplot import Figure
fig = Figure(marks=[scatter_chart], title="A Figure", axes=[x_ax, y_ax])
fig
Explanation: And finally, we put it all together on a canvas, which is called a Figure.
End of explanation
from IPython.display import display
display(fig)
Explanation: The IPython display machinery displays the last returned value of a cell. If you wish to explicitly display a widget, you can call IPython.display.display.
End of explanation
# First, we generate some random color data.
color_data = np.random.randint(0, 2, size=100)
Explanation: Now that the plot has been generated, we can control every single attribute of it. Let's say we wanted to color the chart based on some other data.
End of explanation
from bqplot import ColorScale
# The colors trait controls the actual colors we want to map to. It can also take a min, mid, max list of
# colors to be interpolated between for continuous data.
col_sc = ColorScale(colors=["MediumSeaGreen", "Red"])
scatter_chart.scales = {"x": x_sc, "y": y_sc, "color": col_sc}
# We pass the color data to the Scatter Chart through its color attribute
scatter_chart.color = color_data
Explanation: Now, we define a ColorScale to map the color_data to actual colors
End of explanation
from bqplot import Bars
new_size = 50
scale = 100.0
x_data_new = np.arange(new_size)
y_data_new = np.cumsum(np.random.randn(new_size) * scale)
# All we need to do to add a bar chart to the Figure is pass the same scales to the Mark
bar_chart = Bars(x=x_data_new, y=y_data_new, scales={"x": x_sc, "y": y_sc})
Explanation: The grammar of graphics framework allows us to overlay multiple visualizations on a single Figure by having the visualizations share the Scales. So, for example, if we had a Bar chart that we would like to plot alongside the Scatter plot, we just pass it the same Scales.
End of explanation
fig.marks = [scatter_chart, bar_chart]
Explanation: Finally, we add the new Mark to the Figure to update the plot!
End of explanation |
10,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto
Step1: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x})
Step2: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real values functions $f(\mathbf{x})
Step3: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http
Step4: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x})
Step5: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveils that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution ober $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http
Step6: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^$ for any covariate $\mathbf{x}^$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http
Step7: The question now is
Step8: Now we can output the best parameters and plot the predictive distribution for those.
Step9: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n={-1,+1}^n$, which are represented via the Shogun class <a href="http
Step10: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood is implemented in Shogun is the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi
Step11: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http
Step12: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http
Step13: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http
Step14: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for that the marginal likelihood is maximised. If one was interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, as for example its gradient
Step15: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition
Step16: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the s <a href="http
Step17: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different to the one we saw in the above plot. This is caused by the different kernel scalling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maximum (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
Step18: Our found maximum nicely matches the result of the "grid-search". The take home message for this is | Python Code:
%matplotlib inline
# import all shogun classes
from modshogun import *
import random
import numpy as np
import matplotlib.pyplot as plt
from math import exp
Explanation: Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de">herrstrathmann.de</a>. Based on the GP framework of the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2013/votjak/8001">Google summer of code 2013 project</a> of Roman Votyakov - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/votjakovr">github.com/votjakovr</a>, and the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2012/walke434/39001">Google summer of code 2012 project</a> of Jacob Walker - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/puffin444">github.com/puffin444</a>
This notebook is about <a href="http://en.wikipedia.org/wiki/Bayesian_linear_regression">Bayesian regression</a> and <a href="http://en.wikipedia.org/wiki/Statistical_classification">classification</a> models with <a href="http://en.wikipedia.org/wiki/Gaussian_process">Gaussian Process (GP)</a> priors in Shogun. After providing a semi-formal introduction, we illustrate how to efficiently train them, use them for predictions, and automatically learn parameters.
End of explanation
# plot likelihood for three different noise levels $\sigma$ (which is not yet squared)
sigmas=np.array([0.5,1,2])
# likelihood instance
lik=GaussianLikelihood()
# A set of labels to consider
lab=RegressionLabels(np.linspace(-4.0,4.0, 200))
# A single 1D Gaussian response function, repeated once for each label
# this avoids doing a loop in python which would be slow
F=np.zeros(lab.get_num_labels())
# plot likelihood for all observations noise levels
plt.figure(figsize=(12, 4))
for sigma in sigmas:
# set observation noise, this is squared internally
lik.set_sigma(sigma)
# compute log-likelihood for all labels
log_liks=lik.get_log_probability_f(lab, F)
# plot likelihood functions, exponentiate since they were computed in log-domain
plt.plot(lab.get_labels(), map(exp,log_liks))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$y_i$")
plt.title("Regression Likelihoods for different observation noise levels")
_=plt.legend(["sigma=$%.1f$" % sigma for sigma in sigmas])
Explanation: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, written as
$\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}')),$
defines a distribution over real valued functions with mean value $m(\mathbf{x})=\mathbb{E}[f(\mathbf{x})]$ and inter-function covariance $k(\mathbf{x},\mathbf{x}')=\mathbb{E}[(f(\mathbf{x})-m(\mathbf{x}))(f(\mathbf{x}')-m(\mathbf{x}'))]$. This intuitively means that the function value at any point $\mathbf{x}$, i.e., $f(\mathbf{x})$, is a random variable with mean $m(\mathbf{x})$; if you take the average of infinitely many functions from the Gaussian Process, and evaluate them at $\mathbf{x}$, you will get this value. Similarly, the function values at two different points $\mathbf{x}, \mathbf{x}'$ have covariance $k(\mathbf{x}, \mathbf{x}')$. The formal definition is that a Gaussian Process is a collection of random variables (possibly infinitely many) of which any finite subset has a joint Gaussian distribution.
One can model data with Gaussian Processes via defining a joint distribution over
$n$ data points (labels in Shogun) $\mathbf{y}\in \mathcal{Y}^n$, from an $n$-dimensional continuous (regression) or discrete (classification) space. These data correspond to $n$ covariates $\mathbf{x}_i\in\mathcal{X}$ (features in Shogun) from the input space $\mathcal{X}$.
Hyper-parameters $\boldsymbol{\theta}$ which depend on the used model (details follow).
Latent Gaussian variables $\mathbf{f}\in\mathbb{R}^n$, coming from a GP, i.e., they have a joint Gaussian distribution. Every entry $f_i$ corresponds to the GP function $f(\mathbf{x_i})$ evaluated at covariate $\mathbf{x}_i$ for $1\leq i \leq n$.
The joint distribution takes the form
$p(\mathbf{f},\mathbf{y},\boldsymbol{\theta})=p(\boldsymbol{\theta})p(\mathbf{f}|\boldsymbol{\theta})p(\mathbf{y}|\mathbf{f}),$
where $\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}_\boldsymbol{\theta}, \mathbf{C}_\boldsymbol{\theta})$ is the joint Gaussian distribution for the GP variables, with mean $\mathbf{m}_\boldsymbol{\theta}$ and covariance $\mathbf{C}_\boldsymbol{\theta}$. The $(i,j)$-th entry of $\mathbf{C}_\boldsymbol{\theta}$ is given by the covariance or kernel between the $i$-th and $j$-th covariates, $k(\mathbf{x}_i, \mathbf{x}_j)$. Examples for kernel and mean functions are given later in the notebook.
Mean and covariance are both depending on hyper-parameters coming from a prior distribution $\boldsymbol{\theta}\sim p(\boldsymbol{\theta})$. The data itself $\mathbf{y}\in \mathcal{Y}^n$ (no assumptions on $\mathcal{Y}$ for now) is modelled by a likelihood function $p(\mathbf{y}|\mathbf{f})$, which gives the probability of the data $\mathbf{y}$ given a state of the latent Gaussian variables $\mathbf{f}$, i.e. $p(\mathbf{y}|\mathbf{f}):\mathcal{Y}^n\rightarrow [0,1]$.
In order to do inference for a new, unseen covariate $\mathbf{x}^*\in\mathcal{X}$, i.e., predicting its label $y^*\in\mathcal{Y}$ or in particular computing the predictive distribution for that label, we have to integrate over the posterior over the latent Gaussian variables (assume a fixed $\boldsymbol{\theta}$ for now, which means you can just ignore the symbol in the following if you want),
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}.$
This posterior, $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, can be obtained using standard <a href="http://en.wikipedia.org/wiki/Bayes'_theorem">Bayes-Rule</a> as
$p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})=\frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})}{p(\mathbf{y}|\boldsymbol{\theta})},$
with the so called evidence or marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$ given as another integral over the prior over the latent Gaussian variables
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In order to solve the above integrals, Shogun offers a variety of approximations. Don't worry, you will not have to deal with these nasty integrals on your own, but everything is hidden within Shogun. Though, if you like to play with these objects, you will be able to compute only parts.
Note that in the above description, we did not make any assumptions on the input space $\mathcal{X}$. As long as you define mean and covariance functions, and a likelihood, your data can have any form you like. Shogun in fact is able to deal with standard <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">dense numerical data</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html"> sparse data</a>, and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html">strings of any type</a>, and many more out of the box. We will provide some examples below.
To gain some intuition how these latent Gaussian variables behave, and how to model data with them, see the regression part of this notebook.
Non-Linear Bayesian Regression
Bayesian regression with Gaussian Processes is among the most fundamental applications of latent Gaussian models. As usual, the observed data come from a continuous space, i.e. $\mathbf{y}\in\mathbb{R}^n$, which is represented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CRegressionLabels.html">CRegressionLabels</a>. We assume that these observations come from some distribution $p(\mathbf{y}|\mathbf{f})$ that is based on a fixed state of latent Gaussian response variables $\mathbf{f}\in\mathbb{R}^n$. In fact, we assume that the true model is the latent Gaussian response variable (which defines a distribution over functions), plus some Gaussian observation noise, which is modelled by the likelihood as
$p(\mathbf{y}|\mathbf{f})=\mathcal{N}(\mathbf{f},\sigma^2\mathbf{I})$
This simple likelihood is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. It is the well known bell curve. Below, we plot the likelihood as a function of $\mathbf{y}$, for $n=1$.
End of explanation
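To connect the plot above with the formula: the likelihood of a single label is just a Gaussian density centred at the latent value. A small numpy-only sketch that reproduces the same curves, independent of Shogun:
# direct numpy version of p(y_i|f_i) = N(y_i; f_i, sigma^2) for f_i = 0
y = np.linspace(-4.0, 4.0, 200)
f_i = 0.0
for sigma in [0.5, 1.0, 2.0]:
    lik_np = np.exp(-0.5*(y-f_i)**2/sigma**2) / np.sqrt(2*np.pi*sigma**2)
    plt.plot(y, lik_np)
_=plt.title("Same regression likelihoods, computed directly with numpy")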
def generate_regression_toy_data(n=50, n_test=100, x_range=15, x_range_test=20, noise_var=0.4):
# training and test sine wave, test one has more points
X_train = np.random.rand(n)*x_range
X_test = np.linspace(0,x_range_test, 500)
# add noise to training observations
y_test = np.sin(X_test)
y_train = np.sin(X_train)+np.random.randn(n)*noise_var
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_regression_toy_data()
plt.figure(figsize=(16,4))
plt.plot(X_train, y_train, 'ro')
plt.plot(X_test, y_test)
plt.legend(["Noisy observations", "True model"])
plt.title("One-Dimensional Toy Regression Data")
plt.xlabel("$\mathbf{x}$")
_=plt.ylabel("$\mathbf{y}$")
Explanation: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real valued functions $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, where in the regression case, $\mathcal{X}=\mathbb{R}$.
As mentioned above, the joint distribution of a finite number (say $n$) of variables $\mathbf{f}\in\mathbb{R}^n$ from a Gaussian Process $\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}'))$, takes the form
$\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}_\boldsymbol{\theta}, \mathbf{C}_\boldsymbol{\theta}),$
where $\mathbf{m}_\boldsymbol{\theta}$ is the mean function evaluated at the covariates and $\mathbf{C}_\boldsymbol{\theta}$ is the pairwise covariance or kernel matrix of the input covariates $\mathbf{x}_i$. This means we can easily sample function realisations $\mathbf{f}^{(j)}$ from the Gaussian Process and, more importantly, visualise them.
To this end, let us consider the well-known and often used Gaussian Kernel or squared exponential covariance, which is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> in the parametric form (note that there are other forms in the literature)
$ k(\mathbf{x}, \mathbf{x}')=\exp\left( -\frac{||\mathbf{x}-\mathbf{x}'||_2^2}{\tau}\right),$
where $\tau$ is a hyper-parameter of the kernel. We will also use the constant <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CZeroMean.html">CZeroMean</a> mean function, which is suitable if the data's mean is zero (which can be achieved via removing it).
Let us consider some toy regression data in the form of a sine wave, which is observed at random points with some observations noise.
End of explanation
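For intuition, the kernel matrix used below can also be written out directly in numpy, mirroring the parametrisation given above (a sketch; squared_exponential is just an illustrative name):
def squared_exponential(X, tau):
    # X is a 1D array of covariates; returns the matrix with entries exp(-(x_i-x_j)^2/tau)
    sq_dists = (X[:, np.newaxis] - X[np.newaxis, :])**2
    return np.exp(-sq_dists/tau)
C_np = squared_exponential(X_train, 4.0)
print(C_np.shape)  # (n, n), symmetric and positive semi-definite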
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# compute covariances for different kernel parameters
taus=np.asarray([.1,4.,32.])
Cs=np.zeros(((len(X_train), len(X_train), len(taus))))
for i in range(len(taus)):
# compute unscaled kernel matrix (first parameter is maximum size in memory and not very important)
kernel=GaussianKernel(10, taus[i])
kernel.init(feats_train, feats_train)
Cs[:,:,i]=kernel.get_kernel_matrix()
# plot
plt.figure(figsize=(16,5))
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
plt.imshow(Cs[:,:,i], interpolation="nearest")
plt.xlabel("Covariate index")
plt.ylabel("Covariate index")
_=plt.title("tau=%.1f" % taus[i])
Explanation: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> with hyperparameter $\boldsymbol{\theta}={\tau}$ with a few differnt values. Note that in Gaussian Processes, kernels usually have a scaling parameter. We skip this one for now and cover it later.
End of explanation
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP prior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(np.zeros(len(X_train)), Cs[:,:,i], 3)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=X_train.argsort()
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}$ over a finite set of latent Gaussian variables $\mathbf{f}$, which we can sample from and plot. To this end, we use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStatistics.html">CStatistics</a>, which offers a method to sample from multivariate Gaussians.
End of explanation
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP posterior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# create inference method instance with very small observation noise to make
inf=ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
C_post=inf.get_posterior_covariance()
m_post=inf.get_posterior_mean()
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(m_post, C_post, 5)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=sorted(range(len(X_train)),key=lambda x:X_train[x])
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.plot(X_train, y_train, 'r*')
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveals that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution ober $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CExactInferenceMethod.html">CExactInferenceMethod</a> to do exactly this. The class is the general basis of exact GP regression in Shogun. We have to define all parts of the Gaussian Process for the inference method.
End of explanation
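For reference, the posterior that the exact inference method computes has a simple closed form in the regression case. A numpy sketch of the textbook formulas for a zero-mean prior and observation noise variance $\sigma^2$ (here $\sigma=1$ is an assumption and the kernel width 4 is just an example); up to numerical error these should agree with get_posterior_mean and get_posterior_covariance:
tau, sigma = 4.0, 1.0                                    # example width; noise std assumed to be 1
C = np.exp(-(X_train[:, np.newaxis]-X_train[np.newaxis, :])**2 / tau)
A = C + sigma**2 * np.eye(len(X_train))                  # covariance of the noisy observations
m_post_np = C.dot(np.linalg.solve(A, y_train))           # posterior mean of the latent f
C_post_np = C - C.dot(np.linalg.solve(A, C))             # posterior covariance of the latent f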
# helper function that plots predictive distribution and data
def plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances):
# evaluate predictive distribution in this range of y-values and preallocate predictive distribution
y_values=np.linspace(-3,3)
D=np.zeros((len(y_values), len(X_test)))
# evaluate normal distribution at every prediction point (column)
for j in range(np.shape(D)[1]):
# create gaussian distributio instance, expects mean vector and covariance matrix, reshape
gauss=GaussianDistribution(np.array(means[j]).reshape(1,), np.array(variances[j]).reshape(1,1))
# evaluate predictive distribution for test point, method expects matrix
D[:,j]=np.exp(gauss.log_pdf_multiple(y_values.reshape(1,len(y_values))))
plt.pcolor(X_test,y_values,D)
plt.colorbar()
plt.contour(X_test,y_values,D)
plt.plot(X_test,y_test, 'b', linewidth=3)
plt.plot(X_test,means, 'm--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.legend(["Truth", "Prediction", "Data"])
plt.figure(figsize=(18,10))
plt.suptitle("GP inference for different kernel widths")
for i in range(len(taus)):
plt.subplot(len(taus),1,i+1)
# create GP instance using inference method and train
# use Shogun objects from above
inf.set_kernel(GaussianKernel(10,taus[i]))
gp=GaussianProcessRegression(inf)
gp.train()
# predict labels for all test data (note that this produces the same as the below mean vector)
means = gp.apply(feats_test)
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# note: y_predicted == means
# plot predictive distribution and training data
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^*$ for any covariate $\mathbf{x}^*$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessRegression.html">CGaussianProcessRegression</a> to represent the whole GP under an interface to perform inference with. In addition, we use the helper class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianDistribution.html">CGaussianDistribution</a> to evaluate the log-likelihood for every test point's $\mathbf{x}^*_j$ value $\mathbf{y}_j^*$.
End of explanation
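The same per-point densities can also be evaluated without the helper class, by plugging the means and variances computed above into a normal density (a sketch, assuming scipy is available):
from scipy.stats import norm
y_values = np.linspace(-3, 3)
# column j holds the density of the predictive distribution at the j-th test covariate
D_np = np.array([norm.pdf(y_values, loc=m, scale=np.sqrt(v))
                 for m, v in zip(means, variances)]).T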
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
gp = GaussianProcessRegression(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, feats_train, labels_train, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP, train
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
best_sigma=GaussianLikelihood.obtain_from_generic(inf.get_model()).get_sigma()
print "Selected tau (kernel bandwidth):", best_width
print "Selected gamma (kernel scaling):", best_scale
print "Selected sigma (observation noise):", best_sigma
Explanation: The question now is: Which set of hyper-parameters $\boldsymbol{\theta}=\{\tau, \gamma, \sigma\}$ to take, where $\gamma$ is the kernel scaling (which we omitted so far), and $\sigma$ is the observation noise (which we left at its default value of one)? The question of model-selection will be handled in a bit more depth in the binary classification case. For now we just show the code to do it as a black box. See below for explanations.
End of explanation
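For the regression case the optimised quantity even has a closed form, so a brute-force alternative to the gradient search is easy to sketch (standard zero-mean GP log marginal likelihood; the kernel scaling is ignored here for brevity and the grid values are arbitrary examples):
def log_marginal_likelihood(tau, sigma):
    # log p(y|theta) for zero-mean GP regression with the squared exponential kernel from above
    K = np.exp(-(X_train[:, np.newaxis]-X_train[np.newaxis, :])**2 / tau)
    A = K + sigma**2 * np.eye(len(X_train))
    alpha = np.linalg.solve(A, y_train)
    sign, logdet = np.linalg.slogdet(A)
    return -0.5*y_train.dot(alpha) - 0.5*logdet - 0.5*len(X_train)*np.log(2*np.pi)
# naive grid search over kernel width and observation noise
grid = [(t, s) for t in [0.5, 1, 2, 4, 8, 16] for s in [0.25, 0.5, 1.0]]
print(max(grid, key=lambda ts: log_marginal_likelihood(*ts)))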
# train gp
gp.train()
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# plot predictive distribution
plt.figure(figsize=(18,5))
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("Maximum Likelihood II based inference")
Explanation: Now we can output the best parameters and plot the predictive distribution for those.
End of explanation
# two classification likelihoods in Shogun
logit=LogitLikelihood()
probit=ProbitLikelihood()
# A couple of Gaussian response functions, 1-dimensional here
F=np.linspace(-5.0,5.0)
# Single observation label with +1
lab=BinaryLabels(np.array([1.0]))
# compute log-likelihood for all values in F
log_liks_logit=np.zeros(len(F))
log_liks_probit=np.zeros(len(F))
for i in range(len(F)):
# Shogun expects a 1D array for f, not a single number
f=np.array(F[i]).reshape(1,)
log_liks_logit[i]=logit.get_log_probability_f(lab, f)
log_liks_probit[i]=probit.get_log_probability_f(lab, f)
# in fact, loops are slow and Shogun offers a method to compute the likelihood for many f. Much faster!
log_liks_logit=logit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
log_liks_probit=probit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
# plot the sigmoid functions, note that Shogun computes it in log-domain, so we have to exponentiate
plt.figure(figsize=(12, 4))
plt.plot(F, np.exp(log_liks_logit))
plt.plot(F, np.exp(log_liks_probit))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$f_i$")
plt.title("Classification Likelihoods")
_=plt.legend(["Logit", "Probit"])
Explanation: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n={-1,+1}^n$, which are represented via the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CBinaryLabels.html">CBinaryLabels</a>. To model these observations with a GP, we need a likelihood function $p(\mathbf{y}|\mathbf{f})$ that maps a set of such discrete observations to a probability, given a fixed response $\mathbf{f}$ of the Gaussian Process.
In regression, this was straightforward, as we could simply use the response variable $\mathbf{f}$ itself, plus some Gaussian noise, which gave rise to a probability distribution. However, now that the $\mathbf{y}$ are discrete, we cannot do the same thing. We rather need a function that squashes the Gaussian response variable itself to a probability, given some data. This is a common problem in Machine Learning and Statistics and is usually done with some sort of Sigmoid function of the form $\sigma:\mathbb{R}\rightarrow[0,1]$. One popular choice for such a function is the Logit likelihood, given by
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \frac{1}{1+\exp(-y_i f_i)}.$
This likelihood is implemented in Shogun under <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLogitLikelihood.html">CLogitLikelihood</a> and using it is sometimes referred to as logistic regression. Using it with GPs results in non-linear Bayesian logistic regression. We can easily use the class to illustrate the sigmoid function for a 1D example and a fixed data point with label $+1$.
End of explanation
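Both squashing functions are elementary, so the curves plotted above can be cross-checked directly with numpy/scipy for the fixed label $y_i=+1$ (a sketch, independent of Shogun):
from scipy.stats import norm
f_grid = np.linspace(-5.0, 5.0)
logit_np = 1.0/(1.0 + np.exp(-1.0*f_grid))  # logistic likelihood p(y_i=+1|f_i)
probit_np = norm.cdf(1.0*f_grid)            # Gaussian CDF likelihood p(y_i=+1|f_i)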
def generate_classification_toy_data(n_train=100, mean_a=np.asarray([0, 0]), std_dev_a=1.0, mean_b=3, std_dev_b=0.5):
# positive examples are distributed normally
X1 = (np.random.randn(n_train, 2)*std_dev_a+mean_a).T
# negative examples have a "ring"-like form
r = np.random.randn(n_train)*std_dev_b+mean_b
angle = np.random.randn(n_train)*2*np.pi
X2 = np.array([r*np.cos(angle)+mean_a[0], r*np.sin(angle)+mean_a[1]])
# stack positive and negative examples in a single array
X_train = np.hstack((X1,X2))
# label positive examples with +1, negative with -1
y_train = np.zeros(n_train*2)
y_train[:n_train] = 1
y_train[n_train:] = -1
return X_train, y_train
def plot_binary_data(X_train, y_train):
plt.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plt.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
X_train, y_train=generate_classification_toy_data()
plot_binary_data(X_train, y_train)
_=plt.title("2D Toy classification problem")
Explanation: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood implemented in Shogun, the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi:\mathbb{R}\rightarrow [0,1]$ is the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution function</a> (CDF) of the standard Gaussian distribution $\mathcal{N}(0,1)$. It is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CProbitLikelihood.html">CProbitLikelihood</a> and using it is referred to as probit regression. While the Gaussian CDF has some convenient properties for integrating over it (and thus allowing some different modelling decisions), it does not really matter which one you use in Shogun in most cases. However, for the sake of completeness, it is also plotted above, being very similar to the logit likelihood.
TODO: Show a function squashed through the logit likelihood
Recall that in order to do inference, we need to solve two integrals (in addition to the Bayes rule, see above)
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In classification, the second integral is not available in closed form since it is the convolution of a Gaussian, $p(\mathbf{f}|\boldsymbol{\theta})$, and a non-Gaussian, $p(\mathbf{y}|\mathbf{f})$, distribution. Therefore, we have to rely on approximations in order to compute and integrate over the posterior $p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})$. Shogun offers various standard methods from the literature to deal with this problem, including the Laplace approximation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLaplacianInferenceMethod.html">CLaplacianInferenceMethod</a>), Expectation Propagation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEPInferenceMethod.html">CEPInferenceMethod</a>) for inference and evaluatiing the marginal likelihood. These two approximations give rise to a Gaussian posterior $p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})$, which can then be easily computed and integrated over (all this is done by Shogun for you).
While the Laplace approximation is quite fast, EP usually has a better accuracy, in particular if one is not just interested in binary decisions but also in certainty values for these predictions. Go for Laplace if interested in binary decisions, and for EP otherwise.
TODO, add references to inference methods.
We will now give an example of how to do GP inference for binary classification in Shogun on some toy data. For that, we will first define a function to generate a classical non-linear classification problem.
End of explanation
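To give a flavour of what the Laplace approximation does internally (this is not Shogun's implementation), here is a tiny one-dimensional sketch: find the mode of the unnormalised log posterior with a few Newton steps and use the negative inverse curvature at the mode as the variance of the approximating Gaussian.
# 1D Laplace sketch: prior f ~ N(0,1), a single observation y=+1 with logit likelihood
def log_post(f):  return -np.log1p(np.exp(-f)) - 0.5*f**2            # log p(y=+1|f) + log N(f;0,1) + const
def grad(f):      return 1.0/(1.0+np.exp(f)) - f                      # first derivative of log_post
def hess(f):      s = 1.0/(1.0+np.exp(-f)); return -s*(1.0-s) - 1.0   # second derivative of log_post
f_hat = 0.0
for _ in range(10):                     # Newton iterations towards the posterior mode
    f_hat -= grad(f_hat)/hess(f_hat)
var_hat = -1.0/hess(f_hat)              # Laplace: Gaussian variance from the curvature at the mode
print("mode=%.3f variance=%.3f" % (f_hat, var_hat))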
# for building combinations of arrays
from itertools import product
# convert training data into Shogun representation
train_features = RealFeatures(X_train)
train_labels = BinaryLabels(y_train)
# generate all pairs in 2d range of testing data (full space), discretisation resolution is n_test
n_test=50
x1 = np.linspace(X_train[0,:].min()-1, X_train[0,:].max()+1, n_test)
x2 = np.linspace(X_train[1,:].min()-1, X_train[1,:].max()+1, n_test)
X_test = np.asarray(list(product(x1, x2))).T
# convert testing features into Shogun representation
test_features = RealFeatures(X_test)
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# you can easily switch between probit and logit likelihood models
# by uncommenting/commenting the following lines:
# create probit likelihood model
# lik = ProbitLikelihood()
# create logit likelihood model
lik = LogitLikelihood()
# you can easily switch between Laplace and EP approximation by
# uncommenting/commenting the following lines:
# specify Laplace approximation inference method
#inf = LaplacianInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# specify EP approximation inference method
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
# create and train GP classifier using the inference method chosen above
gp = GaussianProcessClassification(inf)
gp.train()
test_labels=gp.apply(test_features)
# plot data and decision boundary
plot_binary_data(X_train, y_train)
plt.pcolor(x1, x2, test_labels.get_labels().reshape(n_test, n_test))
_=plt.title('Decision boundary')
Explanation: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a>)) and the EP approximation to obtain a decision boundary for the two classes. You can easily exchange different likelihood models and inference methods.
End of explanation
# obtain class probabilities for the test features
p_test = gp.get_probabilities(test_features)
# create figure
plt.title('Training data, predictive probability and decision boundary')
# plot training data
plot_binary_data(X_train, y_train)
# plot decision boundary
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
# plot probabilities
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that we have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessClassification.html">CGaussianProcessClassification</a>
End of explanation
# generate some non-negative kernel widths
widths=2**np.linspace(-5,6,20)
# compute marginal likelihood under the EP approximation for every width
# use Shogun objects from above
marginal_likelihoods=np.zeros(len(widths))
for i in range(len(widths)):
# note that GP training is automatically done/updated if a parameter is changed. No need to call train again
kernel.set_width(widths[i])
marginal_likelihoods[i]=-inf.get_negative_log_marginal_likelihood()
# plot marginal likelihoods as a function of kernel width
plt.plot(np.log2(widths), marginal_likelihoods)
plt.title("Log Marginal likelihood for different kernels")
plt.xlabel("Kernel Width in log-scale")
_=plt.ylabel("Log-Marginal Likelihood")
print("Width with largest marginal likelihood:", widths[marginal_likelihoods.argmax()])
Explanation: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CInferenceMethod.html">CInferenceMethod</a>, i.e., every inference method in Shogun can do that. It is even possible to obtain the mean and covariance of the Gaussian approximation to the posterior $p(\mathbf{f}|\mathbf{y})$ using Shogun. In the following, we plot the marginal likelihood under the EP inference method (more accurate approximation) as a one dimensional function of the kernel width.
End of explanation
# again, use Shogun objects from above, but a few extremal widths
widths_subset=np.array([widths[0], widths[marginal_likelihoods.argmax()], widths[len(widths)-1]])
plt.figure(figsize=(18, 5))
for i in range(len(widths_subset)):
plt.subplot(1,len(widths_subset),i+1)
kernel.set_width(widths_subset[i])
# obtain and plot predictive distribution
p_test = gp.get_probabilities(test_features)
title_str="Width=%.2f, " % widths_subset[i]
if i == 0:
title_str+="too complex, overfitting"
elif i == 1:
title_str+="just right"
else:
title_str+="too smooth, underfitting"
plt.title(title_str)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for which the marginal likelihood is maximised. If one is interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, for example its gradient: it turns out that the above function is smooth and we can use the usual optimisation techniques to find extrema. This is called maximum likelihood II. Let's have a closer look.
Excurs: Model-Selection with Gaussian Processes
First, let us have a look at the predictive distributions of some of the above kernel widths
End of explanation
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, train_features, train_labels, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
print("Selected kernel bandwidth:", best_width)
print("Selected kernel scale:", best_scale)
Explanation: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition: The interpretation of the marginal likelihood
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$
is the probability of the data given the model parameters $\boldsymbol{\theta}$. Note that this is averaged over all possible configurations of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ given a fixed configuration of parameters. However, since this is a probability distribution, it has to integrate to $1$. This means that models that are too complex (and thus able to explain too many different data configurations) and models that are too simple (and thus not able to explain the current data) give rise to a small marginal likelihood. Only when the model is just complex enough to explain the data well (but not more complex) is the marginal likelihood maximised. This is an implementation of a concept called <a href="http://en.wikipedia.org/wiki/Occam's_razor#Probability_theory_and_statistics">Occam's razor</a>, and is a nice motivation for why you should be Bayesian if you can -- overfitting doesn't happen that quickly.
As mentioned before, Shogun is able to automagically learn all of the hyper-parameters $\boldsymbol{\theta}$ using gradient-based optimisation on the marginal likelihood (whose derivatives are computed internally). To do this, we use the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a>. Note that we could also use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGridSearchModelSelection.html">CGridSearchModelSelection</a> to do a standard grid-search, such as is done for Support Vector Machines. However, this is highly inefficient, in particular when the number of parameters grows. In addition, in order to evaluate parameter states, we have to use the classes <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientEvaluation.html">CGradientEvaluation</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GradientCriterion.html">GradientCriterion</a>, which is also much cheaper than the usual <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCrossValidation.html">CCrossValidation</a>, since it just evaluates the gradient of the marginal likelihood rather than performing many training and testing runs. This is another very nice motivation for using Gaussian Processes: optimising parameters is much easier. In the following, we demonstrate how to select all parameters of the used model. In Shogun, parameter configurations (corresponding to $\boldsymbol{\theta}$) are stored in instances of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CParameterCombination.html">CParameterCombination</a>, which can be applied to machines.
This approach is known as maximum likelihood II (the 2 is for the second level, averaging over all possible $\mathbf{f}|\boldsymbol{\theta}$), or evidence maximisation.
End of explanation
# train gp
gp.train()
# visualise predictive distribution
p_test = gp.get_probabilities(test_features)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> bandwidth, and its scale (which is stored in the GP itself since Shogun kernels do not support scaling). We can now again visualise the predictive distribution, and also output the best parameters.
End of explanation
# parameter space, increase resolution if you want finer plots, takes long though
resolution=5
widths=2**np.linspace(-4,10,resolution)
scales=2**np.linspace(-5,10,resolution)
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
inf.set_tolerance(1e-3)
# compute marginal likelihood for every parameter combination
# use Shogun objects from above
marginal_likelihoods=np.zeros((len(widths), len(scales)))
for i in range(len(widths)):
for j in range(len(scales)):
kernel.set_width(widths[i])
inf.set_scale(scales[j])
marginal_likelihoods[i,j]=-inf.get_negative_log_marginal_likelihood()
# contour plot of marginal likelihood as a function of kernel width and scale
plt.contour(np.log2(widths), np.log2(scales), marginal_likelihoods)
plt.colorbar()
plt.xlabel("Kernel width (log-scale)")
plt.ylabel("Kernel scale (log-scale)")
_=plt.title("Log Marginal Likelihood")
# plot our found best parameters
_=plt.plot([np.log2(best_width)], [np.log2(best_scale)], 'r*', markersize=20)
Explanation: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different from the one we saw in the above plot. This is caused by the different kernel scaling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maxima (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive, however.
End of explanation
# for measuring runtime
import time
# simple regression data
X_train, y_train, X_test, y_test = generate_regression_toy_data(n=1000)
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# inducing features (here: a random grid over the input space, try out others)
n_inducing=10
#X_inducing=linspace(X_train.min(), X_train.max(), n_inducing)
X_inducing=np.random.rand(n_inducing)*(X_train.max()-X_train.min())+X_train.min()
feats_inducing=RealFeatures(X_inducing.reshape(1,len(X_inducing)))
# create FITC inference method and GP instance
inf = FITCInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma), feats_inducing)
gp = GaussianProcessRegression(inf)
start=time.time()
gp.train()
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
print("FITC inference took %.2f seconds" % (time.time()-start))
# exact GP
start=time.time()
inf_exact = ExactInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma))
inf_exact.set_scale(best_scale)
gp_exact = GaussianProcessRegression(inf_exact)
gp_exact.train()
means_exact = gp_exact.get_mean_vector(feats_test)
variances_exact = gp_exact.get_variance_vector(feats_test)
print("Exact inference took %.2f seconds" % (time.time()-start))
# comparison plot FITC and exact inference, plot 95% confidence of both predictive distributions
plt.figure(figsize=(18,5))
plt.plot(X_test, y_test, color="black", linewidth=3)
plt.plot(X_test, means, 'r--', linewidth=3)
plt.plot(X_test, means_exact, 'b--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.plot(X_inducing, np.zeros(len(X_inducing)), 'g*', markersize=15)
# tube plot of 95% confidence
error=1.96*np.sqrt(variances)
plt.plot(X_test,means-error, color='red', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means-error,means+error,color='red', alpha=0.3)
error_exact=1.96*np.sqrt(variances_exact)
plt.plot(X_test,means_exact-error_exact, color='blue', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means_exact-error_exact,means_exact+error_exact,color='blue', alpha=0.3)
# plot upper confidence lines later due to legend
plt.plot(X_test,means+error, color='red', alpha=0.3, linewidth=3)
plt.plot(X_test,means_exact+error_exact, color='blue', alpha=0.3, linewidth=3)
plt.legend(["True", "FITC prediction", "Exact prediction", "Data", "Inducing points", "95% FITC", "95% Exact"])
_=plt.title("Comparison FITC and Exact Regression")
Explanation: The maximum we found nicely matches the result of the "grid-search". The take-home message for this is: with Gaussian Processes, you neither need to do expensive brute-force approaches to find the best parameters (but you can use gradient descent), nor do you need to do expensive cross-validation to evaluate your model (but can use the Bayesian concept of maximum likelihood II).
Excurs: Large-Scale Regression
One "problem" with the classical method of Gaussian Process based inference is the computational complexity of $\mathcal{O}(n^3)$, where $n$ is the number of training examples. This is caused by matrix inversion, Cholesky factorization, etc. Up to a few thousand points, this is feasible. You will quickly run into memory and runtime problems for very large problems.
One way of approaching very large problems is called Fully Independent Training Components, which is a low-rank plus diagonal approximation to the exact covariance. The rough idea is to specify a set of $m\ll n$ inducing points and to base all computations on the covariance between training/test and inducing points only, which intuitively corresponds to combining various training points around an inducing point. This reduces the computational complexity to $\mathcal{O}(nm^2)$, where again $n$ is the number of training points, and $m$ is the number of inducing points. This is quite a significant decrease, in particular if the number of inducing points is much smaller than the number of examples.
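For instance, with $n=10{,}000$ training points and $m=100$ inducing points, the cost is roughly $nm^2=10^8$ operations instead of $n^3=10^{12}$, a speed-up on the order of $10^4$.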
The optimal way to specify inducing points is to densely and uniformly place them in the input space. However, this might quickly become infeasible in high dimensions. In this case, a random subset of the training data might be a good idea.
In Shogun, the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFITCInferenceMethod.html">CFITCInferenceMethod</a> handles inference for regression with the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. Below, we demonstrate its usage on a toy example and compare to exact regression. Note that <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a> still works as before. We compare the runtime for inference with both GP.
First, note that changing the inference method only requires the change of a single line of code
End of explanation |
10,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mining Ulysses
Step1: A better method in Python 3 is -
Step2: These are the words that appear more than 200 times and I have excluded the really common words (greater than 944 times) | Python Code:
s
clean_s = removeDelimiter(s," ",[".",",",";","_","-",":","!","?","\"",")","("])
wordlist = clean_s.split()
dictionary = {}
for word in wordlist:
if word in dictionary:
tmp = dictionary[word]
dictionary[word]=tmp+1
else:
dictionary[word]=1
import operator
sorted_dict = sorted(dictionary.items(), key=operator.itemgetter(1))
Explanation: Mining Ulysses
End of explanation
sorted(dictionary.items(), key=lambda x: x[1])
#Much more interesting are the uncommon words
infreq_metric = []
for ordered_words,value_words in sorted_dict:
if value_words == 1:
infreq_metric.append((ordered_words,value_words))
for word in infreq_metric:
print('"{}"'.format(word[0]))
#the very common words
freq_metric = []
for ordered_words,value_words in sorted_dict:
if value_words > 944:
freq_metric.append((ordered_words,value_words))
for pair in freq_metric:
print('The word "{}" appears {} times'.format(pair[0],pair[1]))
Explanation: A better method in Python 3 is -
End of explanation
freq_metric = []
for ordered_words,value_words in sorted_dict:
if value_words >200 and value_words< 944:
freq_metric.append(str(value_words))
print(ordered_words,value_words)
%matplotlib inline
import matplotlib.pyplot as plt
numbs = [int(x) for x in freq_metric]
plt.plot(numbs)
l = [makeSortable(str(dictionary[k])) + " # " + k for k in dictionary.keys()]
for w in sorted(l):
print(w)
count = {}
for k in dictionary.keys():
if dictionary[k] in count:
tmp = count[dictionary[k]]
count[dictionary[k]] = tmp + 1
else:
count[dictionary[k]] = 1
for k in sorted(count.keys()):
print(str(count[k]) + " words appear " + str(k) + " times")
# %load text_analysis.py
# this code is licenced under creative commons licence as long as you
# cite the author: Rene Pickhardt / www.rene-pickhardt.de
# adds leading zeros to a string so all result strings can be ordered
def makeSortable(w):
l = len(w)
tmp = ""
for i in range(5-l):
tmp = tmp + "0"
tmp = tmp + w
return tmp
#replaces all kind of structures passed in l in a text s with the 2nd argument
def removeDelimiter(s,new,l):
for c in l:
s = s.replace(c, new);
return s;
def analyzeWords(s):
s = removeDelimiter(s," ",[".",",",";","_","-",":","!","?","\"",")","("])
wordlist = s.split()
dictionary = {}
for word in wordlist:
if word in dictionary:
tmp = dictionary[word]
dictionary[word]=tmp+1
else:
dictionary[word]=1
l = [makeSortable(str(dictionary[k])) + " # " + k for k in dictionary.keys()]
for w in sorted(l):
print(w)
count = {}
for k in dictionary.keys():
if dictionary[k] in count:
tmp = count[dictionary[k]]
count[dictionary[k]] = tmp + 1
else:
count[dictionary[k]] = 1
for k in sorted(count.keys()):
print(str(count[k]) + " words appear " + str(k) + " times")
def differentWords(s):
s = removeDelimiter(s," ",[".",",",";","_","-",":","!","?","\"",")","("])
wordlist = s.split()
count = 0
dictionary = {}
for word in wordlist:
if word in dictionary:
tmp = dictionary[word]
dictionary[word]=tmp+1
else:
dictionary[word]=1
count = count + 1
print(str(count) + " different words")
print("every word was used " + str(float(len(wordlist))/float(count)) + " times on average")
return count
def analyzeSentences(s):
s = removeDelimiter(s,".",[".",";",":","!","?"])
sentenceList = s.split(".")
wordList = s.split()
wordCount = len(wordList)
sentenceCount = len(sentenceList)
print(str(wordCount) + " words in " + str(sentenceCount) + " sentences ==> " + str(float(wordCount)/float(sentenceCount)) + " words per sentence")
max = 0
satz = ""
for w in sentenceList:
if len(w) > max:
max = len(w);
satz = w;
print(satz + "laenge " + str(len(satz)))
texts = ["ulysses.txt"]
for text in texts:
print(text)
datei = open(text,'r')
s = datei.read().lower()
analyzeSentences(s)
differentWords(s)
analyzeWords(s)
datei.close()
Explanation: These are the words that appear more than 200 times and I have excluded the really common words (greater than 944 times)
End of explanation |
10,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scratch in a notebook
There is a JavaScript version of Scratch
Step1: If the result is empty, it means the files have already been copied. We want to obtain this
Step4: On exécute le code suivant pour faire apparaître la fenêtre. Dernière précision, sauver le notebook ne sauve pas l'animation Scratch, il faut le faire soi-même (le résultat ne passe pas très dans la documentation encore, problème de frame et de javascript). | Python Code:
import code_beatrix.jsscripts.snap
%load_ext code_beatrix
import os, glob
js_path = os.path.dirname(code_beatrix.jsscripts.snap.__file__)
files = [ os.path.split(_)[-1] for _ in glob.glob(js_path + "/*.js") ]
print(",".join(files))
path = "/static/snap/"
js_libs = [path + _ for _ in files ]
import notebook
print("fichier à récupérer dans ", "..." + js_path[-40:])
print("fichier à copier à ", os.path.join(os.path.dirname(notebook.__file__),"static", "snap"))
from code_beatrix.jsscripts import copy_jstool2notebook
copy_jstool2notebook("snap")
Explanation: Scratch in a notebook
There is a JavaScript version of Scratch: snap. The sources can be retrieved either from the site or from GitHub jmoenig/Snap--Build-Your-Own-Blocks (which has a few more images and characters). I copied the sources into the code_beatrix module. Here is an example that shows how to display a Snap interface from a Jupyter notebook. First of all, the JavaScript code has to be copied into a Jupyter directory so that the local server can find it.
End of explanation
from pyquickhelper.helpgen import NbImage
NbImage("screenshot_scratch_nb.png", width="75%")
Explanation: If the result is empty, it means the files have already been copied. We want to obtain this:
End of explanation
html_src = """
<h2>Simple Scratch</h2>
<div id="scratch1-div">
<iframe src="/static/snap/snap.html" width="1000" height="600" scrolling="auto">
</iframe>
</div>
"""
test_js = ""  # the original JavaScript snippet was stripped in this export; an empty script still loads js_libs
import IPython
from IPython.core.display import display_html, display_javascript, Javascript
display_html(IPython.core.display.HTML(data=html_src))
display_javascript( Javascript(data=test_js, lib= js_libs))
Explanation: We run the following code to make the window appear. One last note: saving the notebook does not save the Scratch animation, you have to do that yourself (the result does not yet render very well in the documentation, due to a frame and JavaScript issue).
End of explanation |
10,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras
時系列データの予測
Step1: データセットの作成
Step2: 活性化関数にsigmoidやtanhを使うときは入力のスケールに大きな影響をうける
入力は[0, 1]に正規化するとよい
scikit-learnのMinMaxScalerが便利
Step3: 1つ前の時刻tの値から次の時刻t+1の値を予測するのが課題
(t)と(t+1)の値のペアを訓練データとする
Xの方はlook_back(いくつ前まで使うか)によて複数の値があるため2次元配列になる
Step4: trainXは (samples, features) の配列
LSTMでは入力を (samples, time steps, features) の配列にする必要がある
look_backが大きい場合は入力が系列であるがfeaturesと考える?
Step5: LSTMの訓練
LSTMはRNNの一種
BPTT (Backpropagation Through Time) で訓練する
LSTMは最近のシーケンスを記憶できる Memory Block を使う
Forgate Gate
Step6: 予測
Step7: 結果をプロット
Step8: Windows Size
過去の3つ分のデータ(t-2, t-1, t) から次のデータ (t+1) を予測する
このやり方では過去のデータを系列ではなく次元長として扱う(次元長は固定)
Step9: 系列長は1のままで入力次元を3としている
Step10: WindowSize=1より少し改善した!
入力を特徴ではなく系列として扱うアプローチ
過去の観測を別々の入力特徴量として表現するのではなく、入力特徴量の系列として使用することができます
これは実際に問題の正確な枠組みになります
入力の系列3の間はLSTMが記憶している?
系列長を変えられる?(0パディング?)
Step11: 結果はちょっと悪化した・・・
Step12: LSTMの内部状態
Kerasのデフォルトでは各訓練バッチで内部状態がリセットされる
またpredict()やevaluate()を呼ぶたびにリセットされる
statefulにすることで訓練中はずっと内部状態を維持することができる
```python
LSTMオブジェクトの作成
model.add(LSTM(4,
batch_input_shape=(batch_size, time_steps, features),
stateful=True)
訓練ループの書き方
for i in range(100)
Step13: Stacked LSTM
LSTMを複数つなげることができる
1つ前のLSTMが最終出力ではなく出力の系列を返す必要がある
return_sequences=Trueとする
python
model.add(LSTM(4,
batch_input_shape=(batch_size, look_back, 1),
stateful=True,
return_sequences=True))
model.add(LSTM(4,
batch_input_shape=(batch_size, look_back, 1),
stateful=True)) | Python Code:
%matplotlib inline
import pandas
import matplotlib.pyplot as plt
dataset = pandas.read_csv('data/international-airline-passengers.csv',
usecols=[1], engine='python', skipfooter=3)
plt.plot(dataset)
plt.show()
dataset
Explanation: Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras
Predicting time-series data
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense, LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# fixing the seed is recommended so results are reproducible
np.random.seed(7)
# load the dataset
dataframe = pandas.read_csv('data/international-airline-passengers.csv', usecols=[1], engine='python', skipfooter=3)
dataset = dataframe.values
type(dataframe), type(dataset)
dataset = dataset.astype('float32')
dataset.shape
Explanation: Creating the dataset
End of explanation
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
dataset[:10]
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]
print(len(train), len(test))
Explanation: Activation functions such as sigmoid and tanh are strongly affected by the scale of the inputs
It is a good idea to normalize the inputs to [0, 1]
scikit-learn's MinMaxScaler is convenient for this
End of explanation
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset) - look_back - 1):
a = dataset[i:(i + look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return np.array(dataX), np.array(dataY)
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
print(trainX.shape)
print(trainY.shape)
print(testX.shape)
print(testY.shape)
Explanation: The task is to predict the value at the next time step t+1 from the value at the previous time step t
Pairs of values at (t) and (t+1) are used as the training data
X becomes a 2D array because it can hold several values depending on look_back (how many previous steps are used)
End of explanation
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print(trainX.shape)
print(testX.shape)
Explanation: trainX is an array of shape (samples, features)
For an LSTM the input has to be reshaped into an array of shape (samples, time steps, features)
When look_back is large the input is really a sequence, but here it is treated as features?
End of explanation
model = Sequential()
# input_shape=(input_length, input_dim)
# the input is look_back-dimensional data with sequence length 1; the output is a 4-dimensional vector
# with sequence length 1 the memory is hardly used? the data passes straight through the LSTM
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
Explanation: Training the LSTM
An LSTM is a kind of RNN
It is trained with BPTT (Backpropagation Through Time)
An LSTM uses Memory Blocks that can remember the recent part of a sequence
Forget Gate: decides what to discard from the memory
Input Gate: decides which values from the input are used to update the memory state
Output Gate: decides what to output based on the input and the memory state
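For reference, a standard textbook formulation of these gates (a sketch; Keras' internal parameterisation may differ in minor details) is:
$f_t=\sigma(W_f x_t+U_f h_{t-1}+b_f)$, $i_t=\sigma(W_i x_t+U_i h_{t-1}+b_i)$, $o_t=\sigma(W_o x_t+U_o h_{t-1}+b_o)$
$c_t=f_t\odot c_{t-1}+i_t\odot\tanh(W_c x_t+U_c h_{t-1}+b_c)$, $h_t=o_t\odot\tanh(c_t)$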
End of explanation
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# the outputs are normalized, so convert them back to the original scale
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
print(trainPredict.shape, trainY.shape)
print(testPredict.shape, testY.shape)
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))
Explanation: Prediction
End of explanation
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict
# plot the original data (blue)
plt.plot(scaler.inverse_transform(dataset))
# plot the in-training predictions (green)
plt.plot(trainPredictPlot)
# plot the test-data predictions
plt.plot(testPredictPlot)
Explanation: Plotting the results
End of explanation
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
print(trainX.shape)
print(trainY.shape)
print(testX.shape)
print(testY.shape)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
print(trainX.shape)
print(testX.shape)
Explanation: Window Size
Predict the next value (t+1) from the previous three values (t-2, t-1, t)
In this approach the past values are treated as input dimensions rather than as a sequence (the input dimension is fixed)
End of explanation
model = Sequential()
# input_shape=(input_length, input_dim)
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
# 予測
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# 元のスケールに戻す
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))
Explanation: The sequence length stays 1 while the input dimension is set to 3
End of explanation
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# [samples, time steps, features]
# 3次元の系列長1のデータ => 1次元の系列長3のデータ
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))
print(trainX.shape, testX.shape)
model = Sequential()
# input_shape=(input_length, input_dim)
# 入力データの次元が1で系列長がlook_backになった!
model.add(LSTM(4, input_shape=(look_back, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
# 予測
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# 元のスケールに戻す
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))
Explanation: A slight improvement over WindowSize=1!
An approach that treats the input as a sequence instead of as features
Instead of representing past observations as separate input features, we can use them as a sequence of input features
This is actually the more faithful framing of the problem
Does the LSTM keep its memory across the 3 input steps?
Can the sequence length be varied (zero padding?)
End of explanation
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict
# 元データをプロット(青)
plt.plot(scaler.inverse_transform(dataset))
# 訓練内データの予測をプロット(緑)
plt.plot(trainPredictPlot)
# テストデータの予測をプロット
plt.plot(testPredictPlot)
Explanation: The result got slightly worse...
End of explanation
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# [samples, time steps, features]
# 3次元の系列長1のデータ => 1次元の系列長3のデータ
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
#model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
for i in range(100):
model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
# 予測
trainPredict = model.predict(trainX, batch_size=batch_size)
testPredict = model.predict(testX, batch_size=batch_size)
# 元のスケールに戻す
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict
# 元データをプロット(青)
plt.plot(scaler.inverse_transform(dataset))
# 訓練内データの予測をプロット(緑)
plt.plot(trainPredictPlot)
# テストデータの予測をプロット
plt.plot(testPredictPlot)
Explanation: The internal state of the LSTM
By default Keras resets the internal state after each training batch
It is also reset every time predict() or evaluate() is called
Making the layer stateful lets it keep its internal state throughout training
```python
LSTMオブジェクトの作成
model.add(LSTM(4,
batch_input_shape=(batch_size, time_steps, features),
stateful=True)
訓練ループの書き方
for i in range(100):
model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
予測の仕方
model.predict(trainX, batch_size=batch_size)
```
LSTM作成時にstateful=Trueを指定する
batch_input_shapeでバッチサイズなどの情報も追加する
fit時はshuffle=Falseにする
各エポックの最後で明示的にreset_states()する
predict時もbatch_sizeを与える
End of explanation
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# [samples, time steps, features]
# 3次元の系列長1のデータ => 1次元の系列長3のデータ
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True,
return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
#model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
for i in range(100):
model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
# 予測
trainPredict = model.predict(trainX, batch_size=batch_size)
testPredict = model.predict(testX, batch_size=batch_size)
# 元のスケールに戻す
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0]))
print('Test Score: %.2f RMSE' % (testScore))
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict
# 元データをプロット(青)
plt.plot(scaler.inverse_transform(dataset))
# 訓練内データの予測をプロット(緑)
plt.plot(trainPredictPlot)
# テストデータの予測をプロット
plt.plot(testPredictPlot)
Explanation: Stacked LSTM
Several LSTMs can be chained together
The preceding LSTM then has to return the sequence of outputs instead of only its final output
This is done by setting return_sequences=True
python
model.add(LSTM(4,
batch_input_shape=(batch_size, look_back, 1),
stateful=True,
return_sequences=True))
model.add(LSTM(4,
batch_input_shape=(batch_size, look_back, 1),
stateful=True))
End of explanation |
10,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Marginal Generating Units in ERCOT
Project Goals and Model Choices
The goal of this project is to build a model that can predict the type of fossil marginal generating units (MGU) that will provide electricity for additional demand at any given set of grid conditions. This type of problem can be very difficult to solve, especially if the model is also trying to predict grid conditions like demand or wind generation. We are simplifying the model by treating these inputs as exogenous - the time of day or day of the year doesn't matter.
Predicting which individual power plant will provide marginal generation under a given set of grid conditions is also difficult, and prone to overfitting. We will group individual fossil power plants based on their fuel type, heat rate, and historical operating behavior using k-means clustering. The model will predict which group or groups is likely to provide marginal generation.
We use 9 years (2007-2015) of hourly generation and load data for the ERCOT power grid. Over the 9 years of data we see a large increase in the amount of wind generation and increased generation from natural gas power plants.
Model Inputs
As planned, the model will have the following inputs
Step1: Hourly Fossil Power Plant Generation (EPA data)
This figure shows the hourly gross load from three sample plants over the last 6 months of 2015. These three sample plants happen to show a range of different sizes and behaviors.
The left and middle subplots represent coal plants with 1 and 2 units, all of which have minimum operating loads. We hope that aggregating facilities into groups will allow us to ignore shutoff below the minimum load - we care about the change in group generation rather than if an individual power plant goes from off to on.
The plant on the right consists of two natural gas combustion turbines, which can quickly turn on and ramp up. It never appears to hit its maximum generation of ~250MW.
Step2: Distribution of Wind Power Over Time (ERCOT data)
ERCOT provides hourly data of load, wind generation, and the percent of load that is served by wind generation. This figure shows the distribution of that last dataset by year. It is easy to see that more of the load is served by wind each year. The distribution also flattens out - fewer hours see a very small amount of the load covered by wind.
Each violin has a small boxplot in the middle of it. The white circle at the center shows the median value, which increases every year, and was over 10% in 2015.
Step3: ERCOT Load and Wind Generation Over Time
These figures show monthly average load and wind generation over each year. Load (on the left) follows a predictable pattern with peak demand in the summer months. Wind generation (on the right) is a little messier. Over most years there is a dip in the summer, but we don't see the same dip in 2015.
Step4: ERCOT Installed Wind Capacity
This helps explain the figure above. There was lots of wind installed over the course of 2015, which could have helped bump up the output. | Python Code:
from IPython.display import SVG
SVG('https://www.dropbox.com/s/k8ac0la03hkjo5f/ERCOT%20power%20plants%202007.svg?raw=1')
Explanation: Predicting Marginal Generating Units in ERCOT
Project Goals and Model Choices
The goal of this project is to build a model that can predict the type of fossil marginal generating units (MGU) that will provide electricity for additional demand at any given set of grid conditions. This type of problem can be very difficult to solve, especially if the model is also trying to predict grid conditions like demand or wind generation. We are simplifying the model by treating these inputs as exogenous - the time of day or day of the year doesn't matter.
Predicting which individual power plant will provide marginal generation under a given set of grid conditions is also difficult, and prone to overfitting. We will group individual fossil power plants based on their fuel type, heat rate, and historical operating behavior using k-means clustering. The model will predict which group or groups is likely to provide marginal generation.
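As a rough sketch of how such a grouping could be computed with scikit-learn (the column names and values below are illustrative placeholders, not the project's actual data):
```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# one row per fossil plant; hypothetical features
plants = pd.DataFrame({
    "heat_rate": [10.2, 9.1, 11.5, 7.8],          # MMBtu/MWh
    "capacity_factor": [0.75, 0.60, 0.15, 0.45],  # historical average
    "is_gas": [0, 0, 1, 1],                       # fuel-type dummy
})
X = StandardScaler().fit_transform(plants)
plants["cluster"] = KMeans(n_clusters=2, random_state=0).fit_predict(X)
print(plants)
```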
We use 9 years (2007-2015) of hourly generation and load data for the ERCOT power grid. Over the 9 years of data we see a large increase in the amount of wind generation and increased generation from natural gas power plants.
Model Inputs
As planned, the model will have the following inputs:
- Current installed capacity for wind and each fossil group
- Current unused capacity of each fossil group
- This serves to normalize inputs, and is done rather than using total load as an input
- Current wind generation
- We only have hourly wind generation, but effects of intra-hour variability on the system should increase when wind generation is high
- Wind generation in recent past (past 1, 2, ..., n hours - maybe up to n=5?)
- Combined with change in demand to create a change in "net load" to account for ramp rate constraints
- Fuel price for electric customers in Texas
Data Sources
We are using data from ERCOT, EIA, and EPA as described below.
- ERCOT
- Hourly wind generation
- Total hourly demand
- Energy Information Agency (EIA)
- Average heat rate and primary fuel source for each power plant (Form 923)
- Maximum generating capacity of each power plant (Form 860)
- Historical annual and monthly capacity factors for each power plant (923 and 860)
- Historical natural gas/coal prices for electric customers in Texas (monthly/quarterly)
- Environmental Protection Agency (EPA)
- Hourly gross load (MW) for every fossil power plant in ERCOT
Initial Data Exploration
Our work so far has primarily consisted of model design and data collection. We have collected all of the data described above, and started importing/cleaning/merging data.
Fossil Power Plants (EIA data)
The figure below shows all ERCOT fossil power plants active in 2007. Non-fossil power plants - which are not available in the hourly EPA data - have been removed from the dataset. Coal power plants tended to be larger - at least 500 megawatts (MW) - and were running at high capacity factors. Natural gas power plants covered a wide range of sizes from a few MW to over 2 GW, but the largest plants had low capacity factors. There are also a few small diesel and petroleum coke facilities.
End of explanation
SVG('https://www.dropbox.com/s/k79xmwfbu4dt16h/Sample%20hourly%20load.svg?raw=1')
Explanation: Hourly Fossil Power Plant Generation (EPA data)
This figure shows the hourly gross load from three sample plants over the last 6 months of 2015. These three sample plants happen to show a range of different sizes and behaviors.
The left and middle subplots represent coal plants with 1 and 2 units, all of which have minimum operating loads. We hope that aggregating facilities into groups will allow us to ignore shutoff below the minimum load - we care about the change in group generation rather than if an individual power plant goes from off to on.
The plant on the right consists of two natural gas combustion turbines, which can quickly turn on and ramp up. It never appears to hit its maximum generation of ~250MW.
End of explanation
SVG('https://www.dropbox.com/s/wjpa9sbj3sklpfc/Wind%20violin%20plot.svg?raw=1')
Explanation: Distribution of Wind Power Over Time (ERCOT data)
ERCOT provides hourly data of load, wind generation, and the percent of load that is served by wind generation. This figure shows the distribution of that last dataset by year. It is easy to see that more of the load is served by wind each year. The distribution also flattens out - fewer hours see a very small amount of the load covered by wind.
Each violin has a small boxplot in the middle of it. The white circle at the center shows the median value, which increases every year, and was over 10% in 2015.
End of explanation
SVG('https://www.dropbox.com/s/onet87a33pjvhym/Monthly%20ERCOT%20load%20and%20wind2.svg?raw=1')
Explanation: ERCOT Load and Wind Generation Over Time
These figures show monthly average load and wind generation over each year. Load (on the left) follows a predictable pattern with peak demand in the summer months. Wind generation (on the right) is a little messier. Over most years there is a dip in the summer, but we don't see the same dip in 2015.
End of explanation
SVG('https://www.dropbox.com/s/9k7nyhjt78k1quy/Monthly%20ERCOT%20wind%20capacity.svg?raw=1')
Explanation: ERCOT Installed Wind Capacity
This helps explain the figure above. There was lots of wind installed over the course of 2015, which could have helped bump up the output.
End of explanation |
10,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basics
Step1: Let's create a simple regular 10x10 degree grid with grid points at the center of each 10x10 degree cell.
First by hand to understand what is going on underneath
Step2: These are just the dimensions or we can also call them the "sides" of the array that defines all the gridpoints.
Step3: now we can create a BasicGrid. We can also define the shape of the grid. The first part of the shape must be in longitude direction.
Step4: The grid point indices or numbers are useful when creating lookup tables between grids.
We can now use the manualgrid instance to find the nearest gpi to any longitude and latitude
Step5: The same grid can also be created by a method for creating regular grids
Step6: If your grid has a 2D shape like the ones we just created then you can also get the row and the column of a grid point.
This can be useful if you know that you have data stored on a specific grid and you want to read the data from a grid point.
Step7: Iteration over gridpoints
Step8: Calculation of lookup tables
If you have two grids and you know that you want to get the nearest neighbors in the second grid for all of the first grid's points, you can calculate a lookup table once and reuse it later.
Step9: Now lets calculate a lookup table to the regular 10x10° grid we created earlier
Step10: The lookup table contains the grid point indices of the other grid, autogrid in this case.
Step11: Storing and loading grids
Grids can be stored to disk as CF compliant netCDF files
Step12: Define geodetic datum for grid | Python Code:
import pygeogrids.grids as grids
import numpy as np
Explanation: Basics
End of explanation
# create the longitudes
lons = np.arange(-180 + 5, 180, 10)
print(lons)
lats = np.arange(90 - 5, -90, -10)
print(lats)
Explanation: Let's create a simple regular 10x10 degree grid with grid points at the center of each 10x10 degree cell.
First by hand to understand what is going on underneath
End of explanation
# create all the grid points by using the numpy.meshgrid function
longrid, latgrid = np.meshgrid(lons, lats)
Explanation: These are just the dimensions or we can also call them the "sides" of the array that defines all the gridpoints.
End of explanation
manualgrid = grids.BasicGrid(longrid.flatten(), latgrid.flatten(), shape=(18, 36))
# Each point of the grid automatically got a grid point number
gpis, gridlons, gridlats = manualgrid.get_grid_points()
print(gpis[:10], gridlons[:10], gridlats[:10])
Explanation: now we can create a BasicGrid. We can also define the shape of the grid. The first part of the shape must be in longitude direction.
End of explanation
ngpi, distance = manualgrid.find_nearest_gpi(15.84, 28.76)
print(ngpi, distance)
# convert the gpi to longitude and latitude
print(manualgrid.gpi2lonlat(ngpi))
Explanation: The grid point indices or numbers are useful when creating lookup tables between grids.
We can now use the manualgrid instance to find the nearest gpi to any longitude and latitude
End of explanation
autogrid = grids.genreg_grid(10, 10)
autogrid == manualgrid
Explanation: The same grid can also be created by a method for creating regular grids
End of explanation
row, col = autogrid.gpi2rowcol(ngpi)
print(row, col)
Explanation: If your grid has a 2D shape like the ones we just created then you can also get the row and the column of a grid point.
This can be useful if you know that you have data stored on a specific grid and you want to read the data from a grid point.
End of explanation
for i, (gpi, lon, lat) in enumerate(autogrid.grid_points()):
print(gpi, lon, lat)
if i==10: # this is just to keep the example output short
break
Explanation: Iteration over gridpoints
End of explanation
# lets generate a second grid with 10 random points on the Earth surface.
randlat = np.random.random(10) * 180 - 90
randlon = np.random.random(10) * 360 - 180
print(randlat)
print(randlon)
# This grid has no meaningful 2D shape so none is given
randgrid = grids.BasicGrid(randlon, randlat)
Explanation: Calculation of lookup tables
If you have two grids and you know that you want to get the nearest neighbors in the second grid for all of the first grid's points, you can calculate a lookup table once and reuse it later.
End of explanation
lut = randgrid.calc_lut(autogrid)
print(lut)
Explanation: Now lets calculate a lookup table to the regular 10x10° grid we created earlier
End of explanation
lut_lons, lut_lats = autogrid.gpi2lonlat(lut)
print(lut_lats)
print(lut_lons)
Explanation: The lookup table contains the grid point indices of the other grid, autogrid in this case.
End of explanation
import pygeogrids.netcdf as nc
nc.save_grid('example.nc', randgrid)
loadedgrid = nc.load_grid('example.nc')
loadedgrid
randgrid
Explanation: Storing and loading grids
Grids can be stored to disk as CF compliant netCDF files
End of explanation
grid_WGS84 = grids.BasicGrid(randlon, randlat, geodatum='WGS84')
grid_GRS80 = grids.BasicGrid(randlon, randlat, geodatum='GRS80')
grid_WGS84.geodatum.geod.a
grid_GRS80.geodatum.geod.a
grid_WGS84.kdTree.geodatum.geod.sphere
Explanation: Define geodetic datum for grid
End of explanation |
10,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare Solutions - Homogenous 3D
Brendan Smithyman | November 2015
This notebook shows comparisons between the responses of the different solvers.
Step1: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case)
Step2: Relative error of the MiniZephyr solution (in %) | Python Code:
import sys
sys.path.append('../')
import numpy as np
from zephyr.backend import MiniZephyr25D, SparseKaiserSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
systemConfig = {
'dx': 1., # m
'dz': 1., # m
'c': 2500., # m/s
'rho': 1., # kg/m^3
'nx': 100, # count
'nz': 200, # count
'freq': 2e2, # Hz
'nky': 160,
'3D': True,
}
nx = systemConfig['nx']
nz = systemConfig['nz']
dx = systemConfig['dx']
dz = systemConfig['dz']
MZ = MiniZephyr25D(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SKS = SparseKaiserSource(systemConfig)
xs, zs = 25, 25
sloc = np.array([xs, zs]).reshape((1,2))
q = SKS(sloc)
uMZ = MZ*q
uAH = AH(sloc)
clip = 0.01
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ.reshape((nz, nx))), **plotopts)
plt.title('MZ Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ.reshape((nz, nx)).real, **plotopts)
plt.title('MZ Real')
fig.tight_layout()
Explanation: Compare Solutions - Homogenous 3D
Brendan Smithyman | November 2015
This notebook shows comparisons between the responses of the different solvers.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=1000)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr')
plt.legend(loc=1)
plt.title('Real part of response through xs=%d'%xs)
Explanation: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case):
Source region
PML regions
End of explanation
uMZr = uMZ.reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 50.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 10.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
Explanation: Relative error of the MiniZephyr solution (in %)
End of explanation |
10,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MixUp and Friends
Callbacks that can apply the MixUp (and variants) data augmentation to your training
Step1: Most Mix variants will perform the data augmentation on the batch, so to implement your Mix you should adjust the before_batch event with however your training regiment requires. Also if a different loss function is needed, you should adjust the lf as well. alpha is passed to Beta to create a sampler.
MixUp -
Step2: This is a modified implementation of mixup that will always blend at least 50% of the original image. The original paper calls for a Beta distribution which is passed the same value of alpha for each position in the loss function (alpha = beta = #). Unlike the original paper, this implementation of mixup selects the max of lambda which means that if the value that is sampled as lambda is less than 0.5 (i.e the original image would be <50% represented, 1-lambda is used instead.
The blending of two images is determined by alpha.
$alpha=1.$
Step3: We can examine the results of our Callback by grabbing our data during fit at before_batch like so
Step4: We can see that every so often an image gets "mixed" with another.
How do we train? You can pass the Callback either to Learner directly or to cbs in your fit function
Step5: CutMix -
Step6: Similar to MixUp, CutMix will cut a random box out of two images and swap them together. We can look at a few examples below
Step7: We train with it in the exact same way as well
Step8: Export - | Python Code:
from fastai.vision.all import *
#|export
def reduce_loss(
loss:Tensor,
reduction:str='mean' # PyTorch loss reduction
)->Tensor:
"Reduce the loss based on `reduction`"
return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss
#|export
class MixHandler(Callback):
"A handler class for implementing `MixUp` style scheduling"
run_valid = False
def __init__(self,
alpha:float=0.5 # Determine `Beta` distribution in range (0.,inf]
):
self.distrib = Beta(tensor(alpha), tensor(alpha))
def before_train(self):
"Determine whether to stack y"
self.stack_y = getattr(self.learn.loss_func, 'y_int', False)
if self.stack_y: self.old_lf,self.learn.loss_func = self.learn.loss_func,self.lf
def after_train(self):
"Set the loss function back to the previous loss"
if self.stack_y: self.learn.loss_func = self.old_lf
def after_cancel_train(self):
"If training is canceled, still set the loss function back"
self.after_train()
def after_cancel_fit(self):
"If fit is canceled, still set the loss function back"
self.after_train()
def lf(self, pred, *yb):
"lf is a loss function that applies the original loss function on both outputs based on `self.lam`"
if not self.training: return self.old_lf(pred, *yb)
with NoneReduce(self.old_lf) as lf:
loss = torch.lerp(lf(pred,*self.yb1), lf(pred,*yb), self.lam)
return reduce_loss(loss, getattr(self.old_lf, 'reduction', 'mean'))
Explanation: MixUp and Friends
Callbacks that can apply the MixUp (and variants) data augmentation to your training
End of explanation
#|export
class MixUp(MixHandler):
"Implementation of https://arxiv.org/abs/1710.09412"
def __init__(self,
alpha:float=.4 # Determine `Beta` distribution in range (0.,inf]
):
super().__init__(alpha)
def before_batch(self):
"Blend xb and yb with another random item in a second batch (xb1,yb1) with `lam` weights"
lam = self.distrib.sample((self.y.size(0),)).squeeze().to(self.x.device)
lam = torch.stack([lam, 1-lam], 1)
self.lam = lam.max(1)[0]
shuffle = torch.randperm(self.y.size(0)).to(self.x.device)
xb1,self.yb1 = tuple(L(self.xb).itemgot(shuffle)),tuple(L(self.yb).itemgot(shuffle))
nx_dims = len(self.x.size())
self.learn.xb = tuple(L(xb1,self.xb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=nx_dims-1)))
if not self.stack_y:
ny_dims = len(self.y.size())
self.learn.yb = tuple(L(self.yb1,self.yb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=ny_dims-1)))
Explanation: Most Mix variants will perform the data augmentation on the batch, so to implement your Mix you should adjust the before_batch event with however your training regimen requires. Also if a different loss function is needed, you should adjust the lf as well. alpha is passed to Beta to create a sampler.
MixUp -
End of explanation
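As a sketch of that pattern (this FixedMix class is not part of the library; it is only a hypothetical example built on the MixHandler above), a custom variant could blend each image with a shuffled partner using a constant weight instead of a Beta sample:
class FixedMix(MixHandler):
    "Hypothetical example: blend with a constant weight instead of sampling from Beta"
    def __init__(self, lam:float=0.7):
        super().__init__(1.)
        self.fixed_lam = lam
    def before_batch(self):
        bs = self.y.size(0)
        # the same mixing weight for every item in the batch
        self.lam = torch.full((bs,), self.fixed_lam, device=self.x.device)
        shuffle = torch.randperm(bs).to(self.x.device)
        xb1,self.yb1 = tuple(L(self.xb).itemgot(shuffle)),tuple(L(self.yb).itemgot(shuffle))
        nx_dims = len(self.x.size())
        self.learn.xb = tuple(L(xb1,self.xb).map_zip(torch.lerp, weight=unsqueeze(self.lam, n=nx_dims-1)))
        # assumes a loss with `y_int` (stack_y is True), so `lf` above handles the targets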
path = untar_data(URLs.PETS)
pat = r'([^/]+)_\d+.*$'
fnames = get_image_files(path/'images')
item_tfms = [Resize(256, method='crop')]
batch_tfms = [*aug_transforms(size=224), Normalize.from_stats(*imagenet_stats)]
dls = ImageDataLoaders.from_name_re(path, fnames, pat, bs=64, item_tfms=item_tfms,
batch_tfms=batch_tfms)
Explanation: This is a modified implementation of mixup that will always blend at least 50% of the original image. The original paper calls for a Beta distribution which is passed the same value of alpha for each position in the loss function (alpha = beta = #). Unlike the original paper, this implementation of mixup selects the max of lambda, which means that if the value sampled as lambda is less than 0.5 (i.e. the original image would be <50% represented), 1-lambda is used instead.
The blending of two images is determined by alpha.
$alpha=1.$:
* All values between 0 and 1 have an equal chance of being sampled.
* Any amount of mixing between the two images is possible
$alpha<1.$:
* The values closer to 0 and 1 become more likely to be sampled than the values near 0.5.
* It is more likely that one of the images will be selected with a slight amount of the other image.
$alpha>1.$:
* The values closer to 0.5 become more likely than the numbers close to 0 or 1.
* It is more likely that the images will be blended evenly.
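To make this concrete, here is a small illustrative check (not part of the original notebook) that samples lambda for a few alpha values and applies the same max trick MixUp uses:
from torch.distributions import Beta as _Beta
for a in (0.4, 1., 4.):
    samp = _Beta(torch.tensor(a), torch.tensor(a)).sample((10000,))
    lam = torch.stack([samp, 1 - samp], 1).max(1)[0]  # always keep at least 50% of the original image
    print('alpha=%.1f: mean lam=%.3f, min lam=%.3f' % (a, lam.mean().item(), lam.min().item()))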
First we'll look at a very minimalistic example to show how our data is being generated with the PETS dataset:
End of explanation
mixup = MixUp(1.)
with Learner(dls, nn.Linear(3,4), loss_func=CrossEntropyLossFlat(), cbs=mixup) as learn:
learn.epoch,learn.training = 0,True
learn.dl = dls.train
b = dls.one_batch()
learn._split(b)
learn('before_train')
learn('before_batch')
_,axs = plt.subplots(3,3, figsize=(9,9))
dls.show_batch(b=(mixup.x,mixup.y), ctxs=axs.flatten())
#|hide
test_ne(b[0], mixup.x)
test_eq(b[1], mixup.y)
Explanation: We can examine the results of our Callback by grabbing our data during fit at before_batch like so:
End of explanation
#|slow
learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), metrics=[error_rate])
learn.fit_one_cycle(1, cbs=mixup)
Explanation: We can see that every so often an image gets "mixed" with another.
How do we train? You can pass the Callback either to Learner directly or to cbs in your fit function:
End of explanation
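If I read the fastai API correctly (this snippet is a sketch and is not run in the original notebook), the callback can instead be attached when the Learner is constructed, so it applies to every later fit call:
learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(),
                       metrics=[error_rate], cbs=MixUp(0.4))
learn.fit_one_cycle(1)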
#|export
class CutMix(MixHandler):
"Implementation of https://arxiv.org/abs/1905.04899"
def __init__(self,
alpha:float=1. # Determine `Beta` distribution in range (0.,inf]
):
super().__init__(alpha)
def before_batch(self):
"Add `rand_bbox` patches with size based on `lam` and location chosen randomly."
bs, _, H, W = self.x.size()
self.lam = self.distrib.sample((1,)).to(self.x.device)
shuffle = torch.randperm(bs).to(self.x.device)
xb1,self.yb1 = self.x[shuffle], tuple((self.y[shuffle],))
x1, y1, x2, y2 = self.rand_bbox(W, H, self.lam)
self.learn.xb[0][..., y1:y2, x1:x2] = xb1[..., y1:y2, x1:x2]
self.lam = (1 - ((x2-x1)*(y2-y1))/float(W*H))
if not self.stack_y:
ny_dims = len(self.y.size())
self.learn.yb = tuple(L(self.yb1,self.yb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=ny_dims-1)))
def rand_bbox(self,
W:int, # Width bbox will be
H:int, # Height bbox will be
lam:Tensor # lambda sample from Beta distribution i.e tensor([0.3647])
)->tuple: # Represents the top-left pixel location and the bottom-right pixel location
"Give a bounding box location based on the size of the im and a weight"
cut_rat = torch.sqrt(1. - lam).to(self.x.device)
cut_w = torch.round(W * cut_rat).type(torch.long).to(self.x.device)
cut_h = torch.round(H * cut_rat).type(torch.long).to(self.x.device)
# uniform
cx = torch.randint(0, W, (1,)).to(self.x.device)
cy = torch.randint(0, H, (1,)).to(self.x.device)
x1 = torch.clamp(cx - cut_w // 2, 0, W)
y1 = torch.clamp(cy - cut_h // 2, 0, H)
x2 = torch.clamp(cx + cut_w // 2, 0, W)
y2 = torch.clamp(cy + cut_h // 2, 0, H)
return x1, y1, x2, y2
Explanation: CutMix -
End of explanation
cutmix = CutMix(1.)
with Learner(dls, nn.Linear(3,4), loss_func=CrossEntropyLossFlat(), cbs=cutmix) as learn:
learn.epoch,learn.training = 0,True
learn.dl = dls.train
b = dls.one_batch()
learn._split(b)
learn('before_train')
learn('before_batch')
_,axs = plt.subplots(3,3, figsize=(9,9))
dls.show_batch(b=(cutmix.x,cutmix.y), ctxs=axs.flatten())
Explanation: Similar to MixUp, CutMix will cut a random box out of two images and swap them together. We can look at a few examples below:
End of explanation
#|slow
learn = vision_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), metrics=[accuracy, error_rate])
learn.fit_one_cycle(1, cbs=cutmix)
Explanation: We train with it in the exact same way as well
End of explanation
#|hide
from nbdev.export import notebook2script
notebook2script()
Explanation: Export -
End of explanation |
10,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Announcement
The IEEE student branch is offering a Mathematica training session this Thursday, February 25, from 1:30 pm to 5:30 pm, covering the basics of the software. Registration is required.
http://ieee.aees.be/fr/accueil/25-francais/activites/conferences/151-formation-mathematica
Step1: Import all the functions and a few variables from Sympy
Step2: Several things are not in the course notes
Compute the gcd of $$p(x)=x^5 - 20x^4 + 140x^3 - 430x^2 + 579x - 270$$ and $$q(x)=x^6 - 25x^5 + 243x^4 - 1163x^3 + 2852x^2 - 3348x + 1440$$
Step3: 7 Differential and integral calculus
7.1 Limits
Step4: 7.2 Sums
Compute the sum of the numbers 134, 245, 325, 412, 57.
Step5: Compute the sum
$$\sum_{i=0}^n i$$
Step6: Compute the sum
$$\sum_{k=1}^\infty {1 \over k^6}$$
Step7: 7.3 Products
Compute the product
$$\prod_{n=1}^{2016} 2n+1$$
Step8: 7.4 Differential calculus
Compute the derivative of $$x^5+bx$$
Step9: Compute the derivative of $$\arcsin(x)$$
Step10: 7.5 Integral calculus
Compute the integral $$\int\log(x)\, dx$$
Step11: Compute the integral $$\int a^x\, dx$$
Step12: Compute the integral $$\int x^a\, dx$$
Step13: Compute the integral
$$\int \sec^2(x)\,dx$$
Step14: $$\int_0^5\int_0^2 x^2y\,dx\,dy$$
Step15: 7.6 Unevaluated sums, products, derivatives and integrals
Step16: 7.7 Series expansions
Compute the Taylor series of $\tan(x)$ at $x_0=0$ to order 14.
Step17: 7.8 Differential equations
Step18: Find a function $f(x)$ such that ${d\over dx} f(x) = f(x)$
Step19: Find a function $f(x)$ such that ${d^2\over dx^2} f(x) = -f(x)$
Step20: Solve $$y''-4y'+5y=0$$.
Step21: 8 Linear algebra
8.1 Defining a matrix
Define the matrix
$$M=\begin{bmatrix}
2 & 9 & 3 \\ 4 & 5 & 10 \\ 2 & 0 & 3
\end{bmatrix}$$
Step22: Define the matrix
$$N=\begin{bmatrix}
2 & 9 & 3 \\ 4 & 5 & 10 \\ -6 & -1 & -17
\end{bmatrix}$$
Step23: Define the vector
$$v=\begin{bmatrix}
5 \\ 2 \\ 1
\end{bmatrix}$$
Step24: 8.2 Basic operations
Step25: 8.3 Accessing entries
Step26: 8.4 Building special matrices
Step27: 8.5 Reduced row echelon form
Compute the reduced row echelon form of $M$ and $N$.
Step28: 8.6 Null space
Compute the null space of the matrices $M$ and $N$.
Step29: 8.7 Determinant
Compute the determinant of the matrices $M$ and $N$.
Step30: 8.8 Characteristic polynomial
Compute the characteristic polynomial of the matrices $M$ and $N$.
Step31: 8.9 Eigenvalues and eigenvectors
Compute the eigenvalues and eigenvectors of
$$K=\begin{bmatrix}
93 & 27 & -57 \\ -40 & 180 & -140 \\ -15 & 27 & 51
\end{bmatrix}$$
Step32: In general, the roots can be more complicated | Python Code:
from __future__ import division
Explanation: Announcement
The IEEE student branch is offering a Mathematica training session this Thursday, February 25, from 1:30 pm to 5:30 pm, covering the basics of the software. Registration is required.
http://ieee.aees.be/fr/accueil/25-francais/activites/conferences/151-formation-mathematica
Initialization
So that division behaves as in Python 3:
End of explanation
from sympy import *
from sympy.abc import a,b,c,k,n,t,u,v,w,x,y,z
init_printing(pretty_print=True, use_latex='mathjax')
Explanation: Import all the functions and a few variables from Sympy:
End of explanation
p = x**5 - 20*x**4 + 140*x**3 - 430*x**2 + 579*x - 270
q = x**6 - 25*x**5 + 243*x**4 - 1163*x**3 + 2852*x**2 - 3348*x + 1440
gcd(4,6)
gcd(p,q)
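# Not in the original notebook: factoring both polynomials makes the shared
# factors behind the gcd above visible.
factor(p), factor(q)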
Explanation: Several things are not in the course notes
Compute the gcd of $$p(x)=x^5 - 20x^4 + 140x^3 - 430x^2 + 579x - 270$$ and $$q(x)=x^6 - 25x^5 + 243x^4 - 1163x^3 + 2852x^2 - 3348x + 1440$$
End of explanation
from sympy.abc import x
limit(1/x, x, 0, dir='+')
oo
limit(1/x, x, oo)
Explanation: 7 Differential and integral calculus
7.1 Limits
End of explanation
sum([134, 245, 325, 412, 57])
sum([134, 245, 325, 412, 57, x])
sum([134, 245, 325, 412, 57, x, []])
Explanation: 7.2 Sums
Compute the sum of the numbers 134, 245, 325, 412, 57.
End of explanation
from sympy.abc import i
summation(i, (i,0,n))
summation(i**2, (i,0,2016))
Explanation: Compute the sum
$$\sum_{i=0}^n i$$
End of explanation
summation(1/k**6, (k, 1, oo))
Explanation: Compute the sum
$$\sum_{k=1}^\infty {1 \over k^6}$$
End of explanation
product(2*n+1, (n,1,2016))
Explanation: 7.3 Products
Compute the product
$$\prod_{n=1}^{2016} 2n+1$$
End of explanation
b,x
diff(x**5+b*x, b)
diff(x**5+b*x, x)
Explanation: 7.4 Differential calculus
Compute the derivative of $$x^5+bx$$
End of explanation
diff(asin(x), x)
Explanation: Compute the derivative of $$\arcsin(x)$$
End of explanation
integrate(log(x), x)
Explanation: 7.5 Integral calculus
Compute the integral $$\int\log(x)\, dx$$
End of explanation
integrate(a**x, x)
Explanation: Compute the integral $$\int a^x\, dx$$
End of explanation
integrate(x**a, x)
log(100, 10)
log?
Explanation: Compute the integral $$\int x^a\, dx$$
End of explanation
integrate(sec(x)**2, x)
integrate(integrate(x**2*y, x), y)
Explanation: Compute the integral
$$\int \sec^2(x)\,dx$$
End of explanation
integrate(x**2*y, (x,0,2), (y,0,5))
Explanation: $$\int_0^5\int_0^2 x^2y\,dx\,dy$$
End of explanation
A = Sum(1/k**6, (k,1,oo))
B = Product(2*n+1, (n,1,21))
C = Derivative(asin(x), x)
D = Integral(log(x), x)
Eq(A, A.doit())
Eq(B, B.doit())
Eq(C, C.doit())
Eq(D, D.doit())
Explanation: 7.6 Unevaluated sums, products, derivatives and integrals
End of explanation
series(tan(x), x, 0, 14)
series(sin(x), x, 0, 10)
Explanation: 7.7 Series expansions
Compute the Taylor series of $\tan(x)$ at $x_0=0$ to order 14.
End of explanation
from sympy import E
A = Derivative(E**x, x)
Eq(A, A.doit())
Explanation: 7.8 Differential equations
End of explanation
f = Function('f')
f(x)
eq = Eq(Derivative(f(x),x), f(x))
dsolve(eq)
Explanation: Find a function $f(x)$ such that ${d\over dx} f(x) = f(x)$
End of explanation
eq2 = Eq(Derivative(f(x),x,x), -f(x))
dsolve(eq2)
Derivative(f(x),x,x,x,x,x)
Derivative(f(x),x,5)
f(x).diff(x,5)
Explanation: Find a function $f(x)$ such that ${d^2\over dx^2} f(x) = -f(x)$
End of explanation
from sympy.abc import x,y
eq = Eq(y(x).diff(x,x)-4*y(x).diff(x)+5*y(x),0)
dsolve(eq, y(x))
Explanation: Solve $$y''-4y'+5y=0$$.
End of explanation
Matrix([[2, 9, 3], [4, 5, 10], [2, 0, 3]])
M = Matrix(3,3,[2, 9, 3, 4, 5, 10, 2, 0, 3])
Explanation: 8 Linear algebra
8.1 Defining a matrix
Define the matrix
$$M=\begin{bmatrix}
2 & 9 & 3 \\ 4 & 5 & 10 \\ 2 & 0 & 3
\end{bmatrix}$$
End of explanation
N = Matrix(3,3,[2,9,3,4,5,10,-6,-1,-17]); N
Explanation: Define the matrix
$$N=\begin{bmatrix}
2 & 9 & 3 \\ 4 & 5 & 10 \\ -6 & -1 & -17
\end{bmatrix}$$
End of explanation
v = Matrix([5,2,1]); v
Explanation: Define the vector
$$v=\begin{bmatrix}
5 \\ 2 \\ 1
\end{bmatrix}$$
End of explanation
M, N
M + N
M * 3
M * N
M * v
M ** -1
N ** -1
M.transpose()
Explanation: 8.2 Basic operations
End of explanation
M
M[1,1]
M[2,1]
M.row(0)
M.col(1)
M
M[0,0] = pi
Explanation: 8.3 Accessing entries
End of explanation
zeros(4,6)
ones(3,8)
eye(5)
diag(3,4,5)
diag(3,4,5,M)
Explanation: 8.4 Building special matrices
End of explanation
M
M.rref()
N
N.rref()
Explanation: 8.5 Reduced row echelon form
Compute the reduced row echelon form of $M$ and $N$.
End of explanation
M.nullspace()
N.nullspace()
Explanation: 8.6 Null space
Compute the null space of the matrices $M$ and $N$.
End of explanation
M.det()
N.det()
Explanation: 8.7 Determinant
Compute the determinant of the matrices $M$ and $N$.
End of explanation
from sympy.abc import x
M.charpoly(x)
M.charpoly(x).as_expr()
N.charpoly(x).as_expr()
Explanation: 8.8 Characteristic polynomial
Compute the characteristic polynomial of the matrices $M$ and $N$.
End of explanation
K = Matrix(3,3,[93,27,-57,-40,180,-140,-15,27,51])
K
K.eigenvals()
K.eigenvects()
Explanation: 8.9 Eigenvalues and eigenvectors
Compute the eigenvalues and eigenvectors of
$$K=\begin{bmatrix}
93 & 27 & -57 \\ -40 & 180 & -140 \\ -15 & 27 & 51
\end{bmatrix}$$
End of explanation
M.eigenvals()
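# Not in the original notebook: numeric approximations make the complicated
# roots easier to read (eigenvals() returns {eigenvalue: multiplicity}).
{val.evalf(6): mult for val, mult in M.eigenvals().items()}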
Explanation: In general, the roots can be more complicated:
End of explanation |
10,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Data Munging
Step1: ...and you want to filter it on some criteria. Pandas makes that easy with Boolean Indexing
Step2: This works great right? Unfortunately not, because once we
Step3: There's the warning.
So what should we have done differently? The warning suggests using ".loc[row_indexer, col_indexer]". So let's try subsetting the DataFrame the same way as before, but this time using the df.loc[ ] method.
Re-Creating Our New Dataframe Using .loc[]
Step4: Two warnings this time!
OK, So What's Going On?
Recall that our "criteria" variable is a Pandas Series of Boolean True/False values, corresponding to whether a row of 'df' meets our Number>300 criteria.
Step5: The Pandas Docs say a "common operation is the use of boolean vectors to filter the data" as we've done here. But apparently a boolean vector is not the "row_indexer" the warning advises us to use with .loc[] for creating new dataframes. Instead, Pandas wants us to use .loc[] with a vector of row-numbers (technically, "row labels", which here are numbers).
Solution
We can get to that "row_indexer" with one extra line of code. Building on what we had before. Instead of creating our new dataframe by filtering rows with a vector of True/False like below...
Step6: We first grab the indices of that filtered dataframe using .index...
Step7: And pass that list of indices to .loc[ ] to create our new dataframe
Step8: Now we can add a new column without throwing The Warning <sup>(tm)</sup> | Python Code:
import pandas as pd
df = pd.DataFrame({'Number' : [100,200,300,400,500], 'Letter' : ['a','b','c', 'd', 'e']})
df
Explanation: Pandas Data Munging: Avoiding that 'SettingWithCopyWarning'
If you use Python for data analysis, you probably use Pandas for Data Munging. And if you use Pandas, you've probably come across the warning below:
```
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
```
The Pandas documentation is great in general, but it's easy to read through the link above and still be confused. Or if you're like me, you'll read the documentation page, think "Oh, I get it," and then get the same warning again.
A Simple Reproducible Example of The Warning<sup>(tm)</sup>
Here's where this issue pops up. Say you have some data:
End of explanation
criteria = df['Number']>300
criteria
#Keep only rows which correspond to 'Number'>300 ('True' in the 'criteria' vector above)
df[criteria]
Explanation: ...and you want to filter it on some criteria. Pandas makes that easy with Boolean Indexing
End of explanation
#Create a new DataFrame based on filtering criteria
df_2 = df[criteria]
#Assign a new column and print output
df_2['new column'] = 'new value'
df_2
Explanation: This works great right? Unfortunately not, because once we:
1. Use that filtering code to create a new Pandas DataFrame, and
2. Assign a new column or change an existing column in that DataFrame
like so...
End of explanation
df.loc[criteria, :]
#Create New DataFrame Based on Filtering Criteria
df_2 = df.loc[criteria, :]
#Add a New Column to the DataFrame
df_2.loc[:, 'new column'] = 'new value'
df_2
Explanation: There's the warning.
So what should we have done differently? The warning suggests using ".loc[row_indexer, col_indexer]". So let's try subsetting the DataFrame the same way as before, but this time using the df.loc[ ] method.
Re-Creating Our New Dataframe Using .loc[]
End of explanation
criteria
Explanation: Two warnings this time!
OK, So What's Going On?
Recall that our "criteria" variable is a Pandas Series of Boolean True/False values, corresponding to whether a row of 'df' meets our Number>300 criteria.
End of explanation
df_2 = df[criteria]
Explanation: The Pandas Docs say a "common operation is the use of boolean vectors to filter the data" as we've done here. But apparently a boolean vector is not the "row_indexer" the warning advises us to use with .loc[] for creating new dataframes. Instead, Pandas wants us to use .loc[] with a vector of row-numbers (technically, "row labels", which here are numbers).
Solution
We can get to that "row_indexer" with one extra line of code. Building on what we had before. Instead of creating our new dataframe by filtering rows with a vector of True/False like below...
End of explanation
criteria_row_indices = df[criteria].index
criteria_row_indices
Explanation: We first grab the indices of that filtered dataframe using .index...
End of explanation
new_df = df.loc[criteria_row_indices, :]
new_df
Explanation: And pass that list of indices to .loc[ ] to create our new dataframe
End of explanation
new_df['New Column'] = 'New Value'
new_df
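# A common alternative (not the approach this notebook takes) is to make an
# explicit copy of the filtered frame, which also avoids the warning.
df_3 = df[criteria].copy()
df_3['New Column'] = 'New Value'
df_3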
Explanation: Now we can add a new column without throwing The Warning <sup>(tm)</sup>
End of explanation |
10,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo 1 - Single forward problem [local]
Brendan Smithyman | [email protected] | March, 2015
Import NumPy
Step1: Import plotting tools from matplotlib and set format defaults
Step2: Base system configuration
Step3: Plotting functions
These are convenience functions to allow us to set plotting limits in one place. Since we're defining these with closures to save effort, they need to go after the system configuration to grab things like the geometry after it's defined.
Step4: Numerical Examples
Base configuration
In this configuration, the PML is set to a reasonable default and no free-surface conditions are imposed.
Step5: Results
Wavefield results
The velocity model is shown along with wavefields for three different cases, computed above. | Python Code:
import numpy as np
Explanation: Demo 1 - Single forward problem [local]
Brendan Smithyman | [email protected] | March, 2015
Import NumPy
End of explanation
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
# Plotting options
font = {
'family': 'Bitstream Vera Sans',
'weight': 'normal',
'size': 8,
}
matplotlib.rc('font', **font)
Explanation: Import plotting tools from matplotlib and set format defaults
End of explanation
# ------------------------------------------------
# Model geometry
cellSize = 1 # m
freq = 2e2 # Hz
nx = 164 # count
nz = 264 # count
dims = (nz, nx)
# ------------------------------------------------
# Model properties
velocity = 2500 # m/s
vanom = 500 # m/s
density = 2700 # units of density
Q = np.inf # can be inf
cPert = np.zeros(dims)
cPert[(nz/2)-20:(nz/2)+20,(nx/2)-20:(nx/2)+20] = vanom
c = np.fliplr(np.ones(dims) * velocity)
c += np.fliplr(cPert)
rho = np.fliplr(np.ones((nz,nx)) * density)
# ------------------------------------------------
# Survey geometry
srcs = np.array([np.ones(101)*32, np.zeros(101), np.linspace(32, 232, 101)]).T
recs = np.array([np.ones(101)*132, np.zeros(101), np.linspace(32, 232, 101)]).T
nsrc = len(srcs)
nrec = len(recs)
recmode = 'fixed'
geom = {
'src': srcs,
'rec': recs,
'mode': recmode,
}
# ------------------------------------------------
# Other parameters
ky = 0
freeSurf = [False, False, False, False] # t r b l
nPML = 32
# Base configuration for all subproblems
systemConfig = {
'dx': cellSize, # m
'dz': cellSize, # m
'c': c, # m/s
'rho': rho, # density
'Q': Q, # can be inf
'nx': nx, # count
'nz': nz, # count
'freeSurf': freeSurf, # t r b l
'nPML': nPML,
'geom': geom,
'freq': freq,
'ky': 0,
}
from zephyr.Survey import HelmSrc, HelmRx
rxs = [HelmRx(loc, 1.) for loc in recs]
sxs = [HelmSrc(loc, 1., rxs) for loc in srcs]
Explanation: Base system configuration
End of explanation
sms = 4
rms = 0.5
def plotField(u):
clip = 0.1*abs(u).max()
plt.imshow(u.real, cmap=cm.bwr, vmin=-clip, vmax=clip)
def plotModel(v):
lclip = 2000
hclip = 3000
plt.imshow(v.real, cmap=cm.jet, vmin=lclip, vmax=hclip)
def plotGeometry():
srcpos = srcs[activeSource][::2]
recpos = recs[:,::2]
axistemp = plt.axis()
plt.plot(srcpos[0], srcpos[1], 'kx', markersize=sms)
plt.plot(recpos[:,0], recpos[:,1], 'kv', markersize=rms)
plt.axis(axistemp)
Explanation: Plotting functions
These are convenience functions to allow us to set plotting limits in one place. Since we're defining these with closures to save effort, they need to go after the system configuration to grab things like the geometry after it's defined.
End of explanation
from zephyr.Kernel import SeisFDFDKernel
sp = SeisFDFDKernel(systemConfig)
activeSource = 50
u, d = sp.forward(sxs[activeSource], False)
b = sp.backprop(sxs[activeSource], np.ones((len(sxs[activeSource].rxList),)))
u.shape = (nz, nx)
b.shape = (nz, nx)
Explanation: Numerical Examples
Base configuration
In this configuration, the PML is set to a reasonable default and no free-surface conditions are imposed.
End of explanation
fig = plt.figure()
ax1 = fig.add_subplot(1,3,1)
plotModel(c)
plotGeometry()
ax1.set_title('Velocity Model')
ax1.set_xlabel('X')
ax1.set_ylabel('Z')
ax2 = fig.add_subplot(1,3,2)
plotField(u)
ax2.set_title('Forward Wavefield')
ax2.set_xlabel('X')
ax2.set_ylabel('Z')
ax3 = fig.add_subplot(1,3,3)
plotField(b)
ax3.set_title('Backward Wavefield')
ax3.set_xlabel('X')
ax3.set_ylabel('Z')
fig.tight_layout()
# norm = abs(d).max()
fig = plt.figure()
plt.plot(d.real)
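# Not in the original notebook: label the receiver-response plot for readability.
plt.xlabel('Receiver index')
plt.ylabel('Real part of d')
plt.title('Receiver response for source %d'%activeSource)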
Explanation: Results
Wavefield results
The velocity model is shown along with wavefields for three different cases, computed above.
End of explanation |
10,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python, the name is stored in a namespace. Variable names also have a scope, and the scope determines the visibility of that variable name to other parts of your code.
Let's start with a quick thought experiment; imagine the following code
Step1: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
50 it is a local variable in printer
25 it is a global variable
Step2: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Let's break down the rules
Step3: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
Step4: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
Step5: Built-in
These are the built-in function names in Python (don't overwrite these!)
Step6: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example
Step7: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example | Python Code:
x = 25
def printer():
x = 50
return x
print (x)
print (printer())
Explanation: Nested Statements and Scope
Now that we have gone over writing our own functions, it's important to understand how Python deals with the variable names you assign. When you create a variable name in Python, the name is stored in a namespace. Variable names also have a scope, and the scope determines the visibility of that variable name to other parts of your code.
Let's start with a quick thought experiment; imagine the following code:
End of explanation
print (x)
print (printer())
Explanation: What do you imagine the output of printer() is? 25 or 50? What is the output of print x? 25 or 50?
50 it is a local variable in printer
25 it is a global variable
End of explanation
# x is local here:
f = lambda x:x**2
Explanation: Interesting! But how does Python know which x you're referring to in your code? This is where the idea of scope comes in. Python has a set of rules it follows to decide what variables (such as x in this case) you are referencing in your code. Let's break down the rules:
This idea of scope in your code is very important to understand in order to properly assign and call variable names.
In simple terms, the idea of scope can be described by 3 general rules:
Name assignments will create or change local names by default.
Name references search (at most) four scopes, these are:
local
enclosing functions
global
built-in
Names declared in global and nonlocal statements map assigned names to enclosing module and function scopes.
The statement in #2 above can be defined by the LEGB rule.
LEGB Rule.
L: Local — Names assigned in any way within a function (def or lambda), and not declared global in that function.
E: Enclosing function locals — Name in the local scope of any and all enclosing functions (def or lambda), from inner to outer.
G: Global (module) — Names assigned at the top-level of a module file, or declared global in a def within the file.
B: Built-in (Python) — Names preassigned in the built-in names module : open,range,SyntaxError,...
Quick examples of LEGB
Local
End of explanation
name = 'This is a global name'
def greet():
# Enclosing function
name = 'Sammy'
def hello():
print('Hello ' + name)
hello()
greet()
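# Not in the original notebook: `nonlocal` (mentioned in the rules above) lets an
# inner function rebind a name in the enclosing function's scope instead of
# creating a new local name.
def counter():
    count = 0
    def tick():
        nonlocal count
        count += 1
        return count
    return tick
t = counter()
t(), t()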
Explanation: Enclosing function locals
This occurs when we have a function inside a function (nested functions)
End of explanation
print(name)
Explanation: Note how Sammy was used, because the hello() function was enclosed inside of the greet function!
Global
Luckily in Jupyter a quick way to test for global variables is to see if another cell recognizes the variable!
End of explanation
len
Explanation: Built-in
These are the built-in function names in Python (don't overwrite these!)
End of explanation
x = 50
def func(x):
print ('x is', x)
x = 2
print ('Changed local x to', x)
func(x)
print ('x is still', x)
Explanation: Local Variables
When you declare variables inside a function definition, they are not related in any way to other variables with the same names used outside the function - i.e. variable names are local to the function. This is called the scope of the variable. All variables have the scope of the block they are declared in starting from the point of definition of the name.
Example:
End of explanation
x = 50
def func():
global x
print ('This function is now using the global x!')
print ('Because of global x is: ', x)
x = 2
print ('Ran func(), changed global x to', x)
print ('Before calling func(), x is: ', x)
func()
print ('Value of x (outside of func()) is: ', x)
Explanation: The first time that we print the value of the name x with the first line in the function’s body, Python uses the value of the parameter declared in the main block, above the function definition.
Next, we assign the value 2 to x. The name x is local to our function. So, when we change the value of x in the function, the x defined in the main block remains unaffected.
With the last print statement, we display the value of x as defined in the main block, thereby confirming that it is actually unaffected by the local assignment within the previously called function.
The global statement
If you want to assign a value to a name defined at the top level of the program (i.e. not inside any kind of scope such as functions or classes), then you have to tell Python that the name is not local, but it is global. We do this using the global statement. It is impossible to assign a value to a variable defined outside a function without the global statement.
You can use the values of such variables defined outside the function (assuming there is no variable with the same name within the function). However, this is not encouraged and should be avoided since it becomes unclear to the reader of the program as to where that variable’s definition is. Using the global statement makes it amply clear that the variable is defined in an outermost block.
Example:
End of explanation |
10,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores | Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print('Encoder: \n\n', report['architecture']['encoder'])
print('Decoder: \n\n', report['architecture']['decoder'])
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
Explanation: Generations
End of explanation
def print_bleu(bleu_struct):
    print('Overall Score: ', bleu_struct['score'], '\n')
    print('1-gram Score: ', bleu_struct['components']['1'])
    print('2-gram Score: ', bleu_struct['components']['2'])
    print('3-gram Score: ', bleu_struct['components']['3'])
    print('4-gram Score: ', bleu_struct['components']['4'])
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
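# Not in the original notebook: a quick ratio of generated vs. gold overall
# n-pairs scores; values well above 1 suggest hyper-common generations.
print('Generated/gold overall ratio: %.3f' % (report['n_pairs_bleu_all']['score'] /
                                              max(report['n_pairs_bleu_gold']['score'], 1e-9)))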
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations
End of explanation
print('Average (Train) Generated Score: ', report['average_alignment_train'])
print('Average (Valid) Generated Score: ', report['average_alignment_valid'])
print('Average (Test) Generated Score: ', report['average_alignment_test'])
print('Average (All) Generated Score: ', report['average_alignment_all'])
print('Average Gold Score: ', report['average_alignment_gold'])
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
End of explanation |
10,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Now, you'll use what you learned in the previous tutorial to improve the efficiency of several queries.
Before you get started, run the following cell to set everything up.
Step1: Exercises
1) You work for Pet Costumes International.
You need to write three queries this afternoon. You have enough time to write working versions of all three, but only enough time to think about optimizing one of them. Which of these queries is most worth optimizing?
A software engineer wrote an app for the shipping department, to see what items need to be shipped and which aisle of the warehouse to go to for those items. She wants you to write the query. It will involve data that is stored in an orders table, a shipments table and a warehouseLocation table. The employees in the shipping department will pull up this app on a tablet, hit refresh, and your query results will be shown in a nice interface so they can see what costumes to send where.
The CEO wants a list of all customer reviews and complaints… which are conveniently stored in a single reviews table. Some of the reviews are really long… because people love your pirate costumes for parrots, and they can’t stop writing about how cute they are.
Dog owners are getting more protective than ever. So your engineering department has made costumes with embedded GPS trackers and wireless communication devices. They send the costumes’ coordinates to your database once a second. You then have a website where owners can find the location of their dogs (or at least the costumes they have for those dogs). For this service to work, you need a query that shows the most recent location for all costumes owned by a given human. This will involve data in a CostumeLocations table as well as a CostumeOwners table.
So, which of these could benefit most from being written efficiently? Set the value of the query_to_optimize variable below to one of 1, 2, or 3. (Your answer should have type integer.)
Step2: 2) Make it easier to find Mitzie!
You have the following two tables | Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex4 import *
print("Setup Complete")
Explanation: Introduction
Now, you'll use what you learned in the previous tutorial to improve the efficiency of several queries.
Before you get started, run the following cell to set everything up.
End of explanation
# Fill in your answer
query_to_optimize = ____
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
query_to_optimize = 3
q_1.check()
Explanation: Exercises
1) You work for Pet Costumes International.
You need to write three queries this afternoon. You have enough time to write working versions of all three, but only enough time to think about optimizing one of them. Which of these queries is most worth optimizing?
A software engineer wrote an app for the shipping department, to see what items need to be shipped and which aisle of the warehouse to go to for those items. She wants you to write the query. It will involve data that is stored in an orders table, a shipments table and a warehouseLocation table. The employees in the shipping department will pull up this app on a tablet, hit refresh, and your query results will be shown in a nice interface so they can see what costumes to send where.
The CEO wants a list of all customer reviews and complaints… which are conveniently stored in a single reviews table. Some of the reviews are really long… because people love your pirate costumes for parrots, and they can’t stop writing about how cute they are.
Dog owners are getting more protective than ever. So your engineering department has made costumes with embedded GPS trackers and wireless communication devices. They send the costumes’ coordinates to your database once a second. You then have a website where owners can find the location of their dogs (or at least the costumes they have for those dogs). For this service to work, you need a query that shows the most recent location for all costumes owned by a given human. This will involve data in a CostumeLocations table as well as a CostumeOwners table.
So, which of these could benefit most from being written efficiently? Set the value of the query_to_optimize variable below to one of 1, 2, or 3. (Your answer should have type integer.)
End of explanation
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q_2.hint()
# View the solution (Run this code cell to receive credit!)
q_2.solution()
Explanation: 2) Make it easier to find Mitzie!
You have the following two tables:
The CostumeLocations table shows timestamped GPS data for all of the pet costumes in the database, where CostumeID is a unique identifier for each costume.
The CostumeOwners table shows who owns each costume, where the OwnerID column contains unique identifiers for each (human) owner. Note that each owner can have more than one costume! And, each costume can have more than one owner: this allows multiple individuals from the same household (all with their own, unique OwnerID) to access the locations of their pets' costumes.
Say you need to use these tables to get the current location of one pet in particular: Mitzie the Dog recently ran off chasing a squirrel, but thankfully she was last seen in her hot dog costume!
One of Mitzie's owners (with owner ID MitzieOwnerID) logs into your website to pull the last locations of every costume in his possession. Currently, you get this information by running the following query:
sql
WITH LocationsAndOwners AS
(
SELECT *
FROM CostumeOwners co INNER JOIN CostumeLocations cl
ON co.CostumeID = cl.CostumeID
),
LastSeen AS
(
SELECT CostumeID, MAX(Timestamp)
FROM LocationsAndOwners
GROUP BY CostumeID
)
SELECT lo.CostumeID, Location
FROM LocationsAndOwners lo INNER JOIN LastSeen ls
ON lo.Timestamp = ls.Timestamp AND lo.CostumeID = ls.CostumeID
WHERE OwnerID = MitzieOwnerID
Is there a way to make this faster or cheaper?
End of explanation |
10,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1 Measure Dynamic Functional Connectivity
  1.1 Initialize Environment
  1.2 Load CoreData
  1.3 Compute Functional Connectivity (1.3.1 Functional Connectivity FuncDef; 1.3.2 Process Navon; 1.3.3 Process Stroop)
  1.4 Generate Population Configuration Matrix (1.4.1 Dictionary of all adjacency matrices; 1.4.2 Create Lookup-Table and Full Configuration Matrix)
  1.5 Checking Correlation Biases (1.5.1 Across Subjects; 1.5.2 Positive vs Negative; 1.5.3 Fixation vs Task; 1.5.4 Within Experiment (Hi vs Lo); 1.5.5 Between Experiment (Stroop vs Navon); 1.5.6 Performance Between Experiment)
2 System-Level Connectivity
  2.1 Assign Lausanne to Yeo Systems
  2.2 System-Level Adjacency Matrices (2.2.1 Plot Population Average Adjacency Matrices (Expr + Pos/Neg); 2.2.2 Construct System Adjacency Matrices)
  2.3 Check Contrasts (2.3.1 Stroop vs Navon; 2.3.2 Lo vs Hi)
# Measure Dynamic Functional Connectivity
## Initialize Environment
Step1: Load CoreData
Step2: Compute Functional Connectivity
Functional Connectivity FuncDef
Step3: Process Navon
Step4: Process Stroop
Step5: Generate Population Configuration Matrix
Dictionary of all adjacency matrices
Step6: Create Lookup-Table and Full Configuration Matrix
Step7: Checking Correlation Biases
Across Subjects
Step8: Positive vs Negative
Step9: Fixation vs Task
Step10: Within Experiment (Hi vs Lo)
Step11: Between Experiment (Stroop vs Navon)
Step12: Performance Between Experiment
Step13: System-Level Connectivity
Assign Lausanne to Yeo Systems
Step14: System-Level Adjacency Matrices
Step15: Plot Population Average Adjacency Matrices (Expr + Pos/Neg)
Step16: Construct System Adjacency Matrices
Step17: Check Contrasts
Stroop vs Navon
Step18: Lo vs Hi | Python Code:
try:
%load_ext autoreload
%autoreload 2
%reset
except:
print 'NOT IPYTHON'
from __future__ import division
import os
import sys
import glob
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
import statsmodels.api as sm
import scipy.io as io
import h5py
import matplotlib
import matplotlib.pyplot as plt
echobase_path = '/Users/akhambhati/Developer/hoth_research/Echobase'
#echobase_path = '/data/jag/akhambhati/hoth_research/Echobase'
sys.path.append(echobase_path)
import Echobase
convert_conn_vec_to_adj_matr = Echobase.Network.Transforms.configuration.convert_conn_vec_to_adj_matr
convert_adj_matr_to_cfg_matr = Echobase.Network.Transforms.configuration.convert_adj_matr_to_cfg_matr
rcParams = Echobase.Plotting.fig_format.update_rcparams(matplotlib.rcParams)
path_Remotes = '/Users/akhambhati/Remotes'
#path_Remotes = '/data/jag/bassett-lab/akhambhati'
path_CoreData = path_Remotes + '/CORE.fMRI_cogcontrol.medaglia'
path_PeriphData = path_Remotes + '/RSRCH.NMF_CogControl'
path_ExpData = path_PeriphData + '/e01-FuncNetw'
path_AtlasData = path_Remotes + '/CORE.MRI_Atlases'
path_Figures = './e01-Figures'
for path in [path_CoreData, path_PeriphData, path_ExpData, path_Figures]:
if not os.path.exists(path):
print('Path: {}, does not exist'.format(path))
os.makedirs(path)
Explanation: Table of Contents
1 Measure Dynamic Functional Connectivity
  1.1 Initialize Environment
  1.2 Load CoreData
  1.3 Compute Functional Connectivity (1.3.1 Functional Connectivity FuncDef; 1.3.2 Process Navon; 1.3.3 Process Stroop)
  1.4 Generate Population Configuration Matrix (1.4.1 Dictionary of all adjacency matrices; 1.4.2 Create Lookup-Table and Full Configuration Matrix)
  1.5 Checking Correlation Biases (1.5.1 Across Subjects; 1.5.2 Positive vs Negative; 1.5.3 Fixation vs Task; 1.5.4 Within Experiment (Hi vs Lo); 1.5.5 Between Experiment (Stroop vs Navon); 1.5.6 Performance Between Experiment)
2 System-Level Connectivity
  2.1 Assign Lausanne to Yeo Systems
  2.2 System-Level Adjacency Matrices (2.2.1 Plot Population Average Adjacency Matrices (Expr + Pos/Neg); 2.2.2 Construct System Adjacency Matrices)
  2.3 Check Contrasts (2.3.1 Stroop vs Navon; 2.3.2 Lo vs Hi)
# Measure Dynamic Functional Connectivity
## Initialize Environment
End of explanation
# Load BOLD
df_navon = io.loadmat('{}/NavonBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
df_stroop = io.loadmat('{}/StroopBlockedSeriesScale125.mat'.format(path_CoreData), struct_as_record=False)
n_subj = 28
n_fix_block = 12 # Disregard the final fixation block
n_tsk_block = 6
n_roi = 262
bad_roi = [242]
n_good_roi = n_roi-len(bad_roi)
# Load Motion Data
df_motion = {'Stroop': io.loadmat('{}/StroopMove.mat'.format(path_CoreData))['move'][:, 0],
'Navon': io.loadmat('{}/NavonMove.mat'.format(path_CoreData))['move'][:, 0]}
# Load Behavioral Data
df_blk = io.loadmat('{}/BlockwiseDataCorrectTrialsOnly.mat'.format(path_CoreData))
bad_subj_ix = [1, 6]
good_subj_ix = np.setdiff1d(np.arange(n_subj+2), bad_subj_ix)
df_perf = {'Stroop': {'lo': {'accuracy': df_blk['StroopData'][good_subj_ix, 1, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 4, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['StroopData'][good_subj_ix, 0, :],
'meanRT': df_blk['StroopData'][good_subj_ix, 2, :],
'medianRT': df_blk['StroopData'][good_subj_ix, 3, :]}
},
'Navon' : {'lo': {'accuracy': df_blk['NavonData'][good_subj_ix, 1, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 4, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 5, :]},
'hi': {'accuracy': df_blk['NavonData'][good_subj_ix, 0, :],
'meanRT': df_blk['NavonData'][good_subj_ix, 2, :],
'medianRT': df_blk['NavonData'][good_subj_ix, 3, :]}
}
}
Explanation: Load CoreData
End of explanation
def comp_fconn(bold, alpha=0.05, dependent=False):
# NOTE: alpha and dependent are accepted for interface compatibility but are unused below.
n_roi, n_tr = bold.shape
adj = np.arctanh(np.corrcoef(bold)) # Fisher r-to-z transform of the ROI-by-ROI correlation matrix
cfg_vec = convert_adj_matr_to_cfg_matr(adj.reshape(-1, n_roi, n_roi))[0, :]
# Separate edges based on sign
cfg_vec_pos = cfg_vec.copy()
cfg_vec_pos[cfg_vec_pos < 0] = 0
cfg_vec_neg = -1*cfg_vec.copy()
cfg_vec_neg[cfg_vec_neg < 0] = 0
adj_pos = convert_conn_vec_to_adj_matr(cfg_vec_pos)
adj_neg = convert_conn_vec_to_adj_matr(cfg_vec_neg)
return adj_pos, adj_neg
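# A minimal sanity check of comp_fconn on synthetic data (a sketch; it assumes the
# Echobase helpers convert_adj_matr_to_cfg_matr / convert_conn_vec_to_adj_matr used
# above are already imported by the environment setup).
_test_bold = np.random.randn(10, 200)
_test_pos, _test_neg = comp_fconn(_test_bold)
assert _test_pos.shape == (10, 10) and _test_neg.shape == (10, 10)
assert np.all(_test_pos >= 0) and np.all(_test_neg >= 0)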
Explanation: Compute Functional Connectivity
Functional Connectivity FuncDef
End of explanation
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Navon'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_navon['data'][subj_id][fix_block].NFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_navon['data'][subj_id][tsk_block].NS, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_navon['data'][subj_id][tsk_block].S, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
Explanation: Process Navon
End of explanation
for subj_id in xrange(n_subj):
proc_item = '{}/Subject_{}.Stroop'.format(path_ExpData, subj_id)
print(proc_item)
adj_dict = {'lo': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))}
},
'hi': {'fix': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
'task': {'pos': np.zeros((n_tsk_block, n_good_roi, n_good_roi)),
'neg': np.zeros((n_tsk_block, n_good_roi, n_good_roi))},
}}
# Process Fixation Blocks
cnt = 0
for fix_block in xrange(n_fix_block):
data = np.array(df_stroop['data'][subj_id][fix_block].SFix, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
if (fix_block % 2) == 0:
adj_dict['lo']['fix']['pos'][cnt, :, :], adj_dict['lo']['fix']['neg'][cnt, :, :] = comp_fconn(data)
if (fix_block % 2) == 1:
adj_dict['hi']['fix']['pos'][cnt, :, :], adj_dict['hi']['fix']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
# Process Task Blocks
cnt = 0
for tsk_block in xrange(n_tsk_block):
# Low demand
data = np.array(df_stroop['data'][subj_id][tsk_block].IE, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['lo']['task']['pos'][cnt, :, :], adj_dict['lo']['task']['neg'][cnt, :, :] = comp_fconn(data)
# High demand
data = np.array(df_stroop['data'][subj_id][tsk_block].E, dtype='f').T
data = data[np.setdiff1d(np.arange(n_roi), bad_roi), :]
adj_dict['hi']['task']['pos'][cnt, :, :], adj_dict['hi']['task']['neg'][cnt, :, :] = comp_fconn(data)
cnt += 1
np.savez(proc_item, adj_dict=adj_dict)
Explanation: Process Stroop
End of explanation
expr_dict = {}
for expr_id in ['Stroop', 'Navon']:
df_list = glob.glob('{}/Subject_*.{}.npz'.format(path_ExpData, expr_id))
for df_subj in df_list:
subj_id = int(df_subj.split('/')[-1].split('.')[0].split('_')[1])
if subj_id not in expr_dict.keys():
expr_dict[subj_id] = {}
expr_dict[subj_id][expr_id] = df_subj
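# Optional sanity check (a sketch): every subject should have a saved adjacency file
# for both experiments before the population configuration matrix is assembled.
for _subj_id, _expr_paths in sorted(expr_dict.items()):
    _missing = [e for e in ['Stroop', 'Navon'] if e not in _expr_paths]
    if _missing:
        print('Subject {} is missing: {}'.format(_subj_id, _missing))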
Explanation: Generate Population Configuration Matrix
Dictionary of all adjacency matrices
End of explanation
# Generate a dictionary of all key names
cfg_key_names = ['Subject_ID', 'Experiment_ID', 'Condition_ID', 'Task_ID', 'CorSign_ID', 'Block_ID']
cfg_key_label = {'Subject_ID': np.arange(n_subj),
'Experiment_ID': ['Stroop', 'Navon'],
'Condition_ID': ['lo', 'hi'],
'Task_ID': ['fix', 'task'],
'CorSign_ID': ['pos', 'neg'],
'Block_ID': np.arange(n_tsk_block)}
cfg_obs_lut = np.zeros((len(cfg_key_label[cfg_key_names[0]]),
len(cfg_key_label[cfg_key_names[1]]),
len(cfg_key_label[cfg_key_names[2]]),
len(cfg_key_label[cfg_key_names[3]]),
len(cfg_key_label[cfg_key_names[4]]),
len(cfg_key_label[cfg_key_names[5]])))
# Iterate over all cfg key labels and generate a LUT matrix and a config matrix
key_cnt = 0
cfg_matr = []
for key_0_ii, key_0_id in enumerate(cfg_key_label[cfg_key_names[0]]):
for key_1_ii, key_1_id in enumerate(cfg_key_label[cfg_key_names[1]]):
adj_dict = np.load(expr_dict[key_0_id][key_1_id])['adj_dict'][()]
for key_2_ii, key_2_id in enumerate(cfg_key_label[cfg_key_names[2]]):
for key_3_ii, key_3_id in enumerate(cfg_key_label[cfg_key_names[3]]):
for key_4_ii, key_4_id in enumerate(cfg_key_label[cfg_key_names[4]]):
for key_5_ii, cfg_vec in enumerate(convert_adj_matr_to_cfg_matr(adj_dict[key_2_id][key_3_id][key_4_id])):
cfg_obs_lut[key_0_ii, key_1_ii, key_2_ii,
key_3_ii, key_4_ii, key_5_ii] = key_cnt
cfg_matr.append(cfg_vec)
key_cnt += 1
cfg_matr = np.array(cfg_matr)
cfg_matr_orig = cfg_matr.copy()
# Normalize sum of edge weights to 1
cfg_L1 = np.linalg.norm(cfg_matr, axis=1, ord=1)
cfg_L1[cfg_L1 == 0] = 1.0
cfg_matr = (cfg_matr.T / cfg_L1).T
# Rescale edge weight to unit L2-Norm
cfg_L2 = np.zeros_like(cfg_matr)
for subj_ii in xrange(len(cfg_key_label['Subject_ID'])):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :].reshape(-1), dtype=int)
cfg_L2[grp_ix, :] = np.linalg.norm(cfg_matr[grp_ix, :], axis=0, ord=2)
cfg_L2[cfg_L2 == 0] = 1.0
cfg_matr = cfg_matr / cfg_L2
np.savez('{}/Population.Configuration_Matrix.npz'.format(path_ExpData),
cfg_matr_orig=cfg_matr_orig,
cfg_matr=cfg_matr,
cfg_L2=cfg_L2,
cfg_obs_lut=cfg_obs_lut,
cfg_key_label=cfg_key_label,
cfg_key_names=cfg_key_names)
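# Quick check of the normalization above (a sketch, using the in-memory arrays):
# within each subject, every non-empty edge column should now have unit L2 norm.
_chk_ix = np.array(cfg_obs_lut[0, :, :, :, :, :].reshape(-1), dtype=int)
_col_norms = np.linalg.norm(cfg_matr[_chk_ix, :], axis=0)
print('Subject 0 edge-column L2 norms: min={:.3f}, max={:.3f}'.format(
    _col_norms[_col_norms > 0].min(), _col_norms.max()))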
Explanation: Create Lookup-Table and Full Configuration Matrix
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = []
for grp_ii in xrange(n_grp):
grp_ix = np.array(cfg_obs_lut[grp_ii, :, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt.append(np.mean(cfg_matr[grp_ix, :], axis=1))
grp_edge_wt = np.array(grp_edge_wt)
mean_grp_edge_wt = np.mean(grp_edge_wt, axis=1)
grp_ord_ix = np.argsort(mean_grp_edge_wt)[::-1]
### Plot Subject Distribution
print(stats.f_oneway(*(grp_edge_wt)))
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt[grp_ord_ix, :].T, sym='', patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels([])
ax.set_xlabel('Subjects')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Subjects.svg'.format(path_Figures))
plt.show()
Explanation: Checking Correlation Biases
Across Subjects
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['CorSign_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, :, grp_ii, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
""
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['CorSign_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.CorSign.svg'.format(path_Figures))
plt.show()
Explanation: Positive vs Negative
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Task_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, :, grp_ii, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Task_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Task.svg'.format(path_Figures))
plt.show()
Explanation: Fixation vs Task
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Condition_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][:, grp_ii, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Condition_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Condition.svg'.format(path_Figures))
plt.show()
Explanation: Within Experiment (Hi vs Lo)
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr_orig']
n_grp = len(df['cfg_key_label'][()]['Experiment_ID'])
n_subj = len(df['cfg_key_label'][()]['Subject_ID'])
grp_edge_wt = np.zeros((n_grp, n_subj))
for grp_ii in xrange(n_grp):
for subj_ii in xrange(n_subj):
grp_ix = np.array(cfg_obs_lut[subj_ii, :, :, :, :, :][grp_ii, :, :, :, :].reshape(-1), dtype=int)
grp_edge_wt[grp_ii, subj_ii] = np.mean(np.mean(cfg_matr[grp_ix, :], axis=1))
print(stats.ttest_rel(*(grp_edge_wt)))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot(grp_edge_wt.T, patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(n_grp)])
ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(df['cfg_key_label'][()]['Experiment_ID'])
ax.set_xlabel('')
ax.set_ylabel('Weighted Edge Density')
plt.savefig('{}/Wgt_Edge_Density.Experiment.svg'.format(path_Figures))
plt.show()
Explanation: Between Experiment (Stroop vs Navon)
End of explanation
perf_stroop_hi = df_perf['Stroop']['hi']['meanRT'].mean(axis=1)
perf_stroop_lo = df_perf['Stroop']['lo']['meanRT'].mean(axis=1)
perf_stroop_cost = perf_stroop_hi-perf_stroop_lo
perf_navon_hi = df_perf['Navon']['hi']['meanRT'].mean(axis=1)
perf_navon_lo = df_perf['Navon']['lo']['meanRT'].mean(axis=1)
perf_navon_cost = perf_navon_hi-perf_navon_lo
print(stats.ttest_rel(perf_stroop_cost, perf_navon_cost))
### Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
bp = ax.boxplot([perf_stroop_cost, perf_navon_cost], patch_artist=True)
Echobase.Plotting.fig_format.set_box_color(bp, [0.0, 0.0, 0.0], [[0.2, 0.2, 0.2] for iii in xrange(2)])
#ax.set_ylim(ymin=0)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticklabels(['Stroop', 'Navon'])
ax.set_xlabel('')
ax.set_ylabel('Reaction Time Cost (Hi-Lo)')
plt.savefig('{}/RT_Cost.Experiment.svg'.format(path_Figures))
plt.show()
Explanation: Performance Between Experiment
End of explanation
import nibabel as nib
df_yeo_atlas = nib.load('{}/Yeo_JNeurophysiol11_MNI152/Yeo2011_7Networks_MNI152_FreeSurferConformed1mm_LiberalMask.nii.gz'.format(path_AtlasData))
yeo_matr = df_yeo_atlas.get_data()[..., 0]
yeo_roi = np.unique(yeo_matr)[1:]
yeo_names = ['VIS', 'SMN', 'DAN', 'VAN', 'LIM', 'FPN', 'DMN']
yeo_xyz = {}
M = df_yeo_atlas.affine[:3, :3]
abc = df_yeo_atlas.affine[:3, 3]
for yeo_id in yeo_roi:
yeo_ijk = np.array(np.nonzero(yeo_matr == yeo_id)).T
yeo_xyz[yeo_id] = M.dot(yeo_ijk.T).T + abc.T
df_laus_atlas = nib.load('{}/Lausanne/ROIv_scale125_dilated.nii.gz'.format(path_AtlasData))
laus_matr = df_laus_atlas.get_data()
laus_roi = np.unique(laus_matr)[1:]
laus_xyz = {}
M = df_laus_atlas.affine[:3, :3]
abc = df_laus_atlas.affine[:3, 3]
for laus_id in laus_roi:
laus_ijk = np.array(np.nonzero(laus_matr == laus_id)).T
laus_xyz[laus_id] = M.dot(laus_ijk.T).T + abc.T
laus_yeo_assign = []
for laus_id in laus_roi:
dists = []
for yeo_id in yeo_roi:
dists.append(np.min(np.sum((yeo_xyz[yeo_id] - laus_xyz[laus_id].mean(axis=0))**2, axis=1)))
laus_yeo_assign.append(yeo_names[np.argmin(dists)])
laus_yeo_assign = np.array(laus_yeo_assign)
pd.DataFrame(laus_yeo_assign).to_csv('{}/Lausanne/ROIv_scale125_dilated.Yeo2011_7Networks_MNI152.csv'.format(path_AtlasData))
# Manually replaced subcortical and cerebellar structures as SUB and CBR, respectively.
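# Optional: tally how many Lausanne ROIs landed in each Yeo system as a quick check
# on the nearest-centroid assignment above.
from collections import Counter
print(Counter(laus_yeo_assign))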
Explanation: System-Level Connectivity
Assign Lausanne to Yeo Systems
End of explanation
# Read in Yeo Atlas
df_laus_yeo = pd.read_csv('{}/LausanneScale125.csv'.format(path_CoreData))
df_laus_yeo = df_laus_yeo[df_laus_yeo.Label_ID != bad_roi[0]+1]
system_lbl = np.array(df_laus_yeo['Yeo2011_7Networks'].as_matrix())
system_name = np.unique(df_laus_yeo['Yeo2011_7Networks'])
n_system = len(system_name)
n_roi = len(system_lbl)
triu_ix, triu_iy = np.triu_indices(n_roi, k=1)
sys_triu_ix, sys_triu_iy = np.triu_indices(n_system, k=0)
# Reorder System Labels and Count ROIs per System
system_srt_ix = np.argsort(system_lbl)
system_cnt = np.array([len(np.flatnonzero(system_lbl == sys_name))
for sys_name in system_name])
system_demarc = np.concatenate(([0], np.cumsum(system_cnt)))
np.savez('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData),
df_laus_yeo=df_laus_yeo,
yeo_lbl=system_lbl,
yeo_name=system_name,
sort_laus_to_yeo=system_srt_ix,
yeo_adj_demarc=system_demarc,
laus_triu=np.triu_indices(n_roi, k=1),
yeo_triu=np.triu_indices(n_system, k=0))
Explanation: System-Level Adjacency Matrices
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
plt.figure(figsize=(5,5));
cnt = 0
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
grp_ix = np.array(cfg_obs_lut[:, expr_ii, :, :, :, :][:, :, :, sgn_ii, :].reshape(-1), dtype=int)
sel_cfg_matr = cfg_matr[grp_ix, :].mean(axis=0)
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
# Plot
ax = plt.subplot(2, 2, cnt+1)
mat = ax.matshow(adj_yeo,
cmap='magma', vmin=0.025)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='w', lw=0.5)
ax.hlines(xx, 0, n_laus, color='w', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
cnt += 1
plt.show()
Explanation: Plot Population Average Adjacency Matrices (Expr + Pos/Neg)
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_matr = df['cfg_matr']
# Compute Brain System Adjacency Matrices
sys_adj_matr = np.zeros((cfg_matr.shape[0], n_system, n_system))
for sys_ii, (sys_ix, sys_iy) in enumerate(zip(sys_triu_ix, sys_triu_iy)):
sys1 = system_name[sys_ix]
sys2 = system_name[sys_iy]
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys1)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys2)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
if len(inter_sys_ii) == 0:
sys1_ix = np.flatnonzero(system_lbl[triu_ix] == sys2)
sys2_iy = np.flatnonzero(system_lbl[triu_iy] == sys1)
inter_sys_ii = np.intersect1d(sys1_ix, sys2_iy)
mean_conn_sys1_sys2 = np.mean(cfg_matr[:, inter_sys_ii], axis=1)
sys_adj_matr[:, sys_ix, sys_iy] = mean_conn_sys1_sys2
sys_adj_matr[:, sys_iy, sys_ix] = mean_conn_sys1_sys2
np.savez('{}/Full_Adj.Yeo2011_7Networks.npz'.format(path_ExpData),
sys_adj_matr=sys_adj_matr,
cfg_obs_lut=df['cfg_obs_lut'],
cfg_key_label=df['cfg_key_label'],
cfg_key_names=df['cfg_key_names'])
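# Optional check (a sketch): the system-level matrices should be symmetric and sized
# (n_observations, n_system, n_system).
assert sys_adj_matr.shape[1:] == (n_system, n_system)
assert np.allclose(sys_adj_matr, sys_adj_matr.transpose(0, 2, 1))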
Explanation: Construct System Adjacency Matrices
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
sel_cfg_matr = (cfg_matr_reshape[:, :, :, 1, sgn_ii, :, :]).mean(axis=-2).mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_stroop = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_navon = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_stroop - mean_navon) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = 0.0
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300.0)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='PuOr', vmin=-1.0, vmax=1.0)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
plt.savefig('{}/Contrast.Expr.{}.svg'.format(path_Figures, sgn_id))
plt.show()
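# Aside: the edge-wise correction above relies on Echobase's bhp routine. A roughly
# equivalent correction (an assumption -- Benjamini-Yekutieli via statsmodels, which
# remains valid under dependent tests) can be sanity-checked on synthetic p-values:
from statsmodels.stats.multitest import multipletests
_demo_pv = np.random.uniform(size=100)
_demo_sig, _, _, _ = multipletests(_demo_pv, alpha=0.05, method='fdr_by')
print('{} of {} synthetic p-values survive FDR'.format(_demo_sig.sum(), _demo_sig.size))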
Explanation: Check Contrasts
Stroop vs Navon
End of explanation
df = np.load('{}/Population.Configuration_Matrix.npz'.format(path_ExpData))
cfg_obs_lut = df['cfg_obs_lut']
cfg_matr = df['cfg_matr']
df_to_yeo = np.load('{}/Lausanne125_to_Yeo.npz'.format(path_ExpData))
n_laus = len(df_to_yeo['yeo_lbl'])
for expr_ii, expr_id in enumerate(df['cfg_key_label'][()]['Experiment_ID']):
for sgn_ii, sgn_id in enumerate(df['cfg_key_label'][()]['CorSign_ID']):
coef_ix = np.array(cfg_obs_lut, dtype=int)
cfg_matr_reshape = cfg_matr[coef_ix, :]
sel_cfg_matr = cfg_matr_reshape[:, expr_ii, :, 1, sgn_ii, :, :].mean(axis=-2)
sel_cfg_matr_tv = np.nan*np.zeros(cfg_matr.shape[1])
sel_cfg_matr_pv = np.nan*np.zeros(cfg_matr.shape[1])
for cc in xrange(cfg_matr.shape[1]):
tv, pv = stats.ttest_rel(*sel_cfg_matr[:, :, cc].T)
mean_lo = np.mean(sel_cfg_matr[:, :, cc], axis=0)[0]
mean_hi = np.mean(sel_cfg_matr[:, :, cc], axis=0)[1]
dv = (mean_hi - mean_lo) / np.std(sel_cfg_matr[:, :, cc].reshape(-1))
sel_cfg_matr_tv[cc] = dv
sel_cfg_matr_pv[cc] = pv
sig_pv = Echobase.Statistics.FDR.fdr.bhp(sel_cfg_matr_pv, alpha=0.05, dependent=True)
sel_cfg_matr_tv[sig_pv == False] = np.nan
adj = convert_conn_vec_to_adj_matr(sel_cfg_matr_tv)
adj_yeo = adj[df_to_yeo['sort_laus_to_yeo'], :][:, df_to_yeo['sort_laus_to_yeo']]
adj_yeo[np.diag_indices_from(adj_yeo)] = np.nan
# Plot
plt.figure(figsize=(3,3), dpi=300)
ax = plt.subplot(111)
mat = ax.matshow(adj_yeo,
cmap='coolwarm', vmin=-0.5, vmax=0.5)
plt.colorbar(mat, ax=ax, fraction=0.046, pad=0.04)
for xx in df_to_yeo['yeo_adj_demarc']:
ax.vlines(xx, 0, n_laus, color='k', lw=0.5)
ax.hlines(xx, 0, n_laus, color='k', lw=0.5)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_tick_params(width=0)
ax.xaxis.set_tick_params(width=0)
ax.grid(False)
ax.tick_params(axis='both', which='major', pad=-3)
ax.set_xticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_xticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_yticks((df_to_yeo['yeo_adj_demarc'][:-1] + (np.diff(df_to_yeo['yeo_adj_demarc']) * 0.5)));
ax.set_yticklabels(df_to_yeo['yeo_name'], fontsize=5.0, rotation=45)
ax.set_title('{}-{}'.format(expr_id, sgn_id), fontsize=5.0)
plt.savefig('{}/Contrast.{}.{}.Hi_Lo.svg'.format(path_Figures, expr_id, sgn_id))
plt.show()
Explanation: Lo vs Hi
End of explanation |
10,279 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
draw a pairplot using python for tips dataset and for column sex in the given dataset
| Python Code::
sns.pairplot(tips, hue='sex', palette='coolwarm')
|
10,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking of various implementations of FADDEEVA's error functions
I. Setup
Import the multiprecision library mpmath as a reference for accuracy benchmarks
Step1: Import the rest of the usual lot
Step2: The dictionary wofz_impl accesses the various function implementations.
Input interface is wofz_impl[<implementation-folder>](x, y)
Step3: For argument types needing double pointers, use the ctypes pointer provided by NumPy
Step4: ctypes libraries are loaded into the dictionary dlls
Step5: mpmath (reference)
set the precision
Step6: prepare the wofz_impl entry
Step7: scipy (version > 0.14)
http://ab-initio.mit.edu/wiki/index.php/Faddeeva_Package
Step8: prepare the wofz_impl entry
Step9: cernlib-c-1
loading the external shared C library
Step10: prepare the wofz_impl entry
Step11: cernlib-c-2
loading the external shared C library
Step12: prepare the wofz_impl entry
Step13: cernlib-cuda-1
try whether PyCUDA is available for the CUDA FADDEEVA version
Step14: prepare the CUDA kernel for the wofz function
Step15: prepare the wofz_impl entry
Step16: cernlib-f90-1
import and numpy-vectorise the first f90 version
Step17: cernlib-f90-2
import and numpy-vectorise the second f90 version
Step18: cernlib-python-1
Step19: II. Accuracy Benchmark
dirty hands-on, just trying what happens
Step20: ^^^!!! cernlib-c-1 produces weird stuff. I (Adrian) found some differences compared to cernlib-c-2 but this does not seem to be all yet!
Step21: ^^^!!!
.
.
.
.
Accuracy within range 10^-8 to 10^8
(outside of this, the mpmath multiplications of extremely large and small factors do not behave well)
Step22: define range
Step23: the reference values via mpmath | Python Code:
import mpmath
Explanation: Benchmarking of various implementations of FADDEEVA's error functions
I. Setup
Import the multiprecision library mpmath as a reference for accuracy benchmarks:
End of explanation
import numpy as np
import scipy
import ctypes
import sys
Explanation: Import the rest of the usual lot:
End of explanation
wofz_impl = dict()
Explanation: The dictionary wofz_impl accesses the various function implementations.
Input interface is wofz_impl[<implementation-folder>](x, y):
x is the real and y is the imaginary part of the input, both should be numpy arrays (i.e. provide the ctypes field).
End of explanation
from numpy.ctypeslib import ndpointer
np_double_p = ndpointer(dtype=np.float64)
Explanation: For argument types needing double pointers, use the ctypes pointer provided by NumPy:
End of explanation
dlls = dict()
Explanation: ctypes libraries are loaded into the dictionary dlls:
End of explanation
mpmath.mp.dps = 50
Explanation: mpmath (reference)
set the precision:
End of explanation
def wofz(x, y):
z = mpmath.mpc(x, y)
w = mpmath.exp(-z**2) * mpmath.erfc(z * -1j)
return w.real, w.imag
wofz_impl['mp'] = np.vectorize(wofz)
Explanation: prepare the wofz_impl entry:
End of explanation
from scipy.special import wofz as scipy_wofz
Explanation: scipy (version > 0.14)
http://ab-initio.mit.edu/wiki/index.php/Faddeeva_Package
End of explanation
def wofz(x, y):
z = scipy_wofz(x + 1j*y)
return z.real, z.imag
wofz_impl['scipy'] = wofz
Explanation: prepare the wofz_impl entry:
End of explanation
dlls['c-1'] = ctypes.cdll.LoadLibrary('cernlib-c-1/wofz.so')
dlls['c-1'].errf.restype = None
dlls['c-1'].errf.argtypes = [np_double_p, np_double_p, np_double_p, np_double_p]
Explanation: cernlib-c-1
loading the external shared C library:
End of explanation
def wofz(x, y):
in_real = np.atleast_1d(x).astype(np.float64)
in_imag = np.atleast_1d(y).astype(np.float64)
out_real = np.empty(1, dtype=np.float64)
out_imag = np.empty(1, dtype=np.float64)
dlls['c-1'].errf(in_real, in_imag, out_real, out_imag)
return out_real[0], out_imag[0]
wofz_impl['c-1'] = np.vectorize(wofz)
Explanation: prepare the wofz_impl entry:
End of explanation
dlls['c-2'] = ctypes.cdll.LoadLibrary('cernlib-c-2/wofz.so')
dlls['c-2'].cerrf.restype = None
dlls['c-2'].cerrf.argtypes = [ctypes.c_double, ctypes.c_double, np_double_p, np_double_p]
Explanation: cernlib-c-2
loading the external shared C library:
End of explanation
def wofz(x, y):
in_real = ctypes.c_double(x)
in_imag = ctypes.c_double(y)
out_real = np.empty(1, dtype=np.float64)
out_imag = np.empty(1, dtype=np.float64)
dlls['c-2'].cerrf(in_real, in_imag, out_real, out_imag)
return out_real[0], out_imag[0]
wofz_impl['c-2'] = np.vectorize(wofz)
Explanation: prepare the wofz_impl entry:
End of explanation
i_pycuda = False
try:
from pycuda.autoinit import context
from pycuda import gpuarray
from pycuda.elementwise import ElementwiseKernel
i_pycuda = True
except ImportError as e:
print 'No PyCUDA available, as per error message:'
print e.message
Explanation: cernlib-cuda-1
try whether PyCUDA is available for the CUDA FADDEEVA version:
End of explanation
if i_pycuda:
kernel = ElementwiseKernel(
'double* in_real, double* in_imag, double* out_real, double* out_imag',
# 'out_real[i] = in_real[i]; out_imag[i] = in_imag[i]',
'wofz(in_real[i], in_imag[i], &out_real[i], &out_imag[i]);',
'wofz_kernel',
preamble=open('cernlib-cuda-1/wofz.cu', 'r').read()
)
Explanation: prepare the CUDA kernel for the wofz function:
End of explanation
if i_pycuda:
def wofz(x, y):
in_real = gpuarray.to_gpu(np.atleast_1d(x).astype(np.float64))
in_imag = gpuarray.to_gpu(np.atleast_1d(y).astype(np.float64))
out_real = gpuarray.empty(in_real.shape, dtype=np.float64)
out_imag = gpuarray.empty(in_imag.shape, dtype=np.float64)
kernel(in_real, in_imag, out_real, out_imag)
return out_real.get(), out_imag.get()
wofz_impl['cuda'] = wofz
Explanation: prepare the wofz_impl entry:
End of explanation
sys.path.append('cernlib-f90-1')
from wwerf import ccperrfr
wofz_impl['f90-1'] = np.vectorize(ccperrfr)
Explanation: cernlib-f90-1
import and numpy-vectorise the first f90 version:
End of explanation
sys.path.append('cernlib-f90-2')
from wwerf2 import errf
wofz_impl['f90-2'] = np.vectorize(errf)
Explanation: cernlib-f90-2
import and numpy-vectorise the second f90 version:
End of explanation
sys.path.append('cernlib-python-1')
from mywwerf import wwerf
wofz_impl['py'] = np.vectorize(wwerf)
Explanation: cernlib-python-1
End of explanation
wofz_impl['scipy'](3, 2)
wofz_impl['c-1'](3, 2)
Explanation: II. Accuracy Benchmark
dirty hands-on, just trying what happens:
End of explanation
wofz_impl['c-2'](3, 2)
wofz_impl['cuda'](3, 2)
wofz_impl['f90-1'](3, 2)
wofz_impl['f90-2'](3, 2)
wofz_impl['py'](3, 2)
Explanation: ^^^!!! cernlib-c-1 produces weird stuff. I (Adrian) found some differences compared to cernlib-c-2 but this does not seem to be all yet!
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: ^^^!!!
.
.
.
.
Accuracy within range 10^-8 to 10^8
(outside of this, the mpmath multiplications of extremely large and small factors do not behave well)
End of explanation
exp_min = -8
exp_max = 8
r = 10**np.linspace(exp_min, exp_max, 101)
x, y = np.meshgrid(r, r)
Explanation: define range:
End of explanation
wr_ref, wi_ref = wofz_impl['mp'](x, y)
for implementation, function in wofz_impl.iteritems():
wr, wi = function(x, y)
plt.figure()
plt.imshow(np.vectorize(mpmath.log10)(abs(wr - wr_ref)).astype(np.float64),
origin="bottom",extent=[exp_min,exp_max,exp_min,exp_max],aspect='auto')
plt.colorbar()
plt.suptitle(implementation)
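# A compact companion to the error maps above (a sketch): the worst absolute
# deviation of each implementation from the mpmath reference over the tested grid.
for impl_name, impl_fn in sorted(wofz_impl.items()):
    _wr, _wi = impl_fn(x, y)
    _err = abs(_wr - wr_ref) + abs(_wi - wi_ref)
    print('{}: max abs deviation = {}'.format(impl_name, max(np.asarray(_err).ravel())))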
Explanation: the reference values via mpmath:
End of explanation |
10,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read OpendTect horizons
The best way to export horizons from OpendTect is with these options
Step1: IL/XL and XY, multi-line header, multiple attributes
Load everything (default)
X and Y are loaded as cdp_x and cdp_y, to be consistent with the seisnc standard in segysak.
Step2: Load only inline, crossline, TWT
There is only one attribute here
Step3: XY only
If you have a file with no IL/XL, gio can try to load data using only X and Y
Step4: No header, more than one attribute
Step5: Sparse data
Sometimes a surface only exists at a few points, e.g. a 3D seismic interpretation grid. In general, loading data like this is completely safe if you have inline and xline locations. If you only have (x, y) locations, gio will attempt to load it, but you should inspect the result carefullly.
Step6: There's some sort of artifact with the default plot style, which uses pcolormesh I think.
Step7: Multiple horizons in one file
You can export multiple horizons from OpendTect. These will be loaded as one xarray.Dataset as different Data variables. (The actual attribute you exported from OdT is always called Z; this information is not retained in the xarray.)
Step8: Multi-horizon, no header
Unfortunately, OdT exports (x, y) in the first two columns, meaning you can't assume that columns 3 and 4 are inline, crossline. So if there's no header, and XY as well as inline/xline, you have to give the column names
Step9: Undefined values
These are exported as '1e30' by default. You can override this (not add to it, which is the default pandas behaviour) by passing one or more na_values.
Step10: Writing OdT files
You can write an OdT file using gio.to_odt() | Python Code:
import gio
ds = gio.read_odt('../../data/OdT/3d_horizon/Segment_ILXL_Single-line-header.dat')
ds
ds['twt'].plot()
Explanation: Read OpendTect horizons
The best way to export horizons from OpendTect is with these options:
x/y and inline/crossline
with header (single or multi-line, it doesn't matter)
choose all the attributes you want
On the last point, if you choose multiple horizons in one file, you can only have one attribute in the file.
IL/XL only, single-line header, multiple attributes
End of explanation
ds = gio.read_odt('../../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat')
ds
import matplotlib.pyplot as plt
plt.scatter(ds.coords['cdp_x'], ds.coords['cdp_y'], s=5)
Explanation: IL/XL and XY, multi-line header, multiple attributes
Load everything (default)
X and Y are loaded as cdp_x and cdp_y, to be consistent with the seisnc standard in segysak.
End of explanation
fname = '../../data/OdT/3d_horizon/Segment_XY-and-ILXL_Multi-line-header.dat'
names = ['Inline', 'Crossline', 'Z'] # Must match OdT DAT file.
ds = gio.read_odt(fname, names=names)
ds
Explanation: Load only inline, crossline, TWT
There is only one attribute here: Z, which is the two-way time of the horizon.
Note that when loading data from OpendTect, you always get an xarray.Dataset, even if there's only a single attribute. This is because the format supports multiple grids and we didn't want you to have to guess what a given file would produce.
End of explanation
fname = '../../data/OdT/3d_horizon/Segment_XY_Single-line-header.dat'
ds = gio.read_odt(fname, origin=(376, 812), step=(2, 2))
ds
ds['twt'].plot()
Explanation: XY only
If you have a file with no IL/XL, gio can try to load data using only X and Y:
If there's a header you can load any number of attributes.
If there's no header, you can only one attribute (e.g. TWT) automagically...
OR, if there's no header, you can provide names to tell gio what everything is.
gio must create fake inline and crossline numbers; you can provide an origin and a step size. For example, notice above that the true inline and crossline numbers are:
inline: 376, 378, 380, etc.
crossline: 812, 814, 816, etc.
So we can pass an origin of (376, 812) and a step of (2, 2) to mimic these.
Header present
End of explanation
fname = '../../data/OdT/3d_horizon/Segment_XY_No-header.dat'
ds = gio.read_odt(fname)
ds
# Raises an error:
fname = '../../data/OdT/3d_horizon/Segment_XY_No-header.dat'
ds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])
ds
ds['twt'].plot()
Explanation: No header, more than one attribute: raises an error
End of explanation
fname = '../../data/OdT/3d_horizon/Nimitz_Salmon_XY-and-ILXL_Single-line-header.dat'
ds = gio.read_odt(fname)
ds
ds['twt'].plot.imshow()
Explanation: Sparse data
Sometimes a surface only exists at a few points, e.g. a 3D seismic interpretation grid. In general, loading data like this is completely safe if you have inline and xline locations. If you only have (x, y) locations, gio will attempt to load it, but you should inspect the result carefullly.
End of explanation
ds['twt'].plot()
Explanation: There's some sort of artifact with the default plot style, which uses pcolormesh I think.
End of explanation
fname = '../../data/OdT/3d_horizon/F3_Multi-H2-H4_ILXL_Single-line-header.dat'
ds = gio.read_odt(fname)
ds
ds['F3_Demo_2_FS6'].plot()
ds['F3_Demo_4_Truncation'].plot()
Explanation: Multiple horizons in one file
You can export multiple horizons from OpendTect. These will be loaded as one xarray.Dataset as different Data variables. (The actual attribute you exported from OdT is always called Z; this information is not retained in the xarray.)
End of explanation
import gio
fname = '../../data/OdT/3d_horizon/Test_Multi_XY-and-ILXL_Z-only.dat'
ds = gio.read_odt(fname, names=['Horizon', 'X', 'Y', 'Inline', 'Crossline', 'Z'])
ds
Explanation: Multi-horizon, no header
Unfortunately, OdT exports (x, y) in the first two columns, meaning you can't assume that columns 3 and 4 are inline, crossline. So if there's no header, and XY as well as inline/xline, you have to give the column names:
End of explanation
fname = '../../data/OdT/3d_horizon/Segment_XY_No-header_NULLs.dat'
ds = gio.read_odt(fname, names=['X', 'Y', 'TWT'])
ds
ds['twt'].plot()
Explanation: Undefined values
These are exported as '1e30' by default. You can override this (not add to it, which is the default pandas behaviour) by passing one or more na_values.
End of explanation
_ = gio.to_odt('out.dat', ds)
!head out.dat
Explanation: Writing OdT files
You can write an OdT file using gio.to_odt():
End of explanation |
10,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploracion y Visualizacion de Bases de Datos
Para el siguiente ejemplo tomaremos como referencia el archivo snie1213.csv que se encuentra en la carpeta data, esta base de datos contiene los siguientes campos
Step1: Numero total de escuelas
Step2: Numero total de escuelas por entidad
Step3: Numero total de alumnos por entidad
Step4: Grafica
Step5: Numero de alumnos inscritos vs numero de alumnos existentes
Step6: Numero de alumnos que desertaron
Step7: Ordenados menor a mayor
Step8: los 5 estados con mayor desercion
Step9: Grafica
Step10: regresion lineal | Python Code:
# librerias
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.formula.api as sm
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
# leer archivo
data = pd.read_csv('../data/snie1213.csv', low_memory=False)
# verificar su contenido
data.head()
data.info()
Explanation: Exploracion y Visualizacion de Bases de Datos
Para el siguiente ejemplo tomaremos como referencia el archivo snie1213.csv que se encuentra en la carpeta data, esta base de datos contiene los siguientes campos:
* CCTT Clave del Centro de Trabajo + turno
* TIPO_EDUCA Tipo educativo
* NIVEL Nivel educativo
* SUBNIVEL Servicio educativo
* C_TIPO Clave tipo educativo
* C_NIVEL Clave nivel educativo
* C_SUBN Clave servicio educativo
* SOS Clave de sostenimiento
* CLAS Clasificador
* CONTROL Tipo de control (1= Público, 2= Privado)
* NOM_CON Nombre del control (Público y Privado)
* C_T_SOS Clave de tipo de sostenimiento
* TIPO_SOS Tipo de sostenimiento
* EML Clave de entidad + municipio + localidad
* EM Clave de entidad + municipio
* ENTIDAD Clave de Entidad Federativa
* NOM_ENT Nombre de la entidad
* MUNICIPIO Clave del municipio o delegación
* NOM_MUN Nombre del municipio o delegación
* LOCALIDAD Clave de la localidad
* NOM_LOC Nombre de localidad
* CCT Clave del Centro de Trabajo
* TURNO Clave del turno
* NOM_TURNO Nombre del turno
* NOM_CCT Nombre del Centro de Trabajo
* DOMICILIO Domicilio
* ENTRECALLE Entre calle
* YCALLE Y calle
* COLONIA Colonia
* NOM_COL Nombre de la colonia
* CODPOST Código postal
* TELEFONO Teléfono
* CORREOELE Correo electrónico
* PAGINAWEB Pagina web
* DIRECTOR Nombre del director
* PERIODO Ciclo escolar
* DGSPOH Directivo sin grupo - hombres
* DGSPOM Directivo sin grupo - mujeres
* DGSPO Directivo sin grupo
* CGPOH Directivo con grupo - hombres
* CGPOM Directivo con grupo - mujeres
* CGPO Directivo con grupo
* DOCH Docentes - hombres
* DOCM Docentes - mujeres
* DOC Docentes
* DOC_TOTH Docentes total - hombres
* DOC_TOTM Docentes total - mujeres
* DOC_TOT Docentes total
* PROMOTORH Promotor educativo - hombres
* PROMOTORM Promotor educativo - mujeres
* PROMOTOR Promotor educativo
* ADMVOH Personal administrativo - hombres
* ADMVOM Personal administrativo - mujeres
* ADMVO Personal administrativo
* PER_ESPH Personal especial - hombres
* PER_ESPM Personal especial - mujeres
* PER_ESP Personal especial
* TOT_PERSH Total de personal - hombres
* TOT_PERSM Total de personal - mujeres
* TOT_PERS Total de personal
* AULAS_USO Aulas en uso
* AULAS_EXIS Aulas existentes
* LABORATORI Laboratorios
* TALLERES Talleres
* A_1H Alumnos 1 grado - hombres
* A_1M Alumnos 1 grado - mujeres
* A_1 Alumnos 1 grado
* A_2H Alumnos 2 grado - hombres
* A_2M Alumnos 2 grado - mujeres
* A_2 Alumnos 2 grado
* A_3H Alumnos 3 grado - hombres
* A_3M Alumnos 3 grado - mujeres
* A_3 Alumnos 3 grado
* A_4H Alumnos 4 grado - hombres
* A_4M Alumnos 4 grado - mujeres
* A_4 Alumnos 4 grado
* A_5H Alumnos 5 grado - hombres
* A_5M Alumnos 5 grado - mujeres
* A_5 Alumnos 5 grado
* A_6H Alumnos 6 grado - hombres
* A_6M Alumnos 6 grado - mujeres
* A_6 Alumnos 6 grado
* A_TH Alumnos total - hombres
* A_TM Alumnos total - mujeres
* A_T Alumnos total
* A_NT Alumnos nuevo ingreso total
* A_ET Alumnos egresados total
* A_TT Alumnos titulados total
* G_1 Grupos 1 grado
* G_2 Grupos 2 grado
* G_3 Grupos 3 grado
* G_4 Grupos 4 grado
* G_5 Grupos 5 grado
* G_6 Grupos 6 grado
* G_T Grupos total
* D_1 Docentes 1 grado
* D_2 Docentes 2 grado
* D_3 Docentes 3 grado
* D_4 Docentes 4 grado
* D_5 Docentes 5 grado
* D_6 Docentes 6 grado
* D_MAS_1G Docentes que atienden a más de un grado
* D_T Docentes total (suma de docentes por grado y Docentes que atienden a más de un grado)
* ITH Inscritos total - hombres
* ITM Inscritos total - mujeres
* IT Inscritos total
* ETH Existentes total - hombres
* ETM Existentes total - mujeres
* ET Existentes total
* APRO_TH Aprobados total - hombres
* APRO_TM Aprobados total - mujeres
* APRO_T Aprobados total
* D_E_FISH Docente de educación física - hombres
* D_E_FISM Docente de educación física - mujeres
* D_E_FIS Docente de educación física
* D_A_ARTH Docente de educación artísticas - hombres
* D_A_ARTM Docente de educación artísticas - mujeres
* D_A_ART Docente de educación artísticas
* D_A_TECH Docente de educación tecnológicas - hombres
* D_A_TECM Docente de educación tecnológicas - mujeres
* D_A_TEC Docente de educación tecnológicas
* D_IDIOMAH Docente de idiomas - hombres
* D_IDIOMAM Docente de idiomas - mujeres
* D_IDIOMA Docente de idiomas
* DOC_CM Docentes que se encuentran en el programa de carrera magisterial
* TIPO_LOC Tipo de localidad (Rural, Urbana y No Especificado=" ")
* INDICE Índice de marginación
* GRADO Grado de marginación
* GRADO_TEX Grado de marginación, Texto
* AMBITO U - Urbano, R - Rural
* ALTITUD Altitud en metro sobre el nivel del mar
* EML_NO_H Distingue cuando es no nulo, a las escuelas que no se han homologado las EML_SEP vs EML_INEGI
* UCLU Escuela ubicada en el centroide de la localidad urbana (si no es nulo distingue a las escuelas urbanas que aun no se han ubicado a nivel de manzana)
* EML_INEGI Concatenación de claves de Entidad, Municipio, Localidad de INEGI
* LON_DMS Ubicación de la escuela-localidad al Oeste del Meridiano de Greenwich, expresada en grados, minutos y segundos
* LAT_DMS Ubicación de la escuela-localidad al norte del Ecuador, expresada en grados, minutos y segundos
* E_S_A Escuela siempre abierta+
* E_S Escuela segura+
* E_T_C Escuela de tiempo completo+
* PEC Programa escuela de calidad+
* PNL Programa nacional de lectura+
+La informacion de los programas fue proporcionada con caracter preliminar por parte de la Direccion General de Desarrollo de la Gestion e Innovacion Educativa en el mes de octubre 2012
End of explanation
data['NOM_ENT'].count()
Explanation: Numero total de escuelas
End of explanation
data['NOM_ENT'].value_counts()
Explanation: Numero total de escuelas por entidad
End of explanation
data.groupby(['NOM_ENT']).sum()[['A_T']]
Explanation: Numero total de alumnos por entidad
End of explanation
data.groupby(['NOM_ENT']).sum()[['A_T']].plot.bar(figsize=(20,5), title='Numero total de alumnos por entidad')
Explanation: Grafica
End of explanation
data.groupby(['NOM_ENT']).sum()[['IT','ET']]
data.groupby(['NOM_ENT']).sum()[['IT','ET']].plot.bar(figsize=(20,4))
data['RES'] = data['IT'] - data['ET']
data['RES'].head()
Explanation: Numero de alumnos inscritos vs numero de alumnos existentes
End of explanation
data.groupby(['NOM_ENT']).sum()[['RES']]
Explanation: Numero de alumnos que desertaron
End of explanation
data.groupby(['NOM_ENT']).sum()[['RES']].sort_values(by='RES')
Explanation: Ordenados menor a mayor
End of explanation
data.groupby(['NOM_ENT']).sum()[['RES']].sort_values(by='RES')[-5:]
Explanation: los 5 estados con mayor desercion
End of explanation
data.groupby(['NOM_ENT']).sum()[['RES']].plot.bar(figsize=(20,4))
Explanation: Grafica
End of explanation
import seaborn as sns
sns.jointplot('APRO_T', 'GRADO',data=data, kind='reg')
OLS
Explanation: regresion lineal
End of explanation |
10,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Raspberry Pi Programming
GPIO digital output
Following example sets PIN18 to High(3.3V).
Python's import statement includes external library
GPIO.setmode function defines which numbering method (pysical, GPIO number) is to be used.
GPIO.setup function defines the usage of GPIO pins (Input or Output)
GPIO.output function sets the GPIO pin value to be 1(3.3V) or 0(0V)
Step1: GPIO Digital input - 1
Following example read the value of PIN18
GPIO.input function returns the value of GPIO pin. 3.3V or 0V
Step2: GPIO Digital input - 2 (Pull Down)
Raspberry Pi has internal Pull Up/Down register, which can be enabled with setup() function.
Pull down register is enabled with GPIO.setup() function | Python Code:
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM) # use GPIO numbering, see output of gpio command
#GPIO.setmode(GPIO.BOARD) # use Physical pin numbering
GPIO.setup(18, GPIO.OUT) # PIN18 (Physical:Pin12) : Output
GPIO.output(18, GPIO.HIGH) # Ping18 -> High (3.3V)
Explanation: Raspberry Pi Programming
GPIO digital output
Following example sets PIN18 to High(3.3V).
Python's import statement includes external library
GPIO.setmode function defines which numbering method (pysical, GPIO number) is to be used.
GPIO.setup function defines the usage of GPIO pins (Input or Output)
GPIO.output function sets the GPIO pin value to be 1(3.3V) or 0(0V)
End of explanation
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM) # use GPIO numbering, see output of gpio command
GPIO.setup(18, GPIO.IN) # PIN18: Input
if GPIO.input(18) == GPIO.HIGH:
print('PIN18: HIGH')
else:
print('PIN18: LOW')
Explanation: GPIO Digital input - 1
Following example read the value of PIN18
GPIO.input function returns the value of GPIO pin. 3.3V or 0V
End of explanation
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM) # use GPIO numbering, see output of gpio command
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) # PIN18: Input (Pull Down enabled)
if GPIO.input(18) == GPIO.HIGH:
print('PIN18: HIGH')
else:
print('PIN18: LOW')
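# Housekeeping (a suggestion, not part of the original example): release the GPIO
# channels when the script is finished so later runs don't warn about pins in use.
GPIO.cleanup()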
Explanation: GPIO Digital input - 2 (Pull Down)
Raspberry Pi has internal Pull Up/Down register, which can be enabled with setup() function.
Pull down register is enabled with GPIO.setup() function
End of explanation |
10,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align = 'center'> Neural Networks Demystified </h1>
<h2 align = 'center'> Part 6
Step1: So far we’ve built a neural network in python, computed a cost function to let us know how well our network is performing, computed the gradient of our cost function so we can train our network, and last time we numerically validated our gradient computations. After all that work, it’s finally time to train our neural network.
Back in part 3, we decided to train our network using gradient descent. While gradient descent is conceptually pretty straight forward, its implementation can actually be quite complex- especially as we increase the size and number of layers in our neural network. If we just march downhill with consistent step sizes, we may get stuck in a local minimum or flat spot, we may move too slowly and never reach our minimum, or we may move to quickly and bounce out of our minimum. And remember, all this must happen in high-dimensional space, making things significantly more complex. Gradient descent is a wonderfully clever method, but provides no guarantees that we will converge to a good solution, that we will converge to a solution in a certain amount of time, or that we will converge to a solution at all.
The good and bad news here is that this problem is not unique to Neural Networks - there’s an entire field dedicated to finding the best combination of inputs to minimize the output of an objective function
Step2: If we plot the cost against the number of iterations through training, we should see a nice, monotonically decreasing function. Further, we see that the number of function evaluations required to find the solution is less than 100, and far less than the 10^27 function evaluation that would have been required to find a solution by brute force, as shown in part 3. Finally, we can evaluate our gradient at our solution and see very small values – which make sense, as our minimum should be quite flat.
Step3: The more exciting thing here is that we finally have a trained network that can predict your score on a test based on how many hours you sleep and how many hours you study the night before. If we run our training data through our forward method now, we see that our predictions are excellent. We can go one step further and explore the input space for various combinations of hours sleeping and hours studying, and maybe we can find an optimal combination of the two for your next test. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('9KM9Td6RVgQ')
Explanation: <h1 align = 'center'> Neural Networks Demystified </h1>
<h2 align = 'center'> Part 6: Training </h2>
<h4 align = 'center' > @stephencwelch </h4>
End of explanation
%pylab inline
#Import code from previous videos:
from partFive import *
from scipy import optimize
class trainer(object):
def __init__(self, N):
#Make Local reference to network:
self.N = N
def callbackF(self, params):
self.N.setParams(params)
self.J.append(self.N.costFunction(self.X, self.y))
def costFunctionWrapper(self, params, X, y):
self.N.setParams(params)
cost = self.N.costFunction(X, y)
grad = self.N.computeGradients(X,y)
return cost, grad
def train(self, X, y):
#Make an internal variable for the callback function:
self.X = X
self.y = y
#Make empty list to store costs:
self.J = []
params0 = self.N.getParams()
options = {'maxiter': 200, 'disp' : True}
_res = optimize.minimize(self.costFunctionWrapper, params0, jac=True, method='BFGS', \
args=(X, y), options=options, callback=self.callbackF)
self.N.setParams(_res.x)
self.optimizationResults = _res
Explanation: So far we’ve built a neural network in python, computed a cost function to let us know how well our network is performing, computed the gradient of our cost function so we can train our network, and last time we numerically validated our gradient computations. After all that work, it’s finally time to train our neural network.
Back in part 3, we decided to train our network using gradient descent. While gradient descent is conceptually pretty straight forward, its implementation can actually be quite complex- especially as we increase the size and number of layers in our neural network. If we just march downhill with consistent step sizes, we may get stuck in a local minimum or flat spot, we may move too slowly and never reach our minimum, or we may move to quickly and bounce out of our minimum. And remember, all this must happen in high-dimensional space, making things significantly more complex. Gradient descent is a wonderfully clever method, but provides no guarantees that we will converge to a good solution, that we will converge to a solution in a certain amount of time, or that we will converge to a solution at all.
The good and bad news here is that this problem is not unique to Neural Networks - there’s an entire field dedicated to finding the best combination of inputs to minimize the output of an objective function: the field of Mathematical Optimization. The bad news is that optimization can be a bit overwhelming; there are many different techniques we could apply to our problem.
Part of what makes the optimization challenging is the broad range of approaches covered - from very rigorous, theoretical methods to hands-on, more heuristics-driven methods. Yann Lecun’s 1998 publication Efficient BackProp presents an excellent review of various optimization techniques as applied to neural networks.
Here, we’re going to use a more sophisticated variant on gradient descent, the popular Broyden-Fletcher-Goldfarb-Shanno numerical optimization algorithm. The BFGS algorithm overcomes some of the limitations of plain gradient descent by estimating the second derivative, or curvature, of the cost function surface, and using this information to make more informed movements downhill. BFGS will allow us to find solutions more often and more quickly.
We’ll use the BFGS implementation built into the scipy optimize package, specifically within the minimize function. To use BFGS, the minimize function requires us to pass in an objective function that accepts a vector of parameters, input data, and output data, and returns both the cost and gradients. Our neural network implementation doesn’t quite follow these semantics, so we’ll use a wrapper function to give it this behavior. We’ll also pass in initial parameters, set the jacobian parameter to true since we’re computing the gradient within our neural network class, set the method to BFGS, pass in our input and output data, and some options. Finally, we’ll implement a callback function that allows us to track the cost function value as we train the network. Once the network is trained, we’ll replace the original, random parameters, with the trained parameters.
End of explanation
NN = Neural_Network()
T = trainer(NN)
T.train(X,y)
plot(T.J)
grid(1)
xlabel('Iterations')
ylabel('Cost')
NN.costFunctionPrime(X,y)
Explanation: If we plot the cost against the number of iterations through training, we should see a nice, monotonically decreasing function. Further, we see that the number of function evaluations required to find the solution is less than 100, and far less than the 10^27 function evaluation that would have been required to find a solution by brute force, as shown in part 3. Finally, we can evaluate our gradient at our solution and see very small values – which make sense, as our minimum should be quite flat.
End of explanation
NN.forward(X)
y
#Test network for various combinations of sleep/study:
hoursSleep = linspace(0, 10, 100)
hoursStudy = linspace(0, 5, 100)
#Normalize data (same way training data way normalized)
hoursSleepNorm = hoursSleep/10.
hoursStudyNorm = hoursStudy/5.
#Create 2-d versions of input for plotting
a, b = meshgrid(hoursSleepNorm, hoursStudyNorm)
#Join into a single input matrix:
allInputs = np.zeros((a.size, 2))
allInputs[:, 0] = a.ravel()
allInputs[:, 1] = b.ravel()
allOutputs = NN.forward(allInputs)
#Contour Plot:
yy = np.dot(hoursStudy.reshape(100,1), np.ones((1,100)))
xx = np.dot(hoursSleep.reshape(100,1), np.ones((1,100))).T
CS = contour(xx,yy,100*allOutputs.reshape(100, 100))
clabel(CS, inline=1, fontsize=10)
xlabel('Hours Sleep')
ylabel('Hours Study')
#3D plot:
##Uncomment to plot out-of-notebook (you'll be able to rotate)
#%matplotlib qt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(xx, yy, 100*allOutputs.reshape(100, 100), \
cmap=cm.jet)
ax.set_xlabel('Hours Sleep')
ax.set_ylabel('Hours Study')
ax.set_zlabel('Test Score')
Explanation: The more exciting thing here is that we finally have a trained network that can predict your score on a test based on how many hours you sleep and how many hours you study the night before. If we run our training data through our forward method now, we see that our predictions are excellent. We can go one step further and explore the input space for various combinations of hours sleeping and hours studying, and maybe we can find an optimal combination of the two for your next test.
End of explanation |
10,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Methods for Sampling from the Unit Simplex
1. Uniform Distributions on the Unit Simplex
A $\textit{n}$-unit simplex (https
Step2: 2. Exponential Distributions on the Unit Simplex
The exponential distribution on the $\textit{n}$-unit simplex $\Delta^n$ is defined as
Step3: 2.2 Sampling from Cubes | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import ternary
## generate order statistics from uniform distribution between 0 and 1
U = [[np.random.uniform(), np.random.uniform()] for i in range(50000)]
for u in U:
u.sort()
## calculate the spacing and plot the sampling results
x_1 = [u[0] for u in U]
x_2 = [u[1]-u[0] for u in U]
plt.plot(x_1,x_2,'.',markersize = 1)
plt.xlabel(r'x_1')
plt.ylabel(r'x_2')
Explanation: Methods for Sampling from the Unit Simplex
1. Uniform Distributions on the Unit Simplex
An $\textit{n}$-unit simplex (https://en.wikipedia.org/wiki/Simplex#The_standard_simplex) is defined as the subset of $\mathbb{R}^{n+1}$ given by
$\Delta^{n} = \{(x_1,x_2,...,x_{n+1}) \in \mathbb{R}^{n+1} \mid \sum_{k=1}^{n+1}{x_k}=1; x_k \geq 0, k = 1,2,...,n+1\}$. Methods for sampling directly from the uniform distribution on this $\textit{n}$-unit simplex are not that obvious. The easiest method I have found for this task is from Luc Devroye's book "$\textbf{Non-Uniform Random Variate Generation}$" $\textbf{Chapter V}$ (http://www.nrbook.com/devroye/Devroye_files/chapter_five.pdf). This method utilizes the spacings between the order statistics of random samples from the uniform distribution between 0 and 1.
For simplicity and easy visualization, here we sample from the $\textbf{3-unit simplex}$.
End of explanation
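The same spacing construction generalises to any dimension. As a small illustrative addition (not part of the original notebook): sort n Uniform(0,1) draws and take the n+1 gaps between 0, the sorted values, and 1; the resulting vector is uniformly distributed on the n-unit simplex.
def uniform_simplex_sample(n):
    # gaps between 0, the sorted uniforms, and 1 are uniform on the n-unit simplex
    u = np.sort(np.random.uniform(size=n))
    return np.diff(np.concatenate(([0.0], u, [1.0])))

# e.g. a single point on the 3-unit simplex (4 coordinates summing to 1)
print(uniform_simplex_sample(3))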
def restricted_exp_sample(c = 1, x_low = 0, x_up = 1):
    r"""Sample from the exponential distribution $p(x) \propto \text{exp}(c*x)$ restricted to the interval [x_low, x_up], via inverse-CDF sampling."""
    u = np.random.uniform()
    return 1.0/c * np.log(np.exp(c*x_low) + u * (np.exp(c*x_up) - np.exp(c*x_low)))
def Gibbs_Sampler(a_list):
    """Random-scan Gibbs sampler for the exponential distribution on the unit simplex with coefficients a_list."""
    n = len(a_list)
    # reduced form in the first n-1 coordinates, after eliminating the last one (= 1 - sum of the others)
    a_p = np.array(a_list[0:-1]) - a_list[-1]
    x = np.zeros(n-1)
    # run a randomly chosen number of single-coordinate updates
    K = np.random.choice(range(50,100))
    for k in range(K):
        idx = np.random.choice(range(n-1))
        x_low = 0
        x_up = 1 - np.sum([x[i] for i in range(n-1) if i != idx])
        x[idx] = restricted_exp_sample(a_p[idx], x_low, x_up)
    return list(x) + [1-np.sum(x)]
a_list = [1,3,6]
num_samples = 5000
samples = []
for i in range(num_samples):
samples.append(Gibbs_Sampler(a_list))
fontsize = 20
figure, tax = ternary.figure(scale = 1)
tax.boundary(linewidth=2.0)
tax.gridlines(multiple = 0.2, color="blue")
tax.clear_matplotlib_ticks()
tax.ticks(locations = np.linspace(0,1,5))
tax.left_axis_label("$x_1$", fontsize = fontsize)
tax.right_axis_label("$x_3$", fontsize = fontsize)
tax.bottom_axis_label("$x_2$", fontsize = fontsize)
tax.scatter(samples, marker = ".", color = "blue")
x_1 = [s[0] for s in samples]
x_2 = [s[1] for s in samples]
plt.plot(x_1, x_2, '.')
plt.xlabel("$x_1$", fontsize = fontsize)
plt.ylabel("$x_2$", fontsize = fontsize)
plt.plot(np.linspace(0,1,20), 1-np.linspace(0,1,20), color = "r")
Explanation: 2. Exponential Distributions on the Unit Simplex
The exponential distribution on the $\textit{n}$-unit simplex $\Delta^n$ is defined as:
$$p((x_1,x_2,...,x_{n+1})) = \frac{\text{exp}(a_1 x_1 + a_2 x_2 + ... + a_{n+1}x_{n+1})}{Z}$$
$\text{where }(x_1,x_2,...,x_{n+1}) \in \Delta^n;$
$Z = \sum_{i=1}^{n+1}{\frac{\text{exp}(a_i)}{\prod_{j\neq i}{(a_i - a_j)}}},$ and $a_1,a_2,...,a_{n+1}$ are constants
This distribution arises from one of my research projects, in which sampling from the distribution is required. I originally thought that the distribution looked standard enough that there must be some method for sampling directly from it, like the method for sampling from the uniform distribution on the unit simplex. However, after some searching, I realized that this is not the case; I still have not found an existing method for sampling directly from it.
To work around this, I turned to Markov Chain Monte Carlo methods.
2.1 Gibbs Sampler
After a simple manipulation, we can see that sampling from the above distribution is equivalent to sampling from the following distribution:
$$p((x_1,x_2,...,x_n)) = \text{exp}(a_{n+1}) \cdot \frac{\text{exp}((a_1 - a_{n+1}) x_1 + (a_2 - a_{n+1}) x_2 + ... + (a_{n} - a_{n+1})x_{n})}{Z}$$
where $0 \leq \sum_{i=1}^{n}{x_i} \leq 1$ and $x_i \geq 0$ for $i = 1,2,...,n.$
One method for sampling from this distribution is the Gibbs sampler, which samples each individual $x_i$ successively from its conditional distribution given all other $x_j, j \neq i$.
End of explanation
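For reference, the conditional density that each Gibbs update draws from follows directly from the reduced joint density above: holding the other coordinates fixed,
$$p(x_i \mid x_{-i}) \propto \text{exp}\big((a_i - a_{n+1})\,x_i\big), \qquad 0 \leq x_i \leq 1 - \sum_{j \neq i}{x_j},$$
which is exactly the truncated exponential that restricted_exp_sample draws from, with rate a_p[idx] and upper limit x_up in the code above.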
def sampling_from_cubes(a_list):
a_p = np.array(a_list[0:-1]) - a_list[-1]
while True:
x = []
for v in a_p:
x.append(restricted_exp_sample(v, 0, 1))
if np.sum(x) <= 1.0:
break
return x + [1-np.sum(x)]
a_list = [1,3,6]
num_samples = 5000
samples = []
for n in range(num_samples):
samples.append(sampling_from_cubes(a_list))
fontsize = 20
figure, tax = ternary.figure(scale = 1)
tax.boundary(linewidth=2.0)
tax.gridlines(multiple = 0.2, color="blue")
tax.clear_matplotlib_ticks()
tax.ticks(locations = np.linspace(0,1,5))
tax.left_axis_label("$x_1$", fontsize = fontsize)
tax.right_axis_label("$x_3$", fontsize = fontsize)
tax.bottom_axis_label("$x_2$", fontsize = fontsize)
tax.scatter(samples, marker = ".", color = "blue")
x_1 = [s[0] for s in samples]
x_2 = [s[1] for s in samples]
plt.plot(x_1, x_2, '.')
plt.xlabel("$x_1$", fontsize = fontsize)
plt.ylabel("$x_2$", fontsize = fontsize)
plt.plot(np.linspace(0,1,20), 1-np.linspace(0,1,20), color = "r")
Explanation: 2.2 Sampling from Cubes
An alternative to the Gibbs sampler is rejection sampling: draw each coordinate independently from its truncated exponential on [0, 1] (the enclosing unit cube) and reject any draw whose coordinates sum to more than 1, i.e. any draw that falls outside the simplex.
End of explanation |
10,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This paper presents a novel set of stimuli, which we refer to as "logpolar gratings" or "scaled gratings". These stimuli are sinusoidal gratings whose spatial frequency decreases as the reciprocal of the eccentricity. To be specific local spatial frequency is
$$\omega_l(r,\theta)=\frac{\sqrt{\omega_r^2+\omega_a^2}}{r}$$
where coordinates $(r,\theta)$ specify the eccentricity and polar angle, relative to the center of the image. The angular frequency $\omega_a$ is an integer specifying the number of grating cycles per revolution around the image, while the radial frequency $\omega_r$ specifies the number of radians per unit increase in $\ln(r)$.
See the paper for more details about the stimuli and their specific use in this experiment. This notebook shows how you can use the software included in this repo to create these stimuli and presents linear approximations of them (but see spatial-frequency-stimuli for a stand-alone repo with a command-line interface for creating sets of these stimuli).
Step1: Creation of stimuli
Our stimuli can be created by calling sfp.stimuli.log_polar_grating, and specifying the size and several other parameters (note that we only support square gratings)
Step2: In the visualization above, with w_r=0,w_a=3, we can see clearly that w_a specifies the number of grating cycles per revolution across the image.
Step3: w_r is a little bit harder to grok, but in the above we can see that increasing it increases the number of times the grating cycles in the radial direction.
In our experiment, we created an anti-aliasing mask to put at the center of each stimulus image, as well as rescaled the stimuli to account for the projector's modulation transfer function (MTF).
Linear approximations of stimuli
The following shows some linear approximations of our stimuli. First, we generate the stimuli, and then we calculate the analytic spatial frequency with respect to x and y everywhere in the image.
Step4: We then use these spatial frequency maps to create a linear approximation
Step5: If we decrease the number of windows, we can see this more clearly | Python Code:
# import necessary packages
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('..')
import sfp
import pandas as pd
import torch
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import seaborn as sns
import pyrtools as pt
Explanation: This paper presents a novel set of stimuli, which we refer to as "logpolar gratings" or "scaled gratings". These stimuli are sinusoidal gratings whose spatial frequency decreases as the reciprocal of the eccentricity. To be specific, the local spatial frequency is
$$\omega_l(r,\theta)=\frac{\sqrt{\omega_r^2+\omega_a^2}}{r}$$
where coordinates $(r,\theta)$ specify the eccentricity and polar angle, relative to the center of the image. The angular frequency $\omega_a$ is an integer specifying the number of grating cycles per revolution around the image, while the radial frequency $\omega_r$ specifies the number of radians per unit increase in $\ln(r)$.
See the paper for more details about the stimuli and their specific use in this experiment. This notebook shows how you can use the software included in this repo to create these stimuli and presents linear approximations of them (but see spatial-frequency-stimuli for a stand-alone repo with a command-line interface for creating sets of these stimuli).
End of explanation
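To make the definition concrete before turning to the package functions: a grating whose phase is $\omega_r \ln(r) + \omega_a \theta + \phi$ has the local spatial frequency quoted above (it falls off as $1/r$). The following is a minimal numpy sketch of that construction; it is an illustrative assumption consistent with the formula, not the sfp implementation itself.
def simple_log_polar_grating(size, w_r=0, w_a=6, phi=0):
    # pixel coordinates relative to the image centre
    x, y = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0)
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    r[r == 0] = 1e-12  # avoid log(0) at the exact centre
    # phase advances by w_r radians per unit increase in ln(r) and w_a cycles per revolution
    return np.cos(w_r * np.log(r) + w_a * theta + phi)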
stim = sfp.stimuli.log_polar_grating(200, w_a=3)
pt.imshow(stim);
Explanation: Creation of stimuli
Our stimuli can be created by calling sfp.stimuli.log_polar_grating, and specifying the size and several other parameters (note that we only support square gratings):
End of explanation
stim = []
for i in [3, 6]:
stim.append(sfp.stimuli.log_polar_grating(200, w_r=i))
pt.imshow(stim);
Explanation: In the visualization above, with w_r=0, w_a=3, we can see clearly that w_a specifies the number of grating cycles per revolution around the image.
End of explanation
w_r = 6
w_a = 11
phase = 0
stim = sfp.stimuli.log_polar_grating(500, w_r=w_r, w_a=w_a, phi=phase)
dx, dy, _, _ = sfp.stimuli.create_sf_maps_cpp(stim.shape[0], w_r=w_r, w_a=w_a)
pt.imshow([stim], zoom=.5);
Explanation: w_r is a little bit harder to grok, but in the above we can see that increasing it increases the number of times the grating cycles in the radial direction.
In our experiment, we created an anti-aliasing mask to put at the center of each stimulus image, as well as rescaled the stimuli to account for the projector's modulation transfer function (MTF).
Linear approximations of stimuli
The following shows some linear approximations of our stimuli. First, we generate the stimuli, and then we calculate the analytic spatial frequency with respect to x and y everywhere in the image.
End of explanation
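As a hedged sketch of what those analytic maps represent (the sfp function's exact units and return conventions may differ), the derivatives of the phase $\omega_r \ln(r) + \omega_a \theta$ with respect to x and y can be written out directly:
def analytic_phase_derivatives(size, w_r, w_a):
    x, y = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0)
    r2 = x**2 + y**2
    r2[r2 == 0] = 1e-12
    # d(ln r)/dx = x/r^2 and d(theta)/dx = -y/r^2 (and symmetrically for y)
    dphase_dx = (w_r * x - w_a * y) / r2
    dphase_dy = (w_r * y + w_a * x) / r2
    # note: sqrt(dphase_dx**2 + dphase_dy**2) = sqrt(w_r**2 + w_a**2)/r,
    # i.e. the local frequency from the introduction (up to the 2*pi convention)
    return dphase_dx, dphase_dy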
masked_grating, masked_approx = sfp.plotting.plot_grating_approximation(stim, dx, dy, num_windows=20, w_r=w_r, w_a=w_a, phase=phase)
Explanation: We then use these spatial frequency maps to create a linear approximation: the top shows a windowed view of our actual stimuli, while the bottom shows a windowed view of the linear approximation. The approximation is constructed by creating a standard sinusoid (that is, one with a constant spatial frequency and orientation) from the analytic dx and dy calculated above at the center of each of the little windows. For relatively small values of w_r, w_a and relatively large numbers of windows, the approximation will be pretty good, though you can see it start to fail near the center of the image, where the actual stimulus's spatial frequency is changing rapidly, and so the linear approximation (which has a constant spatial frequency within each window) is notably different.
End of explanation
masked_grating, masked_approx = sfp.plotting.plot_grating_approximation(stim, dx, dy, num_windows=10, w_r=w_r, w_a=w_a, phase=phase)
Explanation: If we decrease the number of windows, we can see this more clearly:
End of explanation |
10,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SVM classification/SMOTE oversampling for an imbalanced data set
Date created
Step1: <h3>II. Preprocessing </h3>
We process the missing values first, dropping columns which have a large number of missing values and imputing values for those that have only a few missing values.
The Random Forest variable importance is used to rank the variables in terms of their importance. The one-class SVM exercise has a more detailed version of these steps.
Step2: <h3>III. SVM Classification </h3>
<h4> Preprocessing </h4>
The SVM is sensitive to feature scale so the first step is to center and normalize the data. The train and test sets are scaled separately using the mean and variance computed from the training data. This is done to estimate the ability of the model to generalize.
Step3: <h4> Finding parameters </h4>
The usual way to select parameters is via grid-search and cross-validation (CV). The scoring is based on the accuracy. When the classes are imbalanced, the true positive of the majority class dominates. Often, there is a high cost associated with the misclassification of the minority class, and in those cases alternative scoring measures such as the F1 and $F_{\beta}$ scores or the Matthews Correlation Coefficient (which uses all four values of the confusion matrix) are used.
In CV experiments on this data, the majority class still dominates so that for the best CV F1-scores, the True Negative Rate (TNR - the rate at which the minority class is correctly classified) is zero.
Instead of automating the selection of hyperparameters, I have manually selected <i>C</i> and $\gamma$ values for which the precision/recall/F1 values as well as the TNR are high.
An example is shown below.
Step4: For these manually selected parameters, the TNR is 0.38, the Matthews correlation coefficient is 0.21 and the precision/recall/F1 is in the 0.86 - 0.90 range. Selecting the best CV score (usually in the 0.90 range), on the other hand, would have given a TNR of 0 for all the scoring metrics I looked at.
<h4>The Pipeline -- Oversampling, classification and ROC computations </h4>
The imblearn package includes a pipeline module which allows one to chain transformers, resamplers and estimators. We compute the ROC curves for each of the oversampling ratios and corresponding hyperparameters C and gamma and for this we use the pipeline to oversample with SMOTE and classify with the SVM. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split as tts
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.metrics import roc_curve, auc
from __future__ import division
import warnings
warnings.filterwarnings("ignore")
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom.data"
secom = pd.read_table(url, header=None, delim_whitespace=True)
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom_labels.data"
y = pd.read_table(url, header=None, usecols=[0], squeeze=True, delim_whitespace=True)
print 'The dataset has {} observations/rows and {} variables/columns.' \
.format(secom.shape[0], secom.shape[1])
print 'The ratio of majority class to minority class is {}:1.' \
.format(int(y[y == -1].size/y[y == 1].size))
Explanation: SVM classification/SMOTE oversampling for an imbalanced data set
Date created: Oct 14, 2016
Last modified: Nov 16, 2016
Tags: SVM, SMOTE, ROC/AUC, oversampling, imbalanced data set, semiconductor data
About: Rebalance imbalanced semiconductor manufacturing dataset by oversampling the minority class using SMOTE. Classify using SVM. Assess the value of oversampling using ROC/AUC.
<h3>I. Introduction</h3>
The SECOM dataset in the UCI Machine Learning Repository is semiconductor manufacturing data. There are 1567 records, 590 anonymized features and 104 fails. This makes it an imbalanced dataset with a 14:1 ratio of passes to fails. The process yield has a simple pass/fail response (encoded -1/1).
<h4>Objective</h4>
We consider some of the different approaches to classify imbalanced data. In the previous example we looked at one-class SVM.
Another strategy is to rebalance the dataset by oversampling the minority class and/or undersampling the majority class. This is done to improve the sensitivity (i.e. the true positive rate) of the minority class. For this exercise, we will look at:
- rebalancing the dataset using SMOTE (which oversamples the minority class)
- ROC curves for different oversampling ratios
<h4>Methodology</h4>
The imblearn (imbalanced-learn) toolbox has many methods for oversampling/undersampling. We will use the SMOTE (Synthetic Minority Over-sampling Technique) method introduced in 2002 by Chawla et al. <a href="#ref1">[1]</a>, <a href="#ref2">[2]</a>. With SMOTE, synthetic examples are interpolated along the line segments joining some/all of the <i>k</i> minority class nearest neighbors.
In the experiment, the oversampling rate is varied between 10-70%, in 10% increments. The percentage represents the final minority class fraction after oversampling: if the majority class has 1000 data points (and the minority class 50), at 10% the minority class will have 100 data points after oversampling (not 5 or 50+5 = 55).
The rebalanced data is classified using an SVM. The imblearn toolbox has a pipeline method which will be used to chain all the steps. The SMOTE+SVM method is evaluated by the area under the Receiver Operating Characteristic curve (AUC).
<h4>Preprocessing</h4>
The data represents measurements from a large number of processes or sensors and many of the records are missing. In addition some measurements are identical/constant and so not useful for prediction. We will remove those columns with high missing count or constant values.
The Random Forest variable importance is used to rank the variables in terms of their importance. For the random forest, we will impute the remaining missing values with the median for the column.
We will additionally scale the data that is applied to the SVM. We will use the <i>sklearn preprocessing</i> module for both imputing and scaling.
These are the same steps used for the one-class SVM and a more detailed explanation can be seen there.
End of explanation
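The interpolation step at the heart of SMOTE is simple enough to state in a few lines. The following is a schematic illustration of how a single synthetic minority point is generated, not the imblearn implementation:
# Schematic of SMOTE's core step (illustration only): pick a minority sample x_i,
# one of its k nearest minority neighbours x_nn, and interpolate a random
# fraction of the way along the segment joining them.
def smote_one_synthetic(x_i, x_nn):
    gap = np.random.uniform()            # random fraction in [0, 1)
    return x_i + gap * (x_nn - x_i)      # synthetic sample on the segment x_i -> x_nn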
# dropping columns which have large number of missing entries
m = map(lambda x: sum(secom[x].isnull()), xrange(secom.shape[1]))
m_200thresh = filter(lambda i: (m[i] > 200), xrange(secom.shape[1]))
secom_drop_200thresh = secom.dropna(subset=[m_200thresh], axis=1)
dropthese = [x for x in secom_drop_200thresh.columns.values if \
secom_drop_200thresh[x].std() == 0]
secom_drop_200thresh.drop(dropthese, axis=1, inplace=True)
print 'The SECOM data set now has {} variables.'\
.format(secom_drop_200thresh.shape[1])
# imputing missing values for the random forest
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
secom_imp = pd.DataFrame(imp.fit_transform(secom_drop_200thresh))
# use Random Forest to assess variable importance
rf = RandomForestClassifier(n_estimators=100, random_state=7)
rf.fit(secom_imp, y)
# sorting features according to their rank
importance = rf.feature_importances_
ranked_indices = np.argsort(importance)[::-1]
Explanation: <h3>II. Preprocessing </h3>
We process the missing values first, dropping columns which have a large number of missing values and imputing values for those that have only a few missing values.
The Random Forest variable importance is used to rank the variables in terms of their importance. The one-class SVM exercise has a more detailed version of these steps.
End of explanation
# split data into train and holdout sets
# stratify the sample used for modeling to preserve the class proportions
X_train, X_holdout, y_train, y_holdout = tts(secom_imp[ranked_indices[:40]], y, \
test_size=0.2, stratify=y, random_state=5)
print 'Train data: The majority/minority class have {} and {} elements respectively.'\
.format(y_train[y_train == -1].size, y_train[y_train == 1].size)
print 'The maj/min class ratio is: {0:2.0f}' \
.format(round(y_train[y_train == -1].size/y_train[y_train == 1].size))
print 'Holdout data: The majority/minority class have {} and {} elements respectively.'\
.format(y_holdout[y_holdout == -1].size, y_holdout[y_holdout == 1].size)
print 'The maj/min class ratio for the holdout set is: {0:2.0f}' \
.format(round(y_holdout[y_holdout == -1].size/y_holdout[y_holdout == 1].size))
# scaling the split data. The holdout data uses scaling parameters
# computed from the training data
standard_scaler = StandardScaler()
X_train_scaled = pd.DataFrame(standard_scaler.fit_transform(X_train), \
index=X_train.index)
X_holdout_scaled = pd.DataFrame(standard_scaler.transform(X_holdout))
# Note: we convert to a DataFrame because the plot functions
# we will use need DataFrame inputs.
Explanation: <h3>III. SVM Classification </h3>
<h4> Preprocessing </h4>
The SVM is sensitive to feature scale so the first step is to center and normalize the data. The train and test sets are scaled separately using the mean and variance computed from the training data. This is done to estimate the ability of the model to generalize.
End of explanation
# oversampling
ratio = 0.5
smote = SMOTE(ratio = ratio, kind='regular')
smox, smoy = smote.fit_sample(X_train_scaled, y_train)
print 'Before resampling: \n\
The majority/minority class have {} and {} elements respectively.'\
.format(y_train[y_train == -1].size, y_train[y_train == 1].size)
print 'After oversampling at {}%: \n\
The majority/minority class have {} and {} elements respectively.'\
.format(ratio, smoy[smoy == -1].size, smoy[smoy == 1].size)
# plotting minority class distribution after SMOTE
# column 4 displayed
from IPython.html.widgets import interact
@interact(ratio=[0.1,1.0])
def plot_dist(ratio):
sns.set(style="white", font_scale=1.3)
fig, ax = plt.subplots(figsize=(7,5))
smote = SMOTE(ratio = ratio, kind='regular')
smox, smoy = smote.fit_sample(X_train_scaled, y_train)
smox_df = pd.DataFrame(smox)
ax = sns.distplot(smox_df[4][smoy == 1], color='b', \
kde=False, label='after')
ax = sns.distplot(X_train_scaled[4][y_train == 1], color='r', \
kde=False, label='before')
ax.set_ylim([0, 130])
ax.set(xlabel='')
ax.legend(title='Ratio = {}'.format(ratio))
plt.title('Minority class distribution before and after oversampling')
plt.show()
# classification results
from sklearn.metrics import confusion_matrix, matthews_corrcoef,\
classification_report, roc_auc_score, accuracy_score
# manually selected parameters
clf = SVC(C = 2, gamma = .0008)
clf.fit(smox, smoy)
y_predicted = clf.predict(X_holdout_scaled)
print 'The accuracy is: {0:4.2} \n' \
.format(accuracy_score(y_holdout, y_predicted))
print 'The confusion matrix: '
cm = confusion_matrix(y_holdout, y_predicted)
print cm
print '\nThe True Negative rate is: {0:4.2}' \
.format(float(cm[1][1])/np.sum(cm[1]))
print '\nThe Matthews correlation coefficient: {0:4.2f} \n' \
.format(matthews_corrcoef(y_holdout, y_predicted))
print(classification_report(y_holdout, y_predicted))
print 'The AUC is: {0:4.2}'\
.format(roc_auc_score(y_holdout, y_predicted))
Explanation: <h4> Finding parameters </h4>
The usual way to select parameters is via grid-search and cross-validation (CV). The scoring is based on the accuracy. When the classes are imbalanced, the true positives of the majority class dominate. Often, there is a high cost associated with the misclassification of the minority class, and in those cases alternative scoring measures such as the F1 and $F_{\beta}$ scores or the Matthews Correlation Coefficient (which uses all four values of the confusion matrix) are used.
In CV experiments on this data, the majority class still dominates so that for the best CV F1-scores, the True Negative Rate (TNR - the rate at which the minority class is correctly classified) is zero.
Instead of automating the selection of hyperparameters, I have manually selected <i>C</i> and $\gamma$ values for which the precision/recall/F1 values as well as the TNR are high.
An example is shown below.
End of explanation
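Since the Matthews correlation coefficient is used as a selection criterion, it may help to see that it really does use all four cells of the confusion matrix. A quick check against the cm computed above (this should match sklearn's matthews_corrcoef up to floating point; the tn/fp/fn/tp unpacking assumes the sorted label order [-1, 1]):
tn, fp, fn, tp = cm.ravel()
mcc_manual = (tp*tn - fp*fn) / np.sqrt(float((tp+fp)*(tp+fn)*(tn+fp)*(tn+fn)))
print 'Manually computed MCC: {0:4.2f}'.format(mcc_manual)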
# oversampling, classification and computing ROC values
fpr = dict()
tpr = dict()
roc_auc = dict()
ratio = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
C = [3, 3, 3, 2, 2, 2, 2]
gamma = [.02, .009, .009, .005, .0008, .0009, .0007]
estimators = [('smt', SMOTE(random_state=42)),
('clf', SVC(probability=True, random_state=42))]
pipe = Pipeline(estimators)
print pipe
for i, ratio, C, gamma in zip(range(7), ratio, C, gamma):
pipe.set_params(smt__ratio = ratio, clf__C = C, clf__gamma = gamma)
probas_ = pipe.fit(X_train_scaled, y_train).predict_proba(X_holdout_scaled)
fpr[i], tpr[i], _ = roc_curve(y_holdout, probas_[:,1])
roc_auc[i] = auc(fpr[i], tpr[i])
# plotting the ROC curves
def plot_roc(fpr, tpr, roc_auc):
colors = ['darkorange', 'deeppink', 'red', 'aqua', 'cornflowerblue','navy', 'blue']
plt.figure(figsize=(10,8.5))
for i, color in zip(range(7), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=2, linestyle=':',
label='{0} (area = {1:0.2f})'
''.format((i+1)/10, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=1)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curves: SMOTE oversampled minority class', fontsize=14)
plt.legend(title='Class ratio after oversampling', loc="lower right")
    plt.savefig('ROC_oversampling.png')  # save before plt.show() so the saved figure is not blank
    plt.show()
plot_roc(fpr, tpr, roc_auc)
Explanation: For these manually selected parameters, the TNR is 0.38, the Matthews correlation coefficient is 0.21 and the precision/recall/F1 is in the 0.86 - 0.90 range. Selecting the best CV score (usually in the 0.90 range), on the other hand, would have given a TNR of 0 for all the scoring metrics I looked at.
<h4>The Pipeline -- Oversampling, classification and ROC computations </h4>
The imblearn package includes a pipeline module which allows one to chain transformers, resamplers and estimators. We compute the ROC curves for each of the oversampling ratios and corresponding hyperparameters C and gamma and for this we use the pipeline to oversample with SMOTE and classify with the SVM.
End of explanation |
10,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LVA Tables (Storage Geometry)
We can retrieve the geometry of a given storage.
We can also set the geometry of a storage (or set multiple storages to the same geometry)
Step1: Specifying full supply and initial conditions
Storage full supply and initial conditions can be specified by either Level or Volume.
A method exists for specifying Full Supply including whether it should be considered a Level or a Volume.
Initial conditions should then be set by either '
Step2: Releases
We can query outlet paths and individual release mechanisms.
We can also create and parameterise new mechanisms
Step3: We can create each type of release mechanism.
In each case we need to provide a table of level, minimum (release) and maximum (release). (units m, m^3.s^-1, m^3.s^-1, respectively) | Python Code:
lva = v.model.node.storages.lva('IrrigationOnlyStorage')
lva
scaled_lva = lva * 2
scaled_lva
# v.model.node.storages.load_lva(scaled_lva) # Would load the same table into ALL storages
# v.model.node.storages.load_lva(scaled_lva,nodes=['StorageOnlyStorage','BothStorage']) # Will load into two storages
v.model.node.storages.load_lva(scaled_lva,nodes='IrrigationOnlyStorage')
v.model.node.storages.lva('IrrigationOnlyStorage')
Explanation: LVA Tables (Storage Geometry)
We can retrieve the geometry of a given storage.
We can also set the geometry of a storage (or set multiple storages to the same geometry)
End of explanation
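For readers without a running Source/Veneer model to query, the lva object behaves like a small table of level/volume/area points, so scaling it is ordinary DataFrame arithmetic. A purely hypothetical stand-in (column names and numbers are illustrative assumptions only):
import pandas as pd
# hypothetical level (m) / volume (m^3) / area (m^2) table, for illustration only
example_lva = pd.DataFrame({'level':  [0.0, 2.0, 4.0, 6.0],
                            'volume': [0.0, 20000.0, 60000.0, 120000.0],
                            'area':   [0.0, 15000.0, 25000.0, 35000.0]})
example_lva * 2   # doubles every column, analogous to scaled_lva above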
v.model.node.storages.set_full_supply(95000,'Volume',nodes='IrrigationOnlyStorage')
v.model.node.storages.set_param_values('InitialVolume',50000,nodes='IrrigationOnlyStorage')
# OR
# v.model.node.storages.set_param_values('InitialStorageLevel',4.5,nodes='IrrigationOnlyStorage')
Explanation: Specifying full supply and initial conditions
Storage full supply and initial conditions can be specified by either Level or Volume.
A method exists for specifying Full Supply including whether it should be considered a Level or a Volume.
Initial conditions should then be set by either 'InitialVolume' or 'InitialStorageLevel' using set_param_values, as in the code above.
End of explanation
v.model.node.storages.outlets(nodes='IrrigationOnlyStorage')
v.model.node.storages.releases(nodes='IrrigationOnlyStorage',path=1)
v.model.node.storages.release_table('IrrigationOnlyStorage','Ungated Spillway #0')
Explanation: Releases
We can query outlet paths and individual release mechanisms.
We can also create and parameterise new mechanisms
End of explanation
release_table = [(0,0,0),(1,1,1),(5,10,12)]
release_table = pd.DataFrame(release_table,columns=['level','minimum','maximum'])
release_table
# To a particular outlet or a particular storage
#v.model.node.storages.add_valve(release_table,'IrrigationOnlyStorageOverflow',nodes='IrrigationOnlyStorage')
# Add the same valve to EVERY storage (on first outlet path)
#v.model.node.storages.add_valve(release_table)
# Optionally specify the name of the release
#v.model.node.storages.add_valve(release_table,name='my new valve')
Explanation: We can create each type of release mechanism.
In each case we need to provide a table of level, minimum (release) and maximum (release). (units m, m^3.s^-1, m^3.s^-1, respectively)
End of explanation |
10,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 5
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Create new features
As in Week 2, we consider features that are some transformations of inputs.
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
Step9: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Step10: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achive this goal
Step11: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values
Step12: Now, implement a loop that search through this space of possible l1_penalty values
Step13: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find
Step14: QUIZ QUESTIONS
What values did you find for l1_penalty_min andl1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found
Step15: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step16: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients? | Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
# non_zero_weight = model_all.get("coefficients")["value"]
non_zero_weight = model_all["coefficients"][model_all["coefficients"]["value"] > 0]
non_zero_weight.print_rows(num_rows=20)
Explanation: Find what features had non-zero weight.
End of explanation
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
import numpy as np
import pprint
validation_rss = {}
for l1_penalty in np.logspace(1, 7, num=13):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss
# pprint.pprint(result_dict)
print min(validation_rss.items(), key=lambda x: x[1])
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
model_test = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=10.0)
predictions_test = model_test.predict(testing)
residuals_test = testing['price'] - predictions_test
rss_test = sum(residuals_test**2)
print rss_test
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
End of explanation
non_zero_weight_test = model_test["coefficients"][model_test["coefficients"]["value"] > 0]
print model_test["coefficients"]["value"].nnz()
non_zero_weight_test.print_rows(num_rows=20)
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
max_nonzeros = 7
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achive this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
l1_penalty_values = np.logspace(8, 10, num=20)
print l1_penalty_values
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
coef_dict = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target ='price', features=all_features,
validation_set=None, verbose=None,
l2_penalty=0., l1_penalty=l1_penalty)
coef_dict[l1_penalty] = model['coefficients']['value'].nnz()
pprint.pprint(coef_dict)
Explanation: Now, implement a loop that search through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
l1_penalty_min = 2976351441.6313128
l1_penalty_max = 3792690190.7322536
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
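One programmatic way to read these bounds off the coef_dict computed earlier (a sketch for illustration; the hard-coded l1_penalty_min/l1_penalty_max above were presumably taken from the printed dictionary):
# largest penalty that still yields MORE than max_nonzeros non-zero weights
l1_min_check = max(p for p, nnz in coef_dict.items() if nnz > max_nonzeros)
# smallest penalty that yields FEWER than max_nonzeros non-zero weights
l1_max_check = min(p for p, nnz in coef_dict.items() if nnz < max_nonzeros)
print l1_min_check, l1_max_check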
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min andl1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
validation_rss = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss, model['coefficients']['value'].nnz()
bestRSS = float('inf')
bestl1 = None
for k,v in validation_rss.iteritems():
    if (v[1] == max_nonzeros) and (v[0] < bestRSS):
        bestRSS = v[0]
        bestl1 = k
print bestRSS, bestl1
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzero.
End of explanation
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=3448968612.16)
non_zero_weight_test = model["coefficients"][model["coefficients"]["value"] > 0]
non_zero_weight_test.print_rows(num_rows=8)
Explanation: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
End of explanation |
10,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'cams-csm1-0', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CAMS
Source ID: CAMS-CSM1-0
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
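For a BOOLEAN property the value is one of the Python literals listed above; an illustrative completion (the particular choice is an assumption, not documented in the source):
DOC.set_value(False)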
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution from which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
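An illustrative completion for a 1.1 ENUM property, picking one of the valid choices listed above (the choice is hypothetical):
DOC.set_value("Incremental Re-mapping")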
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
10,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CIFAR–10 Codealong
Codealong of Radek Osmulski's notebook establishing a CIFAR-10 baseline with the Fastai ImageNet WideResNet22. For a Fastai CV research collaboration.
Wayne Nixalo –– 2018/6/1
Step1: We construct the data object manually from low-level components in a way that can be used with the fastai library.
Step2: This is very similar to how Fastai initializes its ModelData, except Fastai uses the Pytorch default pin_memory=False.
What is the disadvantage of using pin_memory? –
Pytorch docs
Step3: WiP Fastai DAWN Bench submission
My copy of Radek's reimplementation of the FastAI DAWN Bench submission in terms of architecture and training parameters – from the imagenet-fast repo.
Step4: Fastai DAWN Bench submission w/ fastai dataloaders
I've been having trouble getting the pytorch dataloaders above to play nice
Step5: scrap
Step6:
Step7:
Step8:
Step9:
Step10:
Step11: TypeError | Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.conv_learner import *
# fastai/imagenet-fast/cifar10/models/ repo
from imagenet_fast_cifar_models.wideresnet import wrn_22
from torchvision import transforms, datasets
# allows you to enable the inbuilt cudnn auto-tuner to find the
# best algorithm for your hardware. https://discuss.pytorch.org/t/what-does-torch-backends-cudnn-benchmark-do/5936/2
torch.backends.cudnn.benchmark = True
PATH = Path("data/cifar10")
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159]))
## temp small dataset for fast iteration
# import cifar_utils
# PATH = cifar_utils.create_cifar_subset(PATH, copydirs=['train','test'], p=0.1)
PATH = Path("data/cifar10_tmp")
## Aside: making a small subset of the dataset for fast troubleshooting
## also bc idk how to marry pytorch dataloaders w/ csv's yet.
import cifar_utils
PATH = cifar_utils.create_cifar_subset(PATH, copydirs=['train','test'], p=0.1)
## Aside: checking the normalization transforms
tensor = T(np.ones((3,32,32)))
t1 = transforms.Normalize(stats[0], stats[1])(tensor)
t2 = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))(tensor)
np.unique(np.isclose(t1, t2))
Explanation: CIFAR–10 Codealong
Codealong of Radek Osmulski's notebook establishing a CIFAR-10 baseline with the Fastai ImageNet WideResNet22. For a Fastai CV research collaboration.
Wayne Nixalo –– 2018/6/1
End of explanation
def get_loaders(bs, num_workers):
traindir = str(PATH/'train')
valdir = str(PATH/'test')
tfms = [transforms.ToTensor(),
transforms.Normalize(*stats)]
#transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
aug_tfms = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
] + tfms)
train_dataset = datasets.ImageFolder(traindir, aug_tfms)
val_dataset = datasets.ImageFolder(valdir, transforms.Compose(tfms))
aug_dataset = datasets.ImageFolder(valdir, aug_tfms)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=False)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=False)
aug_loader = torch.utils.data.DataLoader(
aug_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=False)
## NOTE--didnt work: Want automated GPU/CPU handling so using fastai dataloaders
# train_loader = DataLoader(train_dataset, batch_size=bs, shuffle=True,
# num_workers=num_workers, pin_memory=True)
# val_loader = DataLoader(val_dataset, batch_size=bs, shuffle=False,
# num_workers=num_workers, pin_memory=True)
# aug_loader = DataLoader(aug_dataset, batch_size=bs, shuffle=False,
# num_workers=num_workers, pin_memory=True)
return train_loader, val_loader, aug_loader
Explanation: We construct the data object manually from low-level components in a way that can be used with the fastai library.
End of explanation
def get_data(bs, num_workers):
trn_dl, val_dl, aug_dl = get_loaders(bs, num_workers)
data = ModelData(PATH, trn_dl, val_dl)
data.aug_dl = aug_dl
data.sz = 32
return data
def get_learner(arch, bs):
learn = ConvLearner.from_model_data(arch.cuda(), get_data(bs, num_cpus()))
learn.crit = nn.CrossEntropyLoss()
learn.metrics = [accuracy]
return learn
def get_TTA_accuracy(learn):
preds, targs = learn.TTA()
# combining the predictions across augmented and non augmented inputs
preds = 0.6 * preds[0] + 0.4 * preds[1:].sum(0)
return accuracy_np(preds, targs)
Explanation: This is very similar to how Fastai initializes its ModelData, except Fastai uses the Pytorch default pin_memory=False.
What is the disadvantage of using pin_memory? –
Pytorch docs:
by pinning your batch in cpu memory, data transfer to gpu can be much faster
Soumith:
pinned memory is page-locked memory. It's easy to shoot yourself in the foot if you enable it for everything because it can't be pre-empted. ... if you're seeing system freeze or swap being used a lot, disable it.
End of explanation
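A minimal standalone sketch of the trade-off discussed above (dataset and variable names are made up for the illustration): pinned host memory enables an asynchronous copy to the GPU via non_blocking=True, at the cost of page-locked RAM that cannot be swapped out.
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
dl = DataLoader(ds, batch_size=64, num_workers=2, pin_memory=True)  # page-locked host buffers
for xb, yb in dl:
    if torch.cuda.is_available():
        xb = xb.cuda(non_blocking=True)  # async host-to-device copy, possible because the batch is pinned
        yb = yb.cuda(non_blocking=True)
    break  # one batch is enough for the illustration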
# learner = get_learner(wrn_22(), 512)
# learner.lr_find(wds=1e-4)
# learner.sched.plot(n_skip_end=1)
learner = get_learner(wrn_22(), 16)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = get_learner(wrn_22(), 16)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = get_learner(wrn_22(), 16)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = get_learner(wrn_22(), 8)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = get_learner(wrn_22(), 8)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
Explanation: WiP Fastai DAWN Bench submission
My copy of Radek's reimplementation of the FastAI DAWN Bench submission in terms of architecture and training parameters – from the imagenet-fast repo.
End of explanation
bs=8; sz=32; fulldata=False
aug_tfms = [RandomFlip(), RandomCrop(32)] # hopefully this is the same as aug_tfms above
tfms = tfms_from_stats(stats, sz=32, aug_tfms=aug_tfms, pad=4)
if not fulldata:
# quick prototyping csv (10% dataset)
val_idxs = get_cv_idxs(n=pd.read_csv(PATH/'tmp.csv').count()[0])
model_data = ImageClassifierData.from_csv(
PATH, 'train', PATH/'tmp.csv', bs=bs, tfms=tfms, val_idxs=val_idxs)
else:
# full dataset
model_data = ImageClassifierData.from_paths(
PATH, bs=bs, tfms=tfms, trn_name='train',val_name='test')
learner = ConvLearner.from_model_data(wrn_22(), model_data)
learner.crit = nn.CrossEntropyLoss(); learner.metrics = [accuracy]
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = ConvLearner.from_model_data(wrn_22(), model_data)
learner.crit = nn.CrossEntropyLoss(); learner.metrics = [accuracy]
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
learner = ConvLearner.from_model_data(wrn_22(), model_data)
learner.crit = nn.CrossEntropyLoss(); learner.metrics = [accuracy]
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
Explanation: Fastai DAWN Bench submission w/ fastai dataloaders
I've been having trouble getting the pytorch dataloaders above to play nice: for some reason they won't return cuda tensors, only cpu float tensors... but the model is waiting to run on data on the gpu...
I don't want to manually build fastai dataloaders either – since in that case they require a validation split and more low-level thinking that'll distract a lot from wider issues; so I'm using the automated fastai way.
End of explanation
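One possible workaround for the CPU/GPU mismatch described above (a sketch under my own assumptions, not the approach taken in this notebook): wrap the plain PyTorch loaders so that every batch is moved to the GPU before it reaches the model.
class CudaLoader:
    # Thin wrapper that ships each (x, y) batch to the GPU as it is iterated.
    def __init__(self, dl): self.dl = dl
    def __len__(self): return len(self.dl)
    def __iter__(self):
        for x, y in self.dl:
            yield x.cuda(), y.cuda()

# hypothetical usage with the loaders defined earlier:
# trn_dl, val_dl, aug_dl = [CudaLoader(d) for d in get_loaders(bs, num_cpus())]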
DataLoader.__init__
torch.utils.data.DataLoader.__init__
def get_loaders(bs, num_workers):
traindir = str(PATH/'train')
valdir = str(PATH/'test')
tfms = [transforms.ToTensor(),
transforms.Normalize(stats[0], stats[1])]
#transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
aug_tfms = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
] + tfms)
train_dataset = datasets.ImageFolder(traindir, aug_tfms)
val_dataset = datasets.ImageFolder(valdir, transforms.Compose(tfms))
aug_dataset = datasets.ImageFolder(valdir, aug_tfms)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=False)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=False)
aug_loader = torch.utils.data.DataLoader(
aug_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=False)
## Want automated GPU/CPU handling so using fastai dataloaders
# train_loader = DataLoader(train_dataset, batch_size=bs, shuffle=True,
# num_workers=num_workers, pin_memory=True)
# val_loader = DataLoader(val_dataset, batch_size=bs, shuffle=False,
# num_workers=num_workers, pin_memory=True)
# aug_loader = DataLoader(aug_dataset, batch_size=bs, shuffle=False,
# num_workers=num_workers, pin_memory=True)
return train_loader, val_loader, aug_loader
bs=64; sz=32; num_workers=1
traindir = str(PATH/'train')
valdir = str(PATH/'test')
tfms = [transforms.ToTensor(),
transforms.Normalize(stats[0], stats[1])]
#transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]
aug_tfms = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
] + tfms)
train_dataset = datasets.ImageFolder(traindir, aug_tfms)
val_dataset = datasets.ImageFolder(valdir, transforms.Compose(tfms))
aug_dataset = datasets.ImageFolder(valdir, aug_tfms)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=True)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True)
aug_loader = torch.utils.data.DataLoader(
aug_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True)
train_loader.cuda()
Explanation: scrap
End of explanation
x,y = next(iter(train_loader))
x[0].type()
type(y[0])
tfms = tfms_from_stats(stats, sz=32)
md = ImageClassifierData.from_csv(PATH, 'train', PATH/'tmp.csv', tfms=tfms)
next(iter(md.trn_dl))[0].type()
md.trn_ds.
train_dataset.
train_loader,_,_ = get_loaders(bs=64, num_workers=1)
next(iter(train_loader))[0].type()
Explanation:
End of explanation
## Aside: making a small subset of the dataset for fast troubleshooting
## also bc idk how to marry pytorch dataloaders w/ csv's yet.
import cifar_utils
PATH = cifar_utils.create_cifar_subset(PATH, copydirs=['train','test'], p=0.1)
# df.head()
import cifar_utils
df = cifar_utils.generate_csv(PATH)
df.to_csv(PATH/'tmp.csv', index=False)
df = pd.read_csv(PATH/'tmp.csv', index_col=0, header=0, dtype=str)
fnames = df.index.values
# fnames = df.iloc[:,0].values
df.iloc[:,0] = df.iloc[:,0].str.split(' ')
fnames,csv_labels= sorted(fnames), list(df.to_dict().values())[0]
fnames[:10], list(csv_labels.items())[:10]
Explanation:
End of explanation
learner = get_learner(wrn_22(), 512)
x,y = next(iter(learner.data.trn_dl))
type(x[0]), type(y[0])
Explanation:
End of explanation
next(iter(learner.data.val_dl))[0].type(), next(iter(learner.data.trn_dl))[0].type()
Explanation:
End of explanation
tfms = tfms_from_stats(stats, sz=32)
md = ImageClassifierData.from_csv(PATH, 'train', PATH/'tmp.csv', tfms=tfms)
next(iter(md.trn_dl))[0].type()
learner = get_learner(wrn_22(), 512)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
Explanation:
End of explanation
learner = get_learner(wrn_22(), 512)
learner.lr_find(wds=1e-4)
learner.sched.plot(n_skip_end=1)
Explanation: TypeError: eq received an invalid combination of arguments - got (torch.LongTensor), but expected one of:
* (int value)
didn't match because some of the arguments have invalid types: (torch.LongTensor)
* (torch.cuda.LongTensor other)
didn't match because some of the arguments have invalid types: (torch.LongTensor)
End of explanation |
10,292 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Shap Summary Plots
| Python Code::
import shap
shap.initjs()
shap_values = shap.TreeExplainer(model).shap_values(X_train)
shap.summary_plot(shap_values, X_train, plot_type="bar")
shap.summary_plot(shap_values, X_train)
|
10,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is duck typing, and it is everywhere even if you did not notice it
Duck typing
Step1: EAFP | Python Code:
# Compare
def let_duck_swim_and_quack(d):
if hasattr(d, "swim") and hasattr(d, "quack"):
d.swim()
d.quack()
else:
print "It does not look like a duck"
raise AttributeError
def let_duck_swim_and_quack(d):
try:
d.swim()
d.quack()
except AttributeError:
print "It does not look like a duck"
raise
Explanation: This is duck typing, and it is everywhere even if you did not notice it
Duck typing:
When I see a bird that walks like a duck and swims like a duck and quacks
like a duck, I call that bird a duck.
In other words
Duck typing:
You have to worry and care about the methods and attributes of the object you use,
rather than about its exact type
You make your code more extendable, portable, reusable, maintainable...
It requires testing, ofc
Typical approach: treat your system as a black box and only check inputs and outputs
Here we also applied another Pythonic principle
EAFP:
It is Easier to Ask for Forgiveness than Permission
End of explanation
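A tiny black-box check in the spirit of the text above (FakeDuck is made up for the test): we only look at whether the object exposes swim and quack, never at its type.
class FakeDuck(object):
    def swim(self): print "swimming"
    def quack(self): print "quack"

let_duck_swim_and_quack(FakeDuck())      # accepted: it has the right methods
try:
    let_duck_swim_and_quack("not a duck")
except AttributeError:
    print "rejected, as expected"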
# However... there is always THAT case
def get_three_keys_value(d):
it = iter(d)
try:
key1 = it.next()
key2 = it.next()
key3 = it.next()
return d[key1], d[key2], d[key3]
except (StopIteration, KeyError, IndexError):
return None
eggs = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four"}
print get_three_keys_value(eggs)
spam = [0, 1, 2, 3, 4]
print get_three_keys_value(spam) # Uuuups
def multi_upper(texts):
return map(str.upper, texts)
spam = ['zero', 'one', 'two', 'three', 'four']
print multi_upper(spam)
eggs = "eggs"
print multi_upper(eggs) # Uuuups
# Sadly, in some cases you may need type checking
def multi_upper(texts):
if isinstance(texts, basestring): # basestring is the superclass of str and unicode
texts = [texts]
return map(str.upper, texts)
print multi_upper(spam)
print multi_upper(eggs)
def get_three_keys_value(d):
if isinstance(d, (tuple, list)): # You can provide several types inside a tuple
return None
it = iter(d)
try:
key1 = it.next()
key2 = it.next()
key3 = it.next()
return d[key1], d[key2], d[key3]
except (StopIteration, KeyError, IndexError):
return None
eggs = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four"}
print get_three_keys_value(eggs)
spam = [0, 1, 2, 3, 4]
print get_three_keys_value(spam)
print isinstance(advice_mallard, Duck) # You can provide also classes instead of types
Explanation: EAFP:
- In positive cases try / except is faster than if / else
- More understandable
- You don't have to know all cases to check
End of explanation |
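A quick way to check the first claim above (a sketch; the exact numbers depend on the machine): time the happy path of both styles with timeit.
import timeit
setup = """
d = {'a': 1}
def lbyl():
    if 'a' in d: return d['a']
def eafp():
    try: return d['a']
    except KeyError: pass
"""
print(timeit.timeit("lbyl()", setup=setup))  # look-before-you-leap
print(timeit.timeit("eafp()", setup=setup))  # try/except, no extra membership test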
10,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Configure the plotting machinery
Step1: Configure the rendering of this notebook with CSS
Step2: Response Analysis in the Frequency Domain<br> <small>an Example</small>
Samples
Step3: We want to replicate the solution obtained for a triangular load using the Duhamel Integral.
Our load is 3s long, but we need a stretch of zeros to damp out the response and simulate rest initial conditions. We add zeros to the end of the load up to
a total duration of 8s. The period of our loading is hence, rather arbitrarily,
$T=8\,{}$s.
In our exercise, we are free to choose the number of samples per second,
so we chose 512 sps.
How many samples are there? $N = 8\times512=4096$, note that $N$ is a power of 2.
Step4: Load definition
The array t contains the times at which our signal was sampled, the load p is computed using the library function where, syntactically very similar to IF in a spreadsheet
Step5: Am I sure that the list p contains the values of the loading?
Let's try to plot p vs t...
Step6: It looks OK...
FFT of the loading
Now, the fast Fourier transform of the sequence p is computed, and given a name, P.
It is customary to denote Fourier pairs by the same letter, the small letter for the time domain representation and the capital letter for the frequency domain representation.
Step7: I have computed also the inverse FFT of the FFT of the loading, naming it iP, it is a sequence of complex numbers and here we plot the real and the imaginary part of each component versus time.
Step8: It seems OK...
Next, we use a convenience function to compute a sequence of frequencies (in Hertz!) associated with the components of P, the FFT of p. The parameters are the number of points and the sampling interval..
Note that the sequence of frequencies has a discontinuity when the Nyquist frequency
is reached, i.e., the next frequency is the most negative one.
Step9: Plots of P, the FFT of p
The x axis is streching over the interval $-f_\text{Ny}$, $+f_\text{Ny}$
Step10: The plot above is not much clear, because the frequency components are significantly different from zero only in a narrow range of frequencies around the origin.
In the next 3 plots we zoom near the origin of the frequency axis to have a bit more of detail. There are 3 plots, first the absolute value of P vs f, then the real part and finally the imaginary part of P, versus f.
Step11: Not afwully nice, this last plot...the baseline and the missing line
at the left of the zero are artifacts, due do the particular sequence
with which the positive and negative frequencies are arranged in the DFT output.
To obviate these problems we can use the function fftshift, that reorders (shifts)
the elements in an array such that the sequence goes from the most negative
frequency to the most positive.
Step12: and now the other two plots I promised,
Step13: The response function
Until now, we did without the SDOF, now it's time to describe it and derive its response function.
All the parameters are the same as in the excel example, we compute k because we need it to normalize the response.
Step14: As usual, we plot the response function, or rather the absolute value of, against a short span of the frequency axis, centered about the origin, to show the details of the response function itself.
Step15: Computing the response
The FFT of the response is computed multiplying, term by term, P by the response or transfer function, then we compute the IFFT of X to obtain x, the time domain representation of the response domain.
Step16: Note that the response function is periodic with period $T=8\,{}$s.
In the end, we remain with the task of plotting the response function, that is the real part of x. Just to be certain we plot also the imaginary part of x, so we can be sure that it is negligible with respect to the real part
Step17: The zero trail
The importance of the zero trail to adjust for initial rest condition cannot be
underestimated. The length required depends, of course, on how much damped our system is,
the lesser the damping, the longer the time required to damp out the response.
Lets try to see what happens if we go from $\zeta=0.10$ to $\zeta=0.01$ | Python Code:
%pylab inline
%config InlineBackend.figure_format = 'svg'
import json
s = json.load( open("mplrc.json") )
matplotlib.rcParams.update(s)
matplotlib.rcParams['figure.figsize'] = 9,4
black="#404060" # plots containing "real black" elements look artificial
Explanation: Configure the plotting machinery
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("custom.css", "r").read()
return HTML(styles)
# css_styling()
Explanation: Configure the rendering of this notebook with CSS
End of explanation
figsize(9.0,1.1);
subplot(1,2,1);plot((0,1,3),(0,40,0));xticks((0,1,3)); yticks((0,40))
xlabel('t/s');ylabel('p(t)/kN');grid();
subplot(1,2,2);plot((0,1,3,8),(0,40,0,0));xticks((0,1,3,8)); yticks((0,40))
xlabel('t/s');ylabel('p(t)/kN');grid();
Explanation: Response Analysis in the Frequency Domain<br> <small>an Example</small>
Samples
End of explanation
T = 8
sps = 512
print("The Nyquist frequency is ", sps/2.0,"Hz")
Explanation: We want to replicate the solution obtained for a triangular load using the Duhamel Integral.
Our load is 3s long, but we need a stretch of zeros to damp out the response and simulate rest initial conditions. We add zeros to the end of the load up to
a total duration of 8s. The period of our loading is hence, rather arbitrarily,
$T=8\,{}$s.
In our exercise, we are free to choose the number of samples per second,
so we chose 512 sps.
How many samples are there? $N = 8\times512=4096$, note that $N$ is a power of 2.
End of explanation
t = arange(0,T,1./sps)
p = where(t>3, 0, where(t<1, t*40000, 40000*(3-t)/2))
Explanation: Load definition
The array t contains the times at which our signal was sampled, the load p is computed using the library function where, syntactically very similar to IF in a spreadsheet
End of explanation
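A tiny standalone illustration of the spreadsheet-IF analogy above (values are arbitrary; plain numpy is used instead of the pylab namespace):
import numpy as np
x = np.array([-2.0, 0.5, 1.5, 4.0])
print(np.where(x > 1, 10 * x, 0))   # like IF(x>1; 10*x; 0) applied element by element -> [ 0.  0. 15. 40.]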
matplotlib.rcParams['figure.figsize'] = 9,4
plot(t, p, black) ; xlabel("t/s") ; ylabel("p(t)/N")
ylim((-5000,45000))
Explanation: Am I sure that the list p contains the values of the loading?
Let's try to plot p vs t...
End of explanation
P = fft.fft(p)
iP = fft.ifft(P)
Explanation: It looks OK...
FFT of the loading
Now, the fast Fourier transform of the sequence p is computed, and given a name, P.
It is customary to denote Fourier pairs by the same letter, the small letter for the time domain representation and the capital letter for the frequency domain representation.
End of explanation
plot(t,real(iP),black,t,1*imag(iP),'y') ; xlabel("t/s") ; ylabel("p(t)/N") ;
Explanation: I have computed also the inverse FFT of the FFT of the loading, naming it iP, it is a sequence of complex numbers and here we plot the real and the imaginary part of each component versus time.
End of explanation
f = fft.fftfreq(T*sps, 1./sps)
f, f[2046:2051]
Explanation: It seems OK...
Next, we use a convenience function to compute a sequence of frequencies (in Hertz!) associated with the components of P, the FFT of p. The parameters are the number of points and the sampling interval..
Note that the sequence of frequencies has a discontinuity when the Nyquist frequency
is reached, i.e., the next frequency is the most negative one.
End of explanation
plot(f,real(P), black, f,imag(P),'b')
xlim(-256,256) ; xticks(range(-256,257,64))
xlabel("f/Hz") ; ylabel("P(f)") ;
Explanation: Plots of P, the FFT of p
The x axis is streching over the interval $-f_\text{Ny}$, $+f_\text{Ny}$
End of explanation
plot(f,abs(P), black)
xticks(range(-4,5,2))
xlabel("f/Hz") ; ylabel("abs(P(f))")
xlim(-4,4) ; ylim(-0.2E7,3.3E7);
Explanation: The plot above is not much clear, because the frequency components are significantly different from zero only in a narrow range of frequencies around the origin.
In the next 3 plots we zoom near the origin of the frequency axis to have a bit more of detail. There are 3 plots, first the absolute value of P vs f, then the real part and finally the imaginary part of P, versus f.
End of explanation
plot(fftshift(f),fftshift(abs(P)), black)
axhline(0,color=black, linewidth=0.25)
xticks(range(-4,5,2))
xlabel("f/Hz") ; ylabel("abs(P(f))")
xlim(-4,4) ; ylim(-0.2E7,3.3E7);
Explanation: Not awfully nice, this last plot... the baseline and the missing line
at the left of the zero are artifacts, due to the particular sequence
with which the positive and negative frequencies are arranged in the DFT output.
To obviate these problems we can use the function fftshift, that reorders (shifts)
the elements in an array such that the sequence goes from the most negative
frequency to the most positive.
End of explanation
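A minimal standalone example of what fftshift does to the frequency ordering (same numpy.fft conventions as above, small N chosen for readability):
import numpy as np
f = np.fft.fftfreq(8, d=1.0)       # [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
print(np.fft.fftshift(f))          # [-0.5   -0.375 -0.25  -0.125  0.     0.125  0.25   0.375]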
plot(fftshift(f),fftshift(real(P)), black)
axhline(0,color=black, linewidth=0.25)
xticks(range(-4,5,2))
xlabel("f/Hz") ; ylabel("real(P(f))")
xlim(-4,4) ; ylim(-3.3E7,3.3E7);
plot(fftshift(f),fftshift(imag(P)), black)
axhline(0,color=black, linewidth=0.25)
xticks(range(-4,5,2))
xlabel("f/Hz") ; ylabel("imag(P(f))")
xlim(-4,4) ; ylim(-3.3E7,3.3E7);
Explanation: and now the other two plots I promised,
End of explanation
z = 0.1; fn = 1/0.6 ; m =6E5 ; wn = fn*2*pi ; k = m*wn**2
def H(f):
b = f/fn
return 1./((1-b*b)+1j*(2*z*b))
Explanation: The response function
Until now, we did without the SDOF, now it's time to describe it and derive its response function.
All the parameters are the same as in the excel example, we compute k because we need it to normalize the response.
End of explanation
plot(fftshift(f),fftshift(abs(H(f)))) ; xlabel("f/Hz") ; xlim(-8,8) ; ylabel("H(f)") ;
Explanation: As usual, we plot the response function, or rather the absolute value of, against a short span of the frequency axis, centered about the origin, to show the details of the response function itself.
End of explanation
X = [ P_*H(f_) for P_, f_ in zip(P,f)]
x = ifft(X)/k
Explanation: Computing the response
The FFT of the response is computed multiplying, term by term, P by the response or transfer function, then we compute the IFFT of X to obtain x, the time domain representation of the response domain.
End of explanation
plot(t,1000*real(x))
xlabel("t/s"); ylabel(r"$\Re(x)/$mm");
axhline(0,color=black, linewidth=0.25)
show()
plot(t,1E18*imag(x),linewidth=0.33)
axhline(0,color=black, linewidth=0.25)
xlabel("t/s"); ylabel(r"$\Im(x)/$am");
show()
Explanation: Note that the response function is periodic with period $T=8\,{}$s.
In the end, we remain with the task of plotting the response function, that is the real part of x. Just to be certain we plot also the imaginary part of x, so we can be sure that it is negligible with respect to the real part
End of explanation
z = 0.01
X = [ P_*H(f_) for P_, f_ in zip(P,f)]
x = ifft(X)/k
plot(t,1000*real(x))
xlabel("t/s"); ylabel(r"$\Re(x)/$mm");
axhline(0,color=black, linewidth=0.25)
show()
Explanation: The zero trail
The importance of the zero trail in enforcing the initial rest condition cannot be
overestimated. The length required depends, of course, on how heavily damped our system is:
the lighter the damping, the longer the time required to damp out the response.
Let's try to see what happens if we go from $\zeta=0.10$ to $\zeta=0.01$:
End of explanation |
10,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Математическая статистика
Практическое задание 5
В данном задании предлагается провести некоторое исследование модели линейной регрессии и критериев для проверки статистических гипотез, в частности применить этим модели к реальным данным.
Правила
Step1: 1. Линейная регрессия
Задача 1. По шаблону напишите класс, реализующий линейную регрессию. Интерфейс этого класса в некоторой степени соответствует классу <a href="http
Step2: Загрузите данные о потреблении мороженного в зависимости от температуры воздуха и цены (файл ice_cream.txt).
Примените реализованный выше класс линейной регрессии к этим данным, предполагая, что модель имеет вид $ic = \theta_1 + \theta_2\ t$, где $t$ --- температура воздуха (столбец temp), $ic$ --- потребление мороженного в литрах на человека (столбец IC).
Значения температуры предварительно переведите из Фаренгейта в Цельсий [(Фаренгейт — 32) / 1,8 = Цельсий].
К обученной модели примените фунцию summary и постройте график регрессии, то есть график прямой $ic = \widehat{\theta}_1 + \widehat{\theta}_2\ t$, где $\widehat{\theta}_1, \widehat{\theta}_2$ --- МНК-оценки коэффициентов.
На график нанесите точки выборки.
Убедитесь, что построейнный график совпадает с графиком из презентации с первой лекции, правда, с точностью до значений температура (она была неправильно переведена из Фаренгейта в Цельсий).
Step3: Вывод. Действительно, график тот же (с поправкой на пересчет температуры). Линейная регрессия неплохо, но не идеально приближает зависимость потребления мороженого в зависимости от температуры. Для более точного вывода стоит посчитать доверительный интервал.
Теперь учтите влияние года (столбец Year) для двух случаев
Step4: Вывод. В разрезе разных лет также можно уследить, что линейная зависимсоть неплохо приближает реальную. Номер года нельзя брать как признак, так это больше характеризация класса каких-то значений, в то время линейная регрессия работает со значениями, от которых зависимость линейная, численная
Второй случай.
Step5: Вывод. При разделении выборки на три части результаты в какой-то степени стали лучше. Кроме того, такое разделение позволяет заметить, что с при увеличении года в целом потребляют больше мороженого, и что прирост литров мороженого на один градус возрастает при увеличении года.
Наконец, обучите модель на предсказание потребления мороженного в зависимости от всех переменных.
Не забудьте, что для года нужно ввести две переменных.
Для полученной модели выведите summary.
Step6: Вывод. Похоже, не все признаки стоит учитывать. Некоторые значения по модулю очень маленькие по сравнению со значениями, характерными для соответствующих признаков, что их лучше не учитывать. Например, это номер измерения (theta_2) или температура в будущем месяце.(theta_5).
Но это еще не все.
Постройте теперь линейную регрессию для модели $ic = \theta_1 + \theta_2\ t + \theta_3\ t^2 + \theta_4\ t^3$.
Выведите для нее summary и постройте график предсказания, то есть график кривой $ic = \widehat{\theta}_1 + \widehat{\theta}_2\ t + \widehat{\theta}_3\ t^2 + \widehat{\theta}_4\ t^3$. Хорошие ли получаются результаты?
Step7: Вывод. Результаты выглядят более естественно, однако теперь кажется, что при 30-40 градусах люди потребляют невероятно много мороженого, что, скорее всего, неправда (все-таки есть какой-то лимит). Кроме того, значения последних двух параметров очень малы, что говорит о малом влиянии этих параметров при малых температурах. Возможно, линейная модель и так достаточно хороша. Однако в нашем случае будет видно, что при отрицательных температурах потребление мороженого падает стремительно. Наверное, в этом есть некоторая правда.
Чтобы понять, почему так происходит, выведите значения матрицы $(X^T X)^{-1}$ для данной матрицы и посчитайте для нее индекс обусловленности $\sqrt{\left.\lambda_{max}\right/\lambda_{min}}$, где $\lambda_{max}, \lambda_{min}$ --- максимальный и минимальный собственные значения матрицы $X^T X$. Собственные значения можно посчитать функцией <a href="https
Step8: Вывод. Как говорилось на семинаре, высокий индекс обусловленности (больше 30) говорит о мультиколлинеарности, которая ведет к переобучению. Видимо, мы перестарались.
Задача 2. В данной задаче нужно реализовать функцию отбора признаков для линейной регрессии. Иначе говоря, пусть есть модель $y = \theta_1 x_1 + ... + \theta_k x_k$. Нужно определить, какие $\theta_j$ нужно положить равными нулю, чтобы качество полученной модели было максимальным.
Для этого имеющиеся данные нужно случайно разделить на две части --- обучение и тест (train и test). На первой части нужно обучить модель регресии, взяв некоторые из признаков, то есть рассмотреть модель $y = \theta_{j_1} x_{j_1} + ... + \theta_{j_s} x_{j_s}$. По второй части нужно посчитать ее качество --- среднеквадратичное отклонение (mean squared error) предсказания от истинного значения отклика, то есть величину
$$MSE = \sum\limits_{i \in test} \left(\widehat{y}(x_i) - Y_i\right)^2,$$
где $x_i = (x_{i,1}, ..., x_{i,k})$, $Y_i$ --- отклик на объекте $x_i$, а $\widehat{y}(x)$ --- оценка отклика на объекте $x$.
Если $k$ невелико, то подобным образом можно перебрать все поднаборы признаков и выбрать наилучший по значению MSE.
Для выполнения задания воспользуйтесь следующими функциями
Step9: Примените реализованный отбор признаков к датасетам
* <a href="http
Step10: Вывод. Признак 5 (Froude number) наиболее полезный признак, остальные встречаются как-то рандомно (кроме 4, который почти не встречается, это Length-beam ratio).
Boston Housing Prices.
Step11: Вывод. Первые 3 и последние 3 признака самые полезные.
Задача 3<font size="5" color="red">*</font>. Загрузите <a href="http
Step12: Вывод. При увеличении размера выборки площадь доверительной области (она, очевидно, измерима даже по Жордану) сильно уменьшается. Это говорит о том, что чем больше значений получено, тем точнее с заданным уровнем доверия можно оценить параметр.
Задача 5<font size="5" color="red">*</font>.
Пусть дана линейная гауссовская модель $Y = X\theta + \varepsilon$, где $\varepsilon \sim \mathcal{N}(0, \beta^{-1}I_n)$.
Пусть $\theta$ имеет априорное распределение $\mathcal{N}(0, \alpha^{-1}I_k)$.
Такая постановка задачи соответствует Ridge-регрессии.
Оценкой параметров будет математическое ожидание по апостериорному распределению, аналогично можно получить доверительный интервал.
Кроме того, с помощью апостериорного распределения можно получить доверительный интервал для отклика на новом объекте, а не только точечную оценку.
Реализуйте класс RidgeRegression подобно классу LinearRegression, но добавьте в него так же возможность получения доверительного интервала для отклика на новом объекте.
Примените модель к некоторых датасетам, которые рассматривались в предыдущих задачах.
Нарисуйте графики оценки отклика на новом объекте и доверительные интервалы для него.
2. Проверка статистических гипотез
Задача 6.
Существует примета, что если перед вам дорогу перебегает черный кот, то скоро случится неудача.
Вы же уже достаточно хорошо знаете статистику и хотите проверить данную примету.
Сформулируем задачу на математическом языке.
Пусть $X_1, ..., X_n \sim Bern(p)$ --- проведенные наблюдения, где $X_i = 1$, если в $i$-м испытании случилась неудача после того, как черный кот перебежал дорогу, а $p$ --- неизвестная вероятность такого события.
Нужно проверить гипотезу $H_0
Step13: Вывод. Если $t/n$ сильно больше 0.5 (скажем, 0.67), то p-value выходит меньше 0.05, и гипотеза отвергается. Критерий работает.
Для каких истинных значений $p$ с точки зрения практики можно считать, что связь между черным котом и неудачей есть?
Теперь сгенерируйте 10 выборок для двух случаев
Step14: Вывод. Выходит, почти всегда на малых выборках даже при большом $p$ мы гипотезу не отвергнем, а при больших даже при $p$, близких к 0.5, гипотеза будет отвергнута. Похоже на ошибки II и I рода соответственно.
Возникает задача подбора оптимального размера выборки.
Для этого сначала зафиксируйте значение $p^* > 1/2$, которое будет обладать следующим свойством.
Если истинное $p > p^*$, то такое отклонение от $1/2$ с практической точки зрения признается существенным, то есть действительно чаще случается неудача после того, как черный кот перебегает дорогу.
В противном случае отклонение с практической точки зрения признается несущественным.
Теперь для некоторых $n$ постройте графики функции мощности критерия при $1/2 < p < 1$ и уровне значимости 0.05.
Выберите такое $n^*$, для которого функция мощности дает значение 0.8 при $p^*$.
Для выбранного $n^*$ проведите эксперимент, аналогичный проведенным ранее экспериментам, сгенерировав выборки для следующих истинных значений $p$
Step15: Вывод. Оптимальный в некотором смысле размер выборки получается примерно 25–30, на пересечении $\theta=p^*=0.75$ и $\beta(\theta, X)=\beta=0.8$. Выберем 27.
Step16: Вывод. При выбранном значении $p^*$ и подобранном оптимальном размере выборки, если $p < p^*$, то гипотеза не отвергается почти всегда, а при $p > p^*$ гипотеза отвергается почти всегда.
Справка для выполнения следующих задач
Критерий согласия хи-квадрат
<a href=https
Step17: Вывод. Выборка согласуется с распределением Вейбулла (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $\mathcal{N}(\theta, 1)$. Известно, что $\overline{X}$ является асимптотически нормальной оценкой параметра $\theta$. Вам нужно убедиться в этом, сгенерировав множество выборок и посчитав по каждой из них оценку параметра в зависимости от размера выборки.
Сгенерируйте 200 выборок $X_1^j, ..., X_{300}^j$ из распределения $\mathcal{N}(0, 1)$. По каждой из них посчитайте оценки $\widehat{\theta}_{jn} = \frac{1}{n}\sum\limits_{i=1}^n X_i^j$ для $1 \leqslant n \leqslant 300$, то есть оценку параметра по первым $n$ наблюдениям $j$-й выборки. Для этой оценки посчитайте статистику $T_{jn} = \sqrt{n} \left( \widehat{\theta}_{jn} - \theta \right)$, где $\theta = 0$.
Step18: Вывод. Выборка согласуется со стандартным нормальным распределением (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $Pois(\theta)$. Известно, что $\overline{X}$ является асимптотически нормальной оценкой параметра $\theta$.
Step19: Вывод. Выборка согласуется со стандартным нормальным распределением (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $U[0, \theta]$. Из домашнего задания известно, что $n\left(\theta - X_{(n)}\right) \stackrel{d_\theta}{\longrightarrow} Exp\left(1/\theta\right)$.
Step20: Вывод. Выборка согласуется со экспоненциальным распределением с параметром 1 (так как pvalue больше 0.05, гипотеза не отвергается).
Мороженое.
Step21: Вывод. Распределение согласуется с нормальным.
Выбирал всех своих друзей в ВК, к которым удалось получить доступ. Это подсчитывалось в C# через HTTP-запросы. Зато прошедшее время у меня появились новые друзья, я их учитывать не буду. У моих друзей тоже появились новые друзья, они тоже не посчитаны.
Step22: В первой практике я говорил, что не могу сделать вывод о распределении на основании такого графика. Может быть, сейчас получится. Выдвинем гипотезу, что распределение числа друзей моих друзей есть распределение Рэлея с параметром масштаба 180. Почему нет, имеем право. | Python Code:
import numpy as np
import scipy.stats as sps
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
Explanation: Математическая статистика
Практическое задание 5
В данном задании предлагается провести некоторое исследование модели линейной регрессии и критериев для проверки статистических гипотез, в частности применить этим модели к реальным данным.
Правила:
Выполненную работу нужно отправить на почту [email protected], указав тему письма "[номер группы] Фамилия Имя - Задание 5". Квадратные скобки обязательны. Вместо Фамилия Имя нужно подставить свои фамилию и имя.
Прислать нужно ноутбук и его pdf-версию. Названия файлов должны быть такими: 5.N.ipynb и 5.N.pdf, где N - ваш номер из таблицы с оценками.
Никакой код из данного задания при проверке запускаться не будет.
Некоторые задачи отмечены символом <font size="5" color="red">*</font>. Эти задачи являются дополнительными. Успешное выполнение большей части таких задач (за все задания) является необходимым условием получения бонусного балла за практическую часть курса.
Баллы за каждую задачу указаны далее. Если сумма баллов за задание меньше 25% (без учета доп. задач), то все задание оценивается в 0 баллов.
Баллы за задание:
Задача 1 - 7 баллов
Задача 2 - 2 балла
Задача 3<font size="5" color="red">*</font> - 3 балла
Задача 4 - 2 балла
Задача 5<font size="5" color="red">*</font> - 10 баллов
Задача 6 - 5 баллов
Задача 7 - 4 балла
Задача 8<font size="5" color="red">*</font> - 4 балла
Задача 9<font size="5" color="red">*</font> - 10 баллов
End of explanation
from scipy.linalg import inv
from numpy.linalg import norm
class LinearRegression:
def __init__(self):
super()
def fit(self, X, Y, alpha=0.95):
''' Обучение модели. Предполагается модель Y = X * theta + epsilon,
где X --- регрессор, Y --- отклик,
а epsilon имеет нормальное распределение с параметрами (0, sigma^2 * I_n).
alpha --- уровень доверия для доверительного интервала.
'''
# Размер выборки и число признаков
self.n, self.k = X.shape
# Оценки на параметры
self.theta = inv(X.T @ X) @ X.T @ Y
self.sigma_sq = norm(Y - X @ self.theta) ** 2 / (self.n - self.k)
# Считаем доверительные интервалы
l_quant = sps.t.ppf((1 - alpha) / 2, df=self.n - self.k)
r_quant = sps.t.ppf((1 + alpha) / 2, df=self.n - self.k)
diag = inv(X.T @ X).diagonal()
coeff = np.sqrt(self.sigma_sq * diag)
self.conf_int = np.array([self.theta + l_quant * coeff, self.theta + r_quant * coeff]).T
return self
def summary(self):
print('Linear regression on %d features and %d examples' % (self.k, self.n))
print('Sigma: %.6f' % self.sigma_sq)
print('\t\tLower\t\tEstimation\tUpper')
for j in range(self.k):
print('theta_%d:\t%.6f\t%.6f\t%.6f' % (j, self.conf_int[j, 0],
self.theta[j], self.conf_int[j, 1]))
def predict(self, X):
''' Возвращает предсказание отклика на новых объектах X. '''
Y_pred = X @ self.theta
return Y_pred
Explanation: 1. Линейная регрессия
Задача 1. По шаблону напишите класс, реализующий линейную регрессию. Интерфейс этого класса в некоторой степени соответствует классу <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression">LinearRegression</a> из библиотеки sklearn.
End of explanation
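A minimal sanity check of the class above on synthetic data (a hypothetical example, not part of the original assignment; it assumes numpy as np and scipy.stats as sps are imported as in the first cell):
n_demo = 50
X_demo = np.column_stack([np.ones(n_demo), np.linspace(0, 10, n_demo)])
Y_demo = X_demo @ np.array([1.0, 2.0]) + sps.norm.rvs(scale=0.5, size=n_demo)
demo_model = LinearRegression().fit(X_demo, Y_demo)
demo_model.summary()   # the estimates should be close to the true coefficients (1.0, 2.0)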
import csv
# Я загрузил файл локально
data = np.array(list(csv.reader(open('ice_cream.txt', 'r'), delimiter='\t')))
source = data[1:, :].astype(float)
print(source[:5])
# Переводим в Цельсий
source[:, 4] = (source[:, 4] - 32) / 1.8
source[:, 5] = (source[:, 5] - 32) / 1.8
# Размер выборки и число параметров
n, k = source.shape[0], 2
print(n)
# Отклик и регрессор
Y = source[:, 1]
X = np.zeros((n, k))
X[:, 0] = np.ones(n)
X[:, 1] = source[:, 4]
print(X[:5])
# Обучаем модель
model = LinearRegression()
model.fit(X, Y)
# Выводим общую информацию
model.summary()
grid = np.linspace(-5, 25, 1000)
plt.figure(figsize=(20, 8))
plt.plot(grid, model.theta[0] + grid * model.theta[1], color='brown', label='Предсказание', linewidth=2.5)
plt.scatter(X[:, 1], Y, s=40.0, label='Выборка', color='red', alpha=0.5)
plt.legend()
plt.ylabel('Литров на человека')
plt.xlabel('Температура')
plt.title('Потребление мороженого')
plt.grid()
plt.show()
Explanation: Загрузите данные о потреблении мороженного в зависимости от температуры воздуха и цены (файл ice_cream.txt).
Примените реализованный выше класс линейной регрессии к этим данным, предполагая, что модель имеет вид $ic = \theta_1 + \theta_2\ t$, где $t$ --- температура воздуха (столбец temp), $ic$ --- потребление мороженного в литрах на человека (столбец IC).
Значения температуры предварительно переведите из Фаренгейта в Цельсий [(Фаренгейт — 32) / 1,8 = Цельсий].
К обученной модели примените фунцию summary и постройте график регрессии, то есть график прямой $ic = \widehat{\theta}_1 + \widehat{\theta}_2\ t$, где $\widehat{\theta}_1, \widehat{\theta}_2$ --- МНК-оценки коэффициентов.
На график нанесите точки выборки.
Убедитесь, что построейнный график совпадает с графиком из презентации с первой лекции, правда, с точностью до значений температура (она была неправильно переведена из Фаренгейта в Цельсий).
End of explanation
# Размер выборки и число параметров
n, k = source.shape[0], 4
# Отклик и регрессор
Y = source[:, 1]
X = np.zeros((n, k))
X[:, 0] = np.ones(n)
X[:, 1] = source[:, 4]
X[:, 2] = (source[:, 6] == 1).astype(int)
X[:, 3] = (source[:, 6] == 2).astype(int)
print(X[:5])
# Обучаем модель
model = LinearRegression()
model.fit(X, Y)
# Выводим общую информацию
model.summary()
grid = np.linspace(-5, 25, 1000)
y_0 = model.theta[0] + grid * model.theta[1]
y_1 = model.theta[0] + grid * model.theta[1] + model.theta[2]
y_2 = model.theta[0] + grid * model.theta[1] + model.theta[3]
plt.figure(figsize=(20, 8))
plt.plot(grid, y_0, color='gold', label='Предсказание (год 0)', linewidth=2.5)
plt.plot(grid, y_1, color='turquoise', label='Предсказание (год 1)', linewidth=2.5)
plt.plot(grid, y_2, color='springgreen', label='Предсказание (год 2)', linewidth=2.5)
plt.scatter(X[source[:, 6] == 0, 1], Y[source[:, 6] == 0], s=40.0, label='Выборка (год 0)', color='yellow', alpha=0.5)
plt.scatter(X[source[:, 6] == 1, 1], Y[source[:, 6] == 1], s=40.0, label='Выборка (год 1)', color='blue', alpha=0.5)
plt.scatter(X[source[:, 6] == 2, 1], Y[source[:, 6] == 2], s=40.0, label='Выборка (год 2)', color='green', alpha=0.5)
plt.legend()
plt.ylabel('Литров на человека')
plt.xlabel('Температура')
plt.title('Потребление мороженого')
plt.grid()
plt.show()
Explanation: Вывод. Действительно, график тот же (с поправкой на пересчет температуры). Линейная регрессия неплохо, но не идеально приближает зависимость потребления мороженого в зависимости от температуры. Для более точного вывода стоит посчитать доверительный интервал.
Теперь учтите влияние года (столбец Year) для двух случаев:
* модель $ic = \theta_1 + \theta_2\ t + \theta_3 y_1 + \theta_4 y_2$, где $y_1 = I{1\ год}, y_2 = I{2\ год}$. Поясните, почему нельзя рассмативать одну переменную $y$ --- номер года.
* для каждого года рассматривается своя линейная зависимость $ic = \theta_1 + \theta_2\ t$.
В каждом случае нарисуйте графики. Отличаются ли полученные результаты? От чего это зависит? Как зависит потребление мороженного от года?
Первый случай.
End of explanation
# Число признаков
k = 2
# Размеры подвыборок
n_0 = (source[:, 6] == 0).sum()
n_1 = (source[:, 6] == 1).sum()
n_2 = (source[:, 6] == 2).sum()
print(n_0, n_1, n_2)
# Три подвыборки
source_0, source_1, source_2 = np.vsplit(source, [n_0, n_0 + n_1])
print(source_0.astype(int)[:3]) # Кастим к инт, чтобы при выводе не размазывались по всей строке
print(source_1.astype(int)[:3])
print(source_2.astype(int)[:3])
# Отклики и регрессоры
Y_0 = source_0[:, 1]
X_0 = np.zeros((n_0, k))
X_0[:, 0] = np.ones(n_0)
X_0[:, 1] = source_0[:, 4]
print(X_0[:3])
Y_1 = source_1[:, 1]
X_1 = np.zeros((n_1, k))
X_1[:, 0] = np.ones(n_1)
X_1[:, 1] = source_1[:, 4]
print(X_1[:3])
Y_2 = source_2[:, 1]
X_2 = np.zeros((n_2, k))
X_2[:, 0] = np.ones(n_2)
X_2[:, 1] = source_2[:, 4]
print(X_2[:3])
# Обучаем модели
model_0 = LinearRegression()
model_0.fit(X_0, Y_0)
model_1 = LinearRegression()
model_1.fit(X_1, Y_1)
model_2 = LinearRegression()
model_2.fit(X_2, Y_2)
# Выводим общую информацию
model_0.summary()
model_1.summary()
model_2.summary()
grid = np.linspace(-5, 25, 1000)
plt.figure(figsize=(20, 8))
plt.plot(grid, model_0.theta[0] + grid * model_0.theta[1], color='gold', label='Предсказание (год 0)', linewidth=2.5)
plt.plot(grid, model_1.theta[0] + grid * model_1.theta[1], color='turquoise', label='Предсказание (год 1)', linewidth=2.5)
plt.plot(grid, model_2.theta[0] + grid * model_2.theta[1], color='springgreen', label='Предсказание (год 2)', linewidth=2.5)
plt.scatter(X[source[:, 6] == 0, 1], Y[source[:, 6] == 0], s=40.0, label='Выборка (год 0)', color='yellow', alpha=0.5)
plt.scatter(X[source[:, 6] == 1, 1], Y[source[:, 6] == 1], s=40.0, label='Выборка (год 1)', color='blue', alpha=0.5)
plt.scatter(X[source[:, 6] == 2, 1], Y[source[:, 6] == 2], s=40.0, label='Выборка (год 2)', color='green', alpha=0.5)
plt.legend()
plt.ylabel('Литров на человека')
plt.xlabel('Температура')
plt.title('Потребление мороженого')
plt.grid()
plt.show()
Explanation: Вывод. В разрезе разных лет также можно проследить, что линейная зависимость неплохо приближает реальную. Номер года нельзя брать как признак, так как это скорее характеристика класса значений, в то время как линейная регрессия работает со значениями, от которых зависимость линейная, численная.
Второй случай.
End of explanation
# Размер выборки и число параметров
n, k = source.shape[0], 8
print(n, k)
# Cтроим регрессор
X = np.zeros((n, k))
X[:, 0] = np.ones(n)
X[:, 1] = source[:, 4] # Температура
X[:, 2] = source[:, 0] # Дата
X[:, 3:5] = source[:, 2:4] # Пропускаем IC
X[:, 5] = source[:, 5]
X[:, 6] = (source[:, 6] == 1).astype(int) # Индикатор год 1
X[:, 7] = (source[:, 6] == 2).astype(int) # Индикатор год 2
print(X.astype(int)[:5])
# Отклик
Y = source[:, 1]
# Обучаем модель
model = LinearRegression()
model.fit(X, Y)
# Выводим общую информацию
model.summary()
Explanation: Вывод. При разделении выборки на три части результаты в какой-то степени стали лучше. Кроме того, такое разделение позволяет заметить, что с при увеличении года в целом потребляют больше мороженого, и что прирост литров мороженого на один градус возрастает при увеличении года.
Наконец, обучите модель на предсказание потребления мороженного в зависимости от всех переменных.
Не забудьте, что для года нужно ввести две переменных.
Для полученной модели выведите summary.
End of explanation
# Размер выборки и число параметров
n, k = source.shape[0], 4
print(n, k)
# Отклик и регрессор
Y = source[:, 1]
X = np.zeros((n, k))
X[:, 0] = np.ones(n)
X[:, 1] = source[:, 4]
X[:, 2] = X[:, 1] ** 2
X[:, 3] = X[:, 1] ** 3
print(X[:5])
# Обучаем модель
model = LinearRegression()
model.fit(X, Y)
# Выводим общую информацию
model.summary()
grid = np.linspace(-5, 25, 1000)
y = model.theta[0] + model.theta[1] * grid + model.theta[2] * grid ** 2 + model.theta[3] * grid ** 3
plt.figure(figsize=(20, 8))
plt.plot(grid, y, color='brown', label='Предсказание (год 0)', linewidth=2.5)
plt.scatter(X[:, 1], Y, s=40.0, label='Выборка', color='red', alpha=0.5)
plt.legend()
plt.ylabel('Литров на человека (первый случай)')
plt.xlabel('Температура')
plt.title('Потребление мороженого')
plt.grid()
plt.show()
Explanation: Вывод. Похоже, не все признаки стоит учитывать. Некоторые значения по модулю очень маленькие по сравнению со значениями, характерными для соответствующих признаков, что их лучше не учитывать. Например, это номер измерения (theta_2) или температура в будущем месяце.(theta_5).
Но это еще не все.
Постройте теперь линейную регрессию для модели $ic = \theta_1 + \theta_2\ t + \theta_3\ t^2 + \theta_4\ t^3$.
Выведите для нее summary и постройте график предсказания, то есть график кривой $ic = \widehat{\theta}_1 + \widehat{\theta}_2\ t + \widehat{\theta}_3\ t^2 + \widehat{\theta}_4\ t^3$. Хорошие ли получаются результаты?
End of explanation
D = inv(X.T @ X)
print(D)
from scipy.linalg import eigvals
vals = eigvals(D)
print(vals) # Комплексных нет, мы не налажали
vals = vals.real.astype(float)
print(vals)
CI = np.sqrt(vals.max() / vals.min())
print(CI)
Explanation: Вывод. Результаты выглядят более естественно, однако теперь кажется, что при 30-40 градусах люди потребляют невероятно много мороженого, что, скорее всего, неправда (все-таки есть какой-то лимит). Кроме того, значения последних двух параметров очень малы, что говорит о малом влиянии этих параметров при малых температурах. Возможно, линейная модель и так достаточно хороша. Однако в нашем случае будет видно, что при отрицательных температурах потребление мороженого падает стремительно. Наверное, в этом есть некоторая правда.
Чтобы понять, почему так происходит, выведите значения матрицы $(X^T X)^{-1}$ для данной матрицы и посчитайте для нее индекс обусловленности $\sqrt{\left.\lambda_{max}\right/\lambda_{min}}$, где $\lambda_{max}, \lambda_{min}$ --- максимальный и минимальный собственные значения матрицы $X^T X$. Собственные значения можно посчитать функцией <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html">scipy.linalg.eigvals</a>.
Прокомментируйте полученные результаты. Помочь в этом может следующая <a href="https://ru.wikipedia.org/wiki/%D0%A7%D0%B8%D1%81%D0%BB%D0%BE_%D0%BE%D0%B1%D1%83%D1%81%D0%BB%D0%BE%D0%B2%D0%BB%D0%B5%D0%BD%D0%BD%D0%BE%D1%81%D1%82%D0%B8">статья</a>.
End of explanation
from sklearn import linear_model
from sklearn import cross_validation
from sklearn.metrics import mean_squared_error
def best_features(X_train, X_test, Y_train, Y_test):
mses = [] # сюда записывайте значения MSE
k = X_train.shape[1]
for j in range(1, 2 ** k): # номер набора признаков
mask = np.array([j & (1 << s) for s in range(k)], dtype=bool)
features_numbers = np.arange(k)[mask] # набор признаков
model = linear_model.LinearRegression()
model.fit(X_train[:, features_numbers], Y_train)
mse = mean_squared_error(Y_test, model.predict(X_test[:, features_numbers])) # MSE для данного набора признаков
mses.append(mse)
# Печать 10 лучших наборов
print('mse\t features')
mses = np.array(mses)
best_numbres = np.argsort(mses)[:10]
for j in best_numbres:
mask = np.array([j & (1 << s) for s in range(k)], dtype=bool)
features_numbers = np.arange(k)[mask]
print('%.3f\t' % mses[j], features_numbers)
Explanation: Вывод. Как говорилось на семинаре, высокий индекс обусловленности (больше 30) говорит о мультиколлинеарности, которая ведет к переобучению. Видимо, мы перестарались.
Задача 2. В данной задаче нужно реализовать функцию отбора признаков для линейной регрессии. Иначе говоря, пусть есть модель $y = \theta_1 x_1 + ... + \theta_k x_k$. Нужно определить, какие $\theta_j$ нужно положить равными нулю, чтобы качество полученной модели было максимальным.
Для этого имеющиеся данные нужно случайно разделить на две части --- обучение и тест (train и test). На первой части нужно обучить модель регресии, взяв некоторые из признаков, то есть рассмотреть модель $y = \theta_{j_1} x_{j_1} + ... + \theta_{j_s} x_{j_s}$. По второй части нужно посчитать ее качество --- среднеквадратичное отклонение (mean squared error) предсказания от истинного значения отклика, то есть величину
$$MSE = \sum\limits_{i \in test} \left(\widehat{y}(x_i) - Y_i\right)^2,$$
где $x_i = (x_{i,1}, ..., x_{i,k})$, $Y_i$ --- отклик на объекте $x_i$, а $\widehat{y}(x)$ --- оценка отклика на объекте $x$.
Если $k$ невелико, то подобным образом можно перебрать все поднаборы признаков и выбрать наилучший по значению MSE.
Для выполнения задания воспользуйтесь следующими функциями:
* <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression">sklearn.linear_model.LinearRegression</a>
--- реализация линейной регрессии. В данной реализации свободный параметр $\theta_1$ по умолчанию автоматически включается в модель. Отключить это можно с помощью fit_intercept=False, но это не нужно. В данной задаче требуется, чтобы вы воспользовались готовой реализацией линейной регрессии, а не своей. Ведь на практике важно уметь применять готовые реализации, а не писать их самостоятельно.
<a href="http://scikit-learn.org/0.16/modules/generated/sklearn.cross_validation.train_test_split.html">sklearn.cross_validation.train_test_split</a>
--- функция разбиения данных на train и test. Установите параметр test_size=0.3.
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html">sklearn.metrics.mean_squared_error</a>
--- реализация MSE.
Для перебора реализуйте функцию.
End of explanation
# Я загрузил файл локально и заменил пробелы на табы (там были лишние пробелы в некоторых местах)
data = np.array(list(csv.reader(open('yh.data', 'r'), delimiter='\t')))
yacht = data.astype(float)
print(yacht)
Y = yacht[:, 6]
X = yacht[:, :6]
X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.3)
best_features(X_train, X_test, Y_train, Y_test)
Explanation: Примените реализованный отбор признаков к датасетам
* <a href="http://archive.ics.uci.edu/ml/datasets/Yacht+Hydrodynamics">Yacht Hydrodynamics</a> --- для парусных яхт нужно оценить остаточное сопротивление на единицу массы смещения (последний столбец) в зависимости от различных характеристик яхты.
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston">Boston Housing Prices</a> --- цены на дома в Бостоне в зависимости от ряда особенностей.
Yacht Hydrodynamics.
End of explanation
# Я загрузил файл локально и убрал первую строчку с кучей запятых
data = np.array(list(csv.reader(open('bhp.csv', 'r'))))
houses = data[1:, :].astype(float)
print(houses)
# Число столбцов
cols = houses.shape[1]
Y = houses[:, 9] # Столбец TAX
X = houses[:, np.delete(np.arange(cols), 9)]
X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.3)
best_features(X_train, X_test, Y_train, Y_test)
Explanation: Вывод. Признак 5 (Froude number) наиболее полезный признак, остальные встречаются как-то рандомно (кроме 4, который почти не встречается, это Length-beam ratio).
Boston Housing Prices.
End of explanation
def find_conf_reg(sample):
size = sample.size
alpha_r = np.sqrt(0.95)
u_1 = sps.t.ppf((1 - alpha_r) / 2, df=size - 1)
u_2 = sps.t.ppf((1 + alpha_r) / 2, df=size - 1)
v_1 = sps.chi2.ppf((1 - alpha_r) / 2, df=size - 1)
v_2 = sps.chi2.ppf((1 + alpha_r) / 2, df=size - 1)
mean = sample.mean()
s2 = sample.var(ddof=1)  # unbiased sample variance S^2 from the formulas above
a_low = mean - u_2 * (s2 / size) ** 0.5
a_high = mean - u_1 * (s2 / size) ** 0.5
s_low = (size - 1) * s2 / v_2
s_high = (size - 1) * s2 / v_1
return ((a_low, a_high), (s_low, s_high))
plt.figure(figsize=(20, 8))
for size in [5, 20, 50]:
a_conf, s_conf = find_conf_reg(sps.norm.rvs(size=size))
plt.hlines(s_conf[0], a_conf[0], a_conf[1], linewidth=2.5, color='tomato')
plt.hlines(s_conf[1], a_conf[0], a_conf[1], linewidth=2.5, color='tomato')
plt.vlines(a_conf[0], s_conf[0], s_conf[1], linewidth=2.5, color='tomato')
plt.vlines(a_conf[1], s_conf[0], s_conf[1], linewidth=2.5, color='tomato')
plt.ylabel('Выборочная дисперсия')
plt.xlabel('Выборочное среднее')
plt.title('Доверительная область')
plt.grid()
plt.show()
Explanation: Вывод. Первые 3 и последние 3 признака самые полезные.
Задача 3<font size="5" color="red">*</font>. Загрузите <a href="http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt">датасет</a>, в котором показана зависимость веса мозга от веса туловища для некоторых видов млекопитающих. Задача состоит в том, чтобы подобрать по этим данным хорошую модель регрессии. Для этого, можно попробовать взять некоторые функции от значения веса туловища, например, степенную, показательную, логарифмическую. Можно также сделать преобразование значений веса мозга, например, прологарифмировать. Кроме того, можно разбить значения веса туловища на несколько частей и на каждой части строить свою модель линейной регрессии.
Задача 4. Пусть $X_1, ..., X_n$ --- выборка из распределения $\mathcal{N}(a, \sigma^2)$. Постройте точную доверительную область для параметра $\theta = (a, \sigma^2)$ уровня доверия $\alpha=0.95$ для сгенерированной выборки размера $n \in {5, 20, 50}$ из стандартного нормального распределения. Какой вывод можно сделать?
Вспомним, что $T=\frac{\overline{X}-a}{\frac{S}{\sqrt{n}}} \sim T_{n-1}$. Положим $u_{\frac{1\pm\sqrt{\alpha}}{2}}$ - квантили $T_{n-1}$. Тогда $\sqrt{\alpha}=P\left(u_{\frac{1-\sqrt{\alpha}}{2}} < T < u_{\frac{1+\sqrt{\alpha}}{2}}\right)=P\left(\overline{X} - \frac{S}{\sqrt{n}} u_{\frac{1+\sqrt{\alpha}}{2}} < a < \overline{X} - \frac{S}{\sqrt{n}} u_{\frac{1-\sqrt{\alpha}}{2}}\right)$, то есть доверительный интервал уровня доверия $\sqrt{\alpha}$ для $a$ есть $$\left(\overline{X} - \frac{S}{\sqrt{n}} u_{\frac{1+\sqrt{\alpha}}{2}}, \overline{X} - \frac{S}{\sqrt{n}} u_{\frac{1-\sqrt{\alpha}}{2}}\right).$$
Вспомним, что $R=\frac{n-1}{\sigma^2}S^2 \sim \chi^2_{n-1}$. Пусть $v_{\frac{1\pm\sqrt{\alpha}}{2}}$ - квантили $\chi^2_{n-1}$. Тогда $\sqrt{\alpha}=P\left(v_{\frac{1-\sqrt{\alpha}}{2}} < R < v_{\frac{1+\sqrt{\alpha}}{2}}\right)=P\left(\frac{(n-1)S^2}{v_{\frac{1+\sqrt{\alpha}}{2}}} < \sigma^2 < \frac{(n-1)S^2}{v_{\frac{1-\sqrt{\alpha}}{2}}}\right)$, то есть доверительный интервал уровня доверия $\sqrt{\alpha}$ для $\sigma^2$ есть $$\left(\frac{(n-1)S^2}{v_{\frac{1+\sqrt{\alpha}}{2}}}, \frac{(n-1)S^2}{v_{\frac{1-\sqrt{\alpha}}{2}}}\right).$$
Из домашнего задания $\overline{X}$ и $S^2$ независимы, значит, $T$ и $R$ тоже независимы. Таким образом, $\alpha=P(u_{\ldots} < T < u_{\ldots}, v_{\ldots} < R < v_{\ldots})$, откуда декартово произведение полученных доверительных интервалов уровней доверия $\sqrt{\alpha}$ есть доверительная область уровня доверия $\alpha$ для $(a, \sigma^2)$ .
End of explanation
alpha = 0.05 # Уровень значимости
# Считаем параметры для выборок четырех разных размеров
stats = np.zeros((4, 4))
for i, size in enumerate([5, 15, 30, 50]):
t = sps.bernoulli(p=0.5).rvs(size=size).sum() # Сумма бернуллиевских случайных величин
pvalue = sps.binom(n=size, p=0.5).sf(t) # Статистика T имеет биномиальное распределение
c_alpha = sps.binom(n=size, p=0.5).ppf(1 - alpha)
stats[i, :] = np.array([size, t, c_alpha, pvalue])
pd.DataFrame(data=stats, columns=['$n$', '$t$', '$c_{\\alpha}$', 'p-value'])
Explanation: Вывод. При увеличении размера выборки площадь доверительной области (она, очевидно, измерима даже по Жордану) сильно уменьшается. Это говорит о том, что чем больше значений получено, тем точнее с заданным уровнем доверия можно оценить параметр.
Задача 5<font size="5" color="red">*</font>.
Пусть дана линейная гауссовская модель $Y = X\theta + \varepsilon$, где $\varepsilon \sim \mathcal{N}(0, \beta^{-1}I_n)$.
Пусть $\theta$ имеет априорное распределение $\mathcal{N}(0, \alpha^{-1}I_k)$.
Такая постановка задачи соответствует Ridge-регрессии.
Оценкой параметров будет математическое ожидание по апостериорному распределению, аналогично можно получить доверительный интервал.
Кроме того, с помощью апостериорного распределения можно получить доверительный интервал для отклика на новом объекте, а не только точечную оценку.
Реализуйте класс RidgeRegression подобно классу LinearRegression, но добавьте в него так же возможность получения доверительного интервала для отклика на новом объекте.
Примените модель к некоторых датасетам, которые рассматривались в предыдущих задачах.
Нарисуйте графики оценки отклика на новом объекте и доверительные интервалы для него.
2. Проверка статистических гипотез
Задача 6.
Существует примета, что если перед вам дорогу перебегает черный кот, то скоро случится неудача.
Вы же уже достаточно хорошо знаете статистику и хотите проверить данную примету.
Сформулируем задачу на математическом языке.
Пусть $X_1, ..., X_n \sim Bern(p)$ --- проведенные наблюдения, где $X_i = 1$, если в $i$-м испытании случилась неудача после того, как черный кот перебежал дорогу, а $p$ --- неизвестная вероятность такого события.
Нужно проверить гипотезу $H_0: p=1/2$ (отсутствие связи между черным котом и неудачей) против альтернативы $H_1: p>1/2$ (неудача происходит чаще если черный кот перебегает дорогу).
Известно, что $S = \left\{T(X) > c_\alpha\right\}$, где $T(X) = \sum X_i$, является равномерно наиболее мощным критерием для данной задачи.
Чему при этом равно $c_\alpha$?
При этом p-value в данной задаче определяется как $p(t) = \mathsf{P}_{0.5}(T(X) > t)$, где $t = \sum x_i$ --- реализация статистики $T(X)$.
Для начала проверьте, что критерий работает.
Возьмите несколько значений $n$ и реализаций статистики $T(X)$.
В каждом случае найдите значение $c_\alpha$ и p-value.
Оформите это в виде таблицы.
Пользуйтесь функциями из scipy.stats, про которые подробно написано в файле python_5. Внимательно проверьте правильность строгих и нестрогих знаков.
End of explanation
# В двух случаях строим для 10 выборок табличку
for size, p in [(5, 0.75), (100000, 0.51)]:
stats = np.zeros((10, 5))
for i in np.arange(10):
t = sps.bernoulli(p=p).rvs(size=size).sum()
pvalue = sps.binom(n=size, p=0.5).sf(t)
c_alpha = sps.binom(n=size, p=0.5).ppf(1 - alpha)
rejected = int(t > c_alpha)
stats[i, :] = np.array([size, p, t, pvalue, rejected])
print(pd.DataFrame(data=stats, columns=['size', 'prob', 'stat', 'p-value', 'rej']))
Explanation: Вывод. Если $t/n$ сильно больше 0.5 (скажем, 0.67), то p-value выходит меньше 0.05, и гипотеза отвергается. Критерий работает.
Для каких истинных значений $p$ с точки зрения практики можно считать, что связь между черным котом и неудачей есть?
Теперь сгенерируйте 10 выборок для двух случаев: 1). $n=5, p=0.75$; 2). $n=10^5, p=0.51$.
В каждом случае в виде таблицы выведите реализацию статистики $T(X)$, соответствующее p-value и 0/1 - отвергается ли $H_0$ (выводите 1, если отвергается).
Какие выводы можно сделать?
End of explanation
p_conf = 0.75 # Если больше, то уверены в том, что действительно неудачи связаны с черной кошкой.
beta = 0.8 # Магическая константа с семинара и из условия
grid = np.linspace(0.501, 0.999, 500)
plt.figure(figsize=(20, 8))
for size in [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]:
power = sps.binom(n=size, p=grid).sf(sps.binom(n=size, p=0.5).ppf(1 - alpha))
plt.plot(grid, power, label='n={}'.format(size))
plt.legend()
plt.vlines(p_conf, 0, 1)
plt.hlines(beta, 0.5, 1)
plt.title('Мощность критериев')
plt.ylabel('$\\beta(\\theta, X)$')
plt.xlabel('$\\theta$')
plt.grid()
plt.show()
Explanation: Вывод. Выходит, почти всегда на малых выборках даже при большом $p$ мы гипотезу не отвергнем, а при больших даже при $p$, близких к 0.5, гипотеза будет отвергнута. Похоже на ошибки II и I рода соответственно.
Возникает задача подбора оптимального размера выборки.
Для этого сначала зафиксируйте значение $p^* > 1/2$, которое будет обладать следующим свойством.
Если истинное $p > p^*$, то такое отклонение от $1/2$ с практической точки зрения признается существенным, то есть действительно чаще случается неудача после того, как черный кот перебегает дорогу.
В противном случае отклонение с практической точки зрения признается несущественным.
Теперь для некоторых $n$ постройте графики функции мощности критерия при $1/2 < p < 1$ и уровне значимости 0.05.
Выберите такое $n^*$, для которого функция мощности дает значение 0.8 при $p^*$.
Для выбранного $n^*$ проведите эксперимент, аналогичный проведенным ранее экспериментам, сгенерировав выборки для следующих истинных значений $p$: 1). $1/2 < p < p^*$; 2). $p > p^*$.
Сделайте вывод.
Выберем $p^*=0.75$.
End of explanation
size_conf = 27 # Оптимальный размер выборки
# В двух случаях строим для 10 выборок табличку
for p in [0.6, 0.85]:
stats = np.zeros((10, 6))
for i in np.arange(10):
t = sps.bernoulli(p=p).rvs(size=size_conf).sum()
pvalue = sps.binom(n=size_conf, p=0.5).sf(t)
c_alpha = sps.binom(n=size_conf, p=0.5).ppf(1 - alpha)
rejected = int(t > c_alpha)
stats[i, :] = np.array([size_conf, p, t, c_alpha, pvalue, rejected])
print(pd.DataFrame(data=stats, columns=['size', 'prob', 'stat', 'c_alpha', 'p-value', 'rej']))
Explanation: Вывод. Оптимальный в некотором смысле размер выборки получается примерно 25–30, на пересечении $\theta=p^*=0.75$ и $\beta(\theta, X)=\beta=0.8$. Выберем 27.
End of explanation
alpha = 0.05 # Уровень значимости
sample = [3.4, 0.5, 0.2, 1, 1.7, 1, 1, 3.9, 4.1, 3.6, 0.5, 0.7,
1.2, 0.5, 0.5, 1.9, 0.5, 0.3, 1.5, 1.9, 1.9, 2.4, 1.2,
2.9,3.2, 1.2, 1.7, 2.9, 1.5, 2.4, 3.4, 0.7, 1.2, 1.7,
1.5, 3.2, 3.9, 1.7, 2.7, 1, 1.5, 1.5, 2.9, 0.7, 2.2, 2.2,
1.9, 1.7, 1.7, 1.9, 1.9, 3.9, 1.2, 1.5, 2.4, 3.3, 2.9,
2.2, 4.6, 3.9, 2.2, 1.2, 3.6, 3.2, 2.2, 2.9, 3.4, 2.4,
2.9, 3.2, 1.7, 1.7, 2.2, 2.7, 3.2, 3.2, 2.9, 1.9, 1.7,
2.2, 1.7, 1.2, 1.2, 1.9, 0.7, 2.2, 1.5, 1.5, 2.7, 4.9,
3.2, 0.7, 2.2, 3.6, 3.6, 1.7, 3.2, 3.4, 1, 0.5, 3.4, 5.3,
4.4, 6.8, 4.6, 3.4, 2.2, 2.2, 2.7, 2.2, 1.2, 1.7, 1.9,
1.2, 1.2, 3.6, 2.4, 1, 2.9, 3.6, 1.7, 2.8] # Выборка из второго практикума
k = 2.00307 # Параметры из второго практикума
l = 2.53379
sps.kstest(sample, sps.weibull_min(c=k, scale=l).cdf)
Explanation: Вывод. При выбранном значении $p^*$ и подобранном оптимальном размере выборки, если $p < p^*$, то гипотеза не отвергается почти всегда, а при $p > p^*$ гипотеза отвергается почти всегда.
Справка для выполнения следующих задач
Критерий согласия хи-квадрат
<a href=https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html#scipy.stats.chisquare>scipy.stats.chisquare</a>(f_obs, f_exp=None, ddof=0)
f_obs --- число элементов выборки, попавших в каждый из интервалов
f_exp --- ожидаемое число элементов выборки (по умолчанию равномерное)
ddof --- поправка на число степеней свободы. Статистика асимптотически будет иметь распределение хи-квадрат с числом степеней свободы $k - 1 - ddof$, где $k$ --- число интервалов.
Возвращает значение статистики критерия и соответствующее p-value.
Критерий согласия Колмогорова
<a href=https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html#scipy.stats.kstest>scipy.stats.kstest</a>(rvs, cdf, args=())
rvs --- выборка
cdf --- функция распределения (сама функция или ее название)
args --- параметры распределения
Возвращает значение статистики критерия и соответствующее p-value.
Задача 7.
Проверьте, что ваша выборка значений скорости ветра из задания 2 действительно согласуется с распределением Вейбулла.
Проверьте, что при больших $n$ распределение статистики из задач 3 и 4 задания 2 действительно хорошо приближают предельное распределение.
Проверьте, что остатки в регрессии из задач выше нормальны.
Подберите класс распределений для выборки количества друзей из задания 1.
Использовать можно два описанных выше критерия, либо любой другой критерий, если будет обоснована необходимость его применения в данной задаче, а так же будет приведено краткое описание критерия.
Уровень значимости взять равным 0.05.
End of explanation
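A minimal usage sketch of the two goodness-of-fit tests described above (a standalone, hypothetical example on synthetic data; it assumes numpy as np and scipy.stats as sps are imported as in the first cell):
demo_sample = sps.norm.rvs(size=300)
# Kolmogorov test against the standard normal CDF
print(sps.kstest(demo_sample, 'norm'))
# Chi-square test: observed bin counts vs. counts expected under N(0, 1)
demo_bins = np.linspace(-3, 3, 7)
observed, _ = np.histogram(demo_sample, bins=demo_bins)
probs = np.diff(sps.norm.cdf(demo_bins))
expected = probs / probs.sum() * observed.sum()
print(sps.chisquare(observed, expected))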
# Код из задачи 3а прака 2
sample = sps.norm.rvs(size=(200, 300))
estimator = sample.cumsum(axis=1) / (np.ones(200).reshape(200, 1) @ np.linspace(1, 300, 300).reshape(1, 300))
stat = estimator * np.linspace(1, 300, 300) ** 0.5
sample = stat[:, -1]
# Проверка критерием Колмогорова
sps.kstest(sample, 'norm')
Explanation: Вывод. Выборка согласуется с распределением Вейбулла (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $\mathcal{N}(\theta, 1)$. Известно, что $\overline{X}$ является асимптотически нормальной оценкой параметра $\theta$. Вам нужно убедиться в этом, сгенерировав множество выборок и посчитав по каждой из них оценку параметра в зависимости от размера выборки.
Сгенерируйте 200 выборок $X_1^j, ..., X_{300}^j$ из распределения $\mathcal{N}(0, 1)$. По каждой из них посчитайте оценки $\widehat{\theta}_{jn} = \frac{1}{n}\sum\limits_{i=1}^n X_i^j$ для $1 \leqslant n \leqslant 300$, то есть оценку параметра по первым $n$ наблюдениям $j$-й выборки. Для этой оценки посчитайте статистику $T_{jn} = \sqrt{n} \left( \widehat{\theta}_{jn} - \theta \right)$, где $\theta = 0$.
End of explanation
# Код из задачи 3б прака 2
sample = sps.poisson.rvs(mu=1, size=(200, 300))
estimator = sample.cumsum(axis=1) / (np.ones(200).reshape(200, 1) @ np.linspace(1, 300, 300).reshape(1, 300))
stat = (estimator - 1) * np.linspace(1, 300, 300) ** 0.5
sample = stat[:, -1]
# Проверка критерием Колмогорова
sps.kstest(sample, 'norm')
Explanation: Вывод. Выборка согласуется со стандартным нормальным распределением (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $Pois(\theta)$. Известно, что $\overline{X}$ является асимптотически нормальной оценкой параметра $\theta$.
End of explanation
# Код из задачи 4 прака 2
sample = sps.uniform.rvs(size=(200, 300))
estimator = np.maximum.accumulate(sample, axis=1)
stat = (1 - estimator) * np.linspace(1, 300, 300)
sample = stat[:, -1]
# Проверка критерием Колмогорова
sps.kstest(sample, 'expon')
Explanation: Вывод. Выборка согласуется со стандартным нормальным распределением (так как pvalue больше 0.05, гипотеза не отвергается).
Пусть $X_1, ..., X_n$ --- выборка из распределения $U[0, \theta]$. Из домашнего задания известно, что $n\left(\theta - X_{(n)}\right) \stackrel{d_\theta}{\longrightarrow} Exp\left(1/\theta\right)$.
End of explanation
# Я загрузил файл локально
data = np.array(list(csv.reader(open('ice_cream.txt', 'r'), delimiter='\t')))
source = data[1:, :].astype(float)
# Переводим в Цельсий
source[:, 4] = (source[:, 4] - 32) / 1.8
source[:, 5] = (source[:, 5] - 32) / 1.8
# Размер выборки и число параметров
n, k = source.shape[0], 2
# Отклик и регрессор
Y = source[:, 1]
X = np.zeros((n, k))
X[:, 0] = np.ones(n)
X[:, 1] = source[:, 4]
# Обучаем модель
model = LinearRegression()
model.fit(X, Y)
# Получаем остатки
errors = Y - model.predict(X)
# Проверяем критерием Колмогорова.
sps.kstest(errors, sps.norm(scale=model.sigma_sq ** 0.5).cdf)
Explanation: Вывод. Выборка согласуется со экспоненциальным распределением с параметром 1 (так как pvalue больше 0.05, гипотеза не отвергается).
Мороженое.
End of explanation
from statsmodels.distributions.empirical_distribution import ECDF
source = np.array(list(csv.reader(open('friends.csv', 'r'))))
sample = np.sort(source[:, 1].astype(int))
grid = np.linspace(0, sample.max() + 1, 1000)
Explanation: Вывод. Распределение согласуется с нормальным.
Выбирал всех своих друзей в ВК, к которым удалось получить доступ. Это подсчитывалось в C# через HTTP-запросы. Зато прошедшее время у меня появились новые друзья, я их учитывать не буду. У моих друзей тоже появились новые друзья, они тоже не посчитаны.
End of explanation
sps.kstest(sample, sps.rayleigh(scale=180).cdf)
plt.figure(figsize=(20, 5))
plt.plot(grid, ECDF(sample)(grid), color='red', label='ЭФР')
plt.plot(grid, sps.rayleigh(scale=180).cdf(grid), color='blue', label='ФР')
plt.legend()
plt.title('Число друзей моих друзей в ВК')
plt.grid()
plt.show()
plt.figure(figsize=(20, 5))
plt.plot(grid, sps.rayleigh(scale=180).pdf(grid), color='blue', label='Плотность')
plt.legend()
plt.title('Плотность распределения Рэлея')
plt.grid()
plt.show()
Explanation: В первой практике я говорил, что не могу сделать вывод о распределении на основании такого графика. Может быть, сейчас получится. Выдвинем гипотезу, что распределение числа друзей моих друзей есть распределение Рэлея с параметром масштаба 180. Почему нет, имеем право.
End of explanation |
10,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Shots are fired isotropically from a point and hit a position sensitive detector
There is no scattering
y is fixed to be 1 away
Step1: if both are unknown
Step2: Now repeat all this updating the prior with the posterior one count at a time
Step3: How does the posterior compare to just running N points? | Python Code:
# generate some data
with pm.Model() as model:
x = pm.Cauchy(name='x', alpha=0, beta=1)
trace = pm.sample(10000, njobs=4)
pm.traceplot(trace)
sampledat = trace['x']
trace.varnames, trace['x']
sns.distplot(sampledat, kde=False, norm_hist=True)
# plt.hist(sampledat, 200, normed=True);
plt.yscale('log');
np.random.randint(0, len(sampledat), 10)
# generate some data
bins = np.linspace(-4,4,100)
hists = {}
stats = {}
for npts in tqdm.tqdm_notebook(range(1,102,40)):
d1 = sampledat[np.random.randint(0, len(sampledat), npts)]
with pm.Model() as model:
alpha = pm.Uniform('loc', -10, 10)
# beta = pm.Uniform('dist', 1, 1)
x = pm.Cauchy(name='x', alpha=alpha, beta=1, observed=d1)
trace = pm.sample(5000, njobs=4)
hists[npts] = np.histogram(trace['loc'], bins)
stats[npts] = np.percentile(trace['loc'], (1, 5, 25, 50, 75, 95, 99))
keys = sorted(list(hists.keys()))
for k in keys:
p = plt.plot(tb.bin_edges_to_center(bins), hists[k][0]/np.max(hists[k][0]),
drawstyle='steps', label=str(k), lw=1)
c = p[0].get_color()
plt.axvline(stats[k][3], lw=3, color=c)
print(k, stats[k][2:5], stats[k][3]/(stats[k][4]-stats[k][2]), )
plt.legend()
plt.xlim((-2,2))
Explanation: Shots are fired isotropically from a point and hit a position sensitive detector
There is no scattering
y is fixed to be 1 away
End of explanation
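A short standalone check (illustrative only; it re-imports numpy and scipy.stats so it does not depend on hidden cells) of why the detector positions follow a Cauchy distribution: an isotropic emission angle theta ~ Uniform(-pi/2, pi/2) from a source at height y = 1 lands at x = tan(theta), which is standard Cauchy.
import numpy as np
import scipy.stats as sps
theta = np.random.uniform(-np.pi/2, np.pi/2, size=100000)
hits = np.tan(theta)                                     # detector positions for a source at (0, 1)
print(sps.kstest(hits, sps.cauchy(loc=0, scale=1).cdf))  # H0 (standard Cauchy) should not be rejected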
# generate some data
bins = np.linspace(-4,4,100)
hists2 = {}
stats2 = {}
hists2d = {}
binsd = np.linspace(0.1,5,100)
for npts in tqdm.tqdm_notebook((1,2,5,10,20,40,60,80,200)):
d1 = sampledat[np.random.randint(0, len(sampledat), npts)]
with pm.Model() as model:
alpha = pm.Uniform('loc', -10, 10)
beta = pm.Uniform('dist', 0.1, 5)
x = pm.Cauchy(name='x', alpha=alpha, beta=beta, observed=d1)
trace = pm.sample(5000, njobs=4)
hists2[npts] = np.histogram(trace['loc'], bins)
stats2[npts] = np.percentile(trace['loc'], (1, 5, 25, 50, 75, 95, 99))
hists2d[npts] = np.histogram2d(trace['loc'], trace['dist'], bins=(bins, binsd))
keys = sorted(list(hists2.keys()))
for k in keys:
p = plt.plot(tb.bin_edges_to_center(bins), hists2[k][0]/np.max(hists2[k][0]),
drawstyle='steps', label=str(k), lw=1)
c = p[0].get_color()
plt.axvline(stats2[k][3], lw=3, color=c)
print(k, stats2[k][2:5], stats2[k][3]/(stats2[k][4]-stats2[k][2]), )
plt.legend()
plt.xlim((-2,2))
# plt.contour(hists2d[1][0], 5)
from matplotlib.colors import LogNorm
keys = sorted(list(hists2.keys()))
for k in keys:
plt.figure()
plt.pcolormesh(tb.bin_edges_to_center(binsd),
tb.bin_edges_to_center(bins),
hists2d[k][0],
norm=LogNorm())
plt.title(str(k))
plt.colorbar()
plt.axvline(1, lw=0.5, c='k')
plt.axhline(0, lw=0.5, c='k')
Explanation: if both are unknown
End of explanation
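One more way to summarize the two-parameter fit (a sketch that assumes trace is the last trace sampled above, with free variables 'loc' and 'dist'):
lo_, med_, hi_ = np.percentile(trace['loc'], (2.5, 50, 97.5))
print('loc: median %.3f, 95%% interval (%.3f, %.3f)' % (med_, lo_, hi_))
lo_, med_, hi_ = np.percentile(trace['dist'], (2.5, 50, 97.5))
print('dist: median %.3f, 95%% interval (%.3f, %.3f)' % (med_, lo_, hi_))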
from scipy import stats
from pymc3 import Interpolated  # assumed: the Interpolated distribution from the installed pymc3, used for the KDE-based prior below

def from_posterior(param, samples):
smin, smax = np.min(samples), np.max(samples)
width = smax - smin
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
# what was never sampled should have a small probability but not 0,
# so we'll extend the domain and use linear approximation of density on it
x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
y = np.concatenate([[0], y, [0]])
return Interpolated(param, x, y)
# sampledat is the data
dat2 = sampledat.copy().tolist()
# traces = []
with pm.Model() as model:
data = dat2.pop()
data = [2.5, 2.5] # start kinda bad
alpha = pm.Uniform('loc', -10, 10)
# beta = pm.Uniform('dist', 1, 1)
x = pm.Cauchy(name='x', alpha=alpha, beta=1, observed=data)
trace = pm.sample(5000, njobs=4)
traces= [trace]
sns.distplot(traces[-1]['loc'][1000:], kde=False)
plt.axvline(0, c='k', linestyle='--')
plt.axvline(data[0], c='r')
plt.axvline(data[1], c='r')
plt.title('{} points'.format(len(traces)))
plt.xlim((-4,4))
alpha
traces[-1].varnames
for _ in tqdm.tqdm_notebook(range(20)):
with pm.Model() as model:
data = [dat2.pop(), dat2.pop()]
alpha = from_posterior('loc', trace['loc'])
x = pm.Cauchy(name='x', alpha=alpha, beta=1, observed=data)
trace = pm.sample(5000, njobs=4)
traces.append(trace)
for ii, t in enumerate(traces):
plt.figure()
sns.distplot(t['loc'][1000:], kde=False, norm_hist=True)
plt.axvline(0, c='k', linestyle='--')
plt.title('{} points'.format((ii+1)*2))
plt.xlim((-4,4))
plt.show()
for ii, t in enumerate(traces):
sns.distplot(t['loc'][1000:], kde=True, hist=False, color=plt.cm.OrRd(ii/len(traces)))
plt.axvline(0, c='k', linestyle='--')
plt.title('{} points'.format(ii))
plt.xlim((-4,4))
plt.ylim((0, 1.3))
plt.figure()
sns.distplot(traces[-1]['loc'][1000:], kde=True, hist=False, color=plt.cm.OrRd(ii/len(traces)))
for t in traces:
pm.plot_posterior(t)
for t in traces:
print(pm.gelman_rubin(t))
Explanation: Now repeat all this updating the prior with the posterior one count at a time
End of explanation
# traces = []
with pm.Model() as model:
alpha = pm.Uniform('loc', -10, 10)
# beta = pm.Uniform('dist', 1, 1)
x = pm.Cauchy(name='x', alpha=alpha, beta=1, observed=sampledat[:42])
trace = pm.sample(5000, njobs=4)
pm.plot_posterior(trace)
sns.distplot(trace['loc'][1000:], kde=True, hist=False, color=plt.cm.OrRd(ii/len(traces)))
plt.axvline(0, c='k', linestyle='--')
plt.xlim((-4,4))
plt.ylim((0, 1.3))
plt.figure()
sns.distplot(trace['loc'][1000::10], kde=True, hist=False, color=plt.cm.OrRd(ii/len(traces)))
plt.axvline(0, c='k', linestyle='--')
plt.xlim((-4,4))
plt.ylim((0, 1.3))
pm.forestplot(trace)
pm.autocorrplot(trace)
pm.traceplot(trace)
pm.gelman_rubin(trace)
pm.energyplot(trace)
energy = trace['loc']
energy_diff = np.diff(energy)
sns.distplot(energy - energy.mean(), label='energy')
sns.distplot(energy_diff, label='energy diff')
plt.legend()
Explanation: How does the posterior compare to just running N points?
End of explanation |
10,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedding a Feedforward Cascade in a Recurrent Network
Alex Williams 10/24/2015
If you are viewing a static version of this notebook (e.g. on nbviewer), you can launch an interactive session by clicking below
Step1: Methods
Consider a recurrent network initialized with random connectivity. We split the network into two groups — neurons in the first group participate in the feedforward cascade, and neurons in the second group do not. We use recursive least-squares to train the presynaptic weights for each of the neurons in the cascade. The presynaptic weights for the second group of neurons are left untrained.
The second group of neurons provides chaotic behavior that helps stabilize and time the feedforward cascade. This is necessary for the target feedforward cascade used in this example. The intrinsic dynamics of the system are too fast to match the slow timescale of the target pattern we use.
<img src="./rec-ff-net.png" width=600>
We previously applied FORCE learning to the output/readout weights of a recurrent network (see notebook here). In this case we will train a subset of the recurrent connections in the network (blue lines in the schematic above). This is described in the supplemental materials of Sussillo & Abbott (2009). We start with random initial synaptic weights for all recurrent connections and random weights for the input stimulus to the network.<sup><a href="#f1b" id="f1t">[1]</a></sup> The dynamics are given by
Step2: Test the behavior
We want the network to produce a feedforward cascade only in response to a stimulus input. Note that this doesn't always work — it is difficult for the network to perform this task. Nonetheless, the training works pretty well most of the time.<sup><a href="#f2b" id="f2t">[2]</a></sup>
Step3: Note that when we apply two inputs in quick succession (the last two inputs) the feedforward cascade restarts.
Connectivity matrix | Python Code:
from __future__ import division
from scipy.integrate import odeint,ode
from numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,pi,exp,tile,arange,reshape
from numpy.random import uniform,normal,choice
import pylab as plt
import numpy as np
%matplotlib inline
Explanation: Embedding a Feedforward Cascade in a Recurrent Network
Alex Williams 10/24/2015
If you are viewing a static version of this notebook (e.g. on nbviewer), you can launch an interactive session by clicking below:
There has been renewed interest in feedforward networks in both theoretical (Ganguli et al., 2008; Goldman, 2009; Murphy & Miller, 2009) and experimental (Long et al. 2010; Harvey et al. 2012) neuroscience lately. On a structural level, most neural circuits under study are highly recurrent. However, recurrent networks can still encode simple feedforward dynamics as we'll show in this notebook (also see Ganguli & Latham, 2009, for an intuitive overview).
<img src="./feedforward.png" width=450>
End of explanation
## Network parameters and initial conditions
N1 = 20 # neurons in chain
N2 = 20 # neurons not in chain
N = N1+N2
tI = 10
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-1,1,N)
tmax = 2*N1+2*tI
dt = 0.5
u = uniform(-1,1,N)
g = 1.5
## Target firing rate for neuron i and time t0
target = lambda t0,i: 2.0*exp(-(((t0%tmax)-(2*i+tI+3))**2)/(2.0*9)) - 1.0
def f1(t0,x):
## input to network at beginning of trial
if (t0%tmax) < tI: return -x + g*dot(J,tanh_x) + u
## no input after tI units of time
else: return -x + g*dot(J,tanh_x)
P = []
for i in range(N1):
# Running estimate of the inverse correlation matrix
P.append(eye(N))
lr = 1.0 # learning rate
# simulation data: state, output, time, weight updates
x,z,t,wu = [x0],[],[0],[zeros(N1).tolist()]
# Set up ode solver
solver = ode(f1)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < 25*tmax:
tanh_x = tanh(x[-1]) # cache firing rates
wu.append([])
# train rates at the beginning of the simulation
if t[-1]<22*tmax:
for i in range(N1):
error = target(t[-1],i) - tanh_x[i]
q = dot(P[i],tanh_x)
c = lr / (1 + dot(q,tanh_x))
P[i] = P[i] - c*outer(q,q)
J[i,:] += c*error*q
wu[-1].append(np.sum(np.abs(c*error*q)))
else:
# Store zero for the weight update
for i in range(N1): wu[-1].append(0)
solver.integrate(solver.t+dt)
x.append(solver.y)
t.append(solver.t)
x = np.array(x)
r = tanh(x) # firing rates
t = np.array(t)
wu = np.array(wu)
wu = reshape(wu,(len(t),N1))
pos = 2*arange(N)
offset = tile(pos[::-1],(len(t),1))
targ = np.array([target(t,i) for i in range(N1)]).T
plt.figure(figsize=(12,11))
plt.subplot(3,1,1)
plt.plot(t,targ + offset[:,:N1],'-r')
plt.plot(t,r[:,:N1] + offset[:,:N1],'-k')
plt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])
plt.title('Trained subset of network (target pattern in red)')
plt.subplot(3,1,2)
plt.plot(t,r[:,N1:] + offset[:,N1:],'-k')
plt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])
plt.title('Untrained subset of network')
plt.subplot(3,1,3)
plt.plot(t,wu + offset[:,:N1],'-k')
plt.yticks([]),plt.xlim([t[0],t[-1]]),plt.xlabel('time (a.u.)')
plt.title('Change in presynaptic weights for each trained neuron')
plt.show()
Explanation: Methods
Consider a recurrent network initialized with random connectivity. We split the network into two groups — neurons in the first group participate in the feedforward cascade, and neurons in the second group do not. We use recursive least-squares to train the presynaptic weights for each of the neurons in the cascade. The presynaptic weights for the second group of neurons are left untrained.
The second group of neurons provides chaotic behavior that helps stabilize and time the feedforward cascade. This is necessary for the target feedforward cascade used in this example, because the intrinsic dynamics of the system are otherwise too fast to match the slow timescale of the target pattern we use.
<img src="./rec-ff-net.png" width=600>
We previously applied FORCE learning to the output/readout weights of a recurrent network (see notebook here). In this case we will train a subset of the recurrent connections in the network (blue lines in the schematic above). This is described in the supplemental materials of Sussillo & Abbott (2009). We start with random initial synaptic weights for all recurrent connections and random weights for the input stimulus to the network.<sup><a href="#f1b" id="f1t">[1]</a></sup> The dynamics are given by:
$$\mathbf{\dot{x}} = -\mathbf{x} + J \tanh(\mathbf{x}) + \mathbf{u}(t)$$
where $\mathbf{x}$ is a vector holding the activation of all neurons, the firing rates are $\tanh(\mathbf{x})$, the matrix $J$ holds the synaptic weights of the recurrent connections, and $\mathbf{u}(t)$ is the input/stimulus, which is applied in periodic step pulses.
Each neuron participating in the feedforward cascade/sequence has a target function for its firing rate. We use a Gaussian for this example:
$$f_i(t) = 2 \exp \left [ \frac{-(t-\mu_i)^2}{18} \right ] - 1$$
where $\mu_i$ is the time of peak firing for neuron $i$. Here, $t$ is the time since the last stimulus pulse was delivered — to reiterate, we repeatedly apply the stimulus as a step pulse during training.
We apply recursive least-squares to train the pre-synaptic weights for each neuron participating in the cascade. Denote the $i$<sup>th</sup> row of $J$ as $\mathbf{j}_i$ (these are the presynaptic inputs to neuron $i$). For each neuron, we store a running estimate of the inverse correlation matrix, $P_i$, and use this to tune our update of the presynaptic weights:
$$\mathbf{q} = P_i \tanh(\mathbf{x})$$
$$c = \frac{1}{1+ \mathbf{q}^T \tanh(\mathbf{x})}$$
$$\mathbf{j}_i \rightarrow \mathbf{j}_i + c(f_i(t)- \tanh (x_i) ) \mathbf{q}$$
$$P_{i} \rightarrow P_{i} - c \mathbf{q} \mathbf{q}^T$$
We initialize each $P_i$ to the identity matrix at the beginning of training.
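As a compact restatement of these updates (a hypothetical helper for illustration only — the notebook performs the same steps inline in the training loop above), one RLS step for a single trained neuron could be written as:

    def rls_step(P_i, j_i, rates, target_value, rate_i, lr=1.0):
        # rates = tanh(x) for the whole network; rate_i = tanh(x_i) of the trained neuron
        q = dot(P_i, rates)
        c = lr / (1 + dot(q, rates))
        j_i = j_i + c * (target_value - rate_i) * q   # update presynaptic weights of neuron i
        P_i = P_i - c * outer(q, q)                   # update running inverse-correlation estimate
        return P_i, j_i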
Training the Network
End of explanation
tstim = [80,125,170,190]
def f2(t0,x):
## input to network at beginning of trial
for ts in tstim:
if t0 > ts and t0 < ts+tI: return -x + g*dot(J,tanh(x)) + u
## no input after tI units of time
return -x + g*dot(J,tanh(x))
# Set up ode solver
solver = ode(f2)
solver.set_initial_value(x[-1,:])
x_test,t_test = [x[-1,:]],[0]
while t_test[-1] < 250:
solver.integrate(solver.t + dt)
x_test.append(solver.y)
t_test.append(solver.t)
x_test = np.array(x_test)
r_test = tanh(x_test) # firing rates
t_test = np.array(t_test)
pos = 2*arange(N)
offset = tile(pos[::-1],(len(t_test),1))
plt.figure(figsize=(10,5))
plt.plot(t_test,r_test[:,:N1] + offset[:,:N1],'-k')
plt.plot(tstim,ones(len(tstim))*80,'or',ms=8)
plt.ylim([37,82]), plt.yticks([]), plt.xlabel('time (a.u.)')
plt.title('After Training. Stimulus applied at red points.\n',fontweight='bold')
plt.show()
Explanation: Test the behavior
We want the network to produce a feedforward cascade only in response to a stimulus input. Note that this doesn't always work — it is difficult for the network to perform this task. Nonetheless, the training works pretty well most of the time.<sup><a href="#f2b" id="f2t">[2]</a></sup>
End of explanation
plt.matshow(J)
plt.title("Connectivity Matrix, Post-Training")
Explanation: Note that when we apply two inputs in quick succession (the last two inputs) the feedforward cascade restarts.
Connectivity matrix
End of explanation |
10,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Neural Network using Numpy on Bike Sharing Time Series dataset
In this project, we'll build a neural network and use it to predict daily bike rental ridership.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data.
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below we'll build the network. We've built out the structure and the backwards pass. The forward pass through the network is to be implemented. We'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
Step8: Training the network
Here we'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. We'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. We can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out the predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: A Neural Network using Numpy on Bike Sharing Time Series dataset
In this project, we'll build a neural network and use it to predict daily bike rental ridership.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data.
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
_VERBOSE = False
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes ** -0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes ** -0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = (lambda x: 1 / (1 + np.exp(-x)))
# All shapes
if _VERBOSE:
print(
'Inputs: {0}, Hidden: {1}, Output: {2}'.format(self.input_nodes, self.hidden_nodes, self.output_nodes))
print('Weights - Input-to-Hidden: {0}, Hidden-to-Output: {1}'.format(self.weights_input_to_hidden.shape,
self.weights_hidden_to_output.shape))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
if _VERBOSE:
print('Input-list: {0}, Target-list: {1}'.format(inputs_list.shape, targets_list.shape))
print('Transposed - Input-list: {0}, Target-list: {1}'.format(inputs.shape, targets.shape))
print('Targets:', targets_list, targets)
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer (Input to Hidden)
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # (2, 56) x (56, 1) -> (2, 1)
hidden_outputs = self.activation_function(hidden_inputs) # (2, 1) -> (2, 1)
# Output layer (Hidden to Output)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # (1, 2) -> (2, 1) -> (1, 1)
final_outputs = final_inputs # signals from final output layer, eg. f(x)=x. (1, 1)
if _VERBOSE:
print('Final inputs:', final_inputs.shape, 'Final outputs:', final_outputs.shape)
#### Implement the backward pass here ####
### Backward pass ###
# Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# (1, 1) - (1, 1) -> (1, 1)
if _VERBOSE:
print('Shapes - Targets:', targets.shape, 'Final outputs:', final_outputs.shape, 'Output errors:',
output_errors.shape)
print('Values - Targets:', targets, 'Final outputs:', final_outputs, 'Output errors:', output_errors)
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
# (2, 1) x (1, 1) -> (2, 1)
hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients. (2, 1) -> (2, 1)
if _VERBOSE:
print('Shapes - Output errors:', output_errors.shape, 'Weights/Hidden to Output:',
self.weights_hidden_to_output.shape, 'Hidden errors:', hidden_errors.shape)
print('Shapes - Hidden outputs:', hidden_outputs.shape, 'Hidden grad:', hidden_grad.shape)
# Update the weights
self.weights_hidden_to_output += np.dot(output_errors,
hidden_outputs.T) * self.lr # update hidden-to-output weights with gradient descent step. (1, 1) x (1, 2) -> (1, 2)
if _VERBOSE:
print('Shapes - Output errors:', output_errors.shape, 'Hidden errors:', hidden_outputs.T.shape,
'Weights/Hidden to Output:', self.weights_hidden_to_output.shape)
print('Shapes - Hidden errors:', hidden_errors.shape, 'Hidden grad:', hidden_grad.shape, 'Input (trans):',
inputs.T.shape, 'Weights/Input to Hidden:', self.weights_input_to_hidden.shape)
self.weights_input_to_hidden += np.dot(hidden_errors * hidden_grad,
inputs.T) * self.lr # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
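    # Mean squared error between predictions y and targets Y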
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below we'll build the network. We've built out the structure and the backwards pass. The forward pass through the network is to be implemented. We'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
We will need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. This function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
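Since $f(x) = x$, its derivative is simply $f'(x) = 1$, so the error term at the output layer needs no extra scaling — this is why the hidden-to-output weight update above multiplies the output error directly by the hidden-layer activations.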
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 3000
learning_rate = 0.01
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here we'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. We'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. We can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
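For example, a quick way to inspect the result (a minimal sketch that assumes the losses dictionary populated by the training loop above) is:

    best_epoch = int(np.argmin(losses['validation']))
    print('Lowest validation loss {:.3f} at epoch {}'.format(losses['validation'][best_epoch], best_epoch))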
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
accuracy = np.sum((predictions[0] > 0.5)) / len(predictions[0])
print('Accuracy:', accuracy)
Explanation: Check out the predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
10,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1: Write a program that reads the user's name from the keyboard (e.g. Mr. right), asks the user to enter their birth month and day, and determines the user's zodiac sign. If the user is, say, a Taurus, the program prints: Mr. right, you are a Taurus with a strong personality!
Step1: Exercise 2: Write a program that reads two integers m and n (with n not equal to 0) from the keyboard and asks the user what they want to do: if they ask for a sum, compute and print the sum from m to n; if they ask for a product, compute and print the product from m to n; if they ask for a remainder, compute and print the remainder of m divided by n; otherwise compute and print the integer division of m by n.
Step2: Exercise 3: Write a program that gives protective advice based on Beijing's PM2.5 smog reading. For example, when the PM2.5 value is greater than 500, it should suggest turning on an air purifier, wearing an anti-smog mask, and so on.
Step3: Exercise 4: Convert a singular English word to its plural form. Given an English verb in singular form as input, produce its plural form, or give a suggestion for converting singular to plural (hint: the some_string.endswith(some_letter) function can check which characters a string ends with).
Step4: Exploratory exercise: Write a program that can display a blank line on the screen.
Step5: Challenge exercise: Write a program that lets the user enter several integers, then finds and prints the second-largest of those integers. | Python Code:
name = input("请输入您的姓名:")
date = float(input("请输入您出生的月份.日期:"))
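# Note: the birthday is compared as a plain float in month.day form, so single-digit days need a leading zero (e.g. enter 4.05 for April 5; typing 4.5 would be read as 4.50).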
if 3.21 <= date <= 4.19:
print(name,",你是非常有性格的白羊座!")
elif 4.20 <= date <= 5.20:
print(name,",你是非常有性格的金牛座!")
elif 5.21 <= date <= 6.21:
print(name,",你是非常有性格的双子座!")
elif 6.22 <= date <= 7.22:
print(name,",你是非常有性格的巨蟹座!")
elif 7.23 <= date <= 8.22:
print(name,",你是非常有性格的狮子座!")
elif 8.23 <= date <= 9.22:
print(name,",你是非常有性格的处女座!")
elif 9.23 <= date <= 10.23:
print(name,",你是非常有性格的天秤座!")
elif 10.24 <= date <= 11.22:
print(name,",你是非常有性格的天蝎座!")
elif 11.23 <= date <= 12.21:
print(name,",你是非常有性格的射手座!")
elif 1.20 <= date <= 2.18:
print(name,",你是非常有性格的水瓶座!")
elif 2.19 <= date <= 3.20:
print(name,",你是非常有性格的双鱼座!")
else:
print(name,",你是非常有性格的摩羯座!")
Explanation: Exercise 1: Write a program that reads the user's name from the keyboard (e.g. Mr. right), asks the user to enter their birth month and day, and determines the user's zodiac sign. If the user is, say, a Taurus, the program prints: Mr. right, you are a Taurus with a strong personality!
End of explanation
m = int(input("请输入一个整数:"))
n = int(input("请输入一个整数:"))
x = input("请问您想做什么运算?加和运算请按'+',乘积运算请按'*',求余运算请按'%',其他运算请按'//',回车结束。")
if x == '+':
    if m > n:
        m, n = n, m  # make sure m <= n before summing
    print((m + n) * (n - m + 1) / 2)  # arithmetic-series sum of m..n
elif x == '*':
    if m > n:
        m, n = n, m  # make sure m <= n before multiplying
    total = 1
    for i in range(m, n + 1):  # multiply every integer from m to n
        total *= i
    print(total)
elif x == '%':
    print(m % n)
else:
    print(m // n)
Explanation: Exercise 2: Write a program that reads two integers m and n (with n not equal to 0) from the keyboard and asks the user what they want to do: if they ask for a sum, compute and print the sum from m to n; if they ask for a product, compute and print the product from m to n; if they ask for a remainder, compute and print the remainder of m divided by n; otherwise compute and print the integer division of m by n.
End of explanation
x = float(input("请输入PM2.5的数值:"))
if x > 500:
print("您应该打开空气净化器,带防雾霾口罩!")
else:
print("空气质量还不错,适宜出行!")
Explanation: Exercise 3: Write a program that gives protective advice based on Beijing's PM2.5 smog reading. For example, when the PM2.5 value is greater than 500, it should suggest turning on an air purifier, wearing an anti-smog mask, and so on.
End of explanation
sp = input("请输入一个英文动词的单数形式,回车结束:")
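# Very rough pluralization rule: add 'es' to words ending in 'ch', otherwise just add 's'.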
if sp.endswith("ch"):
print(sp+"es")
else:
print(sp+"s")
Explanation: Exercise 4: Convert a singular English word to its plural form. Given an English verb in singular form as input, produce its plural form, or give a suggestion for converting singular to plural (hint: the some_string.endswith(some_letter) function can check which characters a string ends with).
End of explanation
x = input("请输入一行数字:")
if len(x) <= 100:
print("\n")
else:
print(x)
Explanation: 尝试性练习:写程序,能够在屏幕上显示空行。
End of explanation
num = int(input("请输入整数的个数,回车结束:"))
if num < 2:
print("输入个数不能少于2,请您重新输入:")
else:
m = int(input("请输入一个整数,回车结束:"))
n = int(input("请再输入一个整数,回车结束:"))
if m >= n:
max_number = m
mid_number = n
else:
max_number = n
mid_number = m
i = 2
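    # Scan the remaining numbers, keeping max_number as the largest and mid_number as the second-largest seen so far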
while i < num:
x = int(input("请再输入一个整数,回车结束:"))
if mid_number < x < max_number:
mid_number = x
if x > max_number:
mid_number = max_number
max_number = x
i += 1
print(mid_number)
Explanation: Challenge exercise: Write a program that lets the user enter several integers, then finds and prints the second-largest of those integers.
End of explanation |